Monday, December 30, 2019

Academic Publications 2019

Another year, another end of year review of academic productivity. As I noted in last year's entry, 2018 was the year in which modesty and self-deprecation were in vogue. I've seen less of that this year. The preference seems to be for people to announce, without noticeable shame, that they are 'thrilled' or 'humbled' to share their latest publications and related career successes.

As per usual, I try to sidestep these fashions and offer this list unapologetically for anyone who might care to read the things I have published over the past 12 months. You can access free versions of most publications (the book is the only exception) by clicking on the links provided.

The typical rules apply: I've only included items that were published for the first time in 2019. I've excluded journal articles that were previously published in an online only version and got bumped into an official journal issue this year. I've also excluded items that were accepted for publication in 2019 but haven't yet seen the light of day.


Peer-reviewed Journals

Book Chapters

Friday, December 27, 2019

Some recent media and podcasts

Regular readers will know that I have been shilling for my book Automation and Utopia for the past couple of months. In that vein, I did two recent podcasts on the book and related topics.

  • The first was on Mike Hagan's 'Radio Orbit' show. This was a fun and wide-ranging interview. It was recorded via phone so my voice is a bit muffled but overall it's probably one of my better interview performances. You can download the episode here.

  • The second was on Matt Ward's 'The Disruptors' podcast. This one focuses a lot on the likelihood of automation in the workplace and Matt plays a good devil's advocate on some of my claims. You can listen to it here or watch a video version (which I was not aware was being recorded) here.

This is a bit more out of date but my lecture 'Mass Surveillance, Artificial Intelligence and New Legal Challenges' was featured in a couple of news stories in Ireland, if you are interested, including one report from The Irish Times. Unrelated to this, I was also briefly quoted in this story about the ethics (and law) of people creating 3D avatars of celebs and exes for sexual purposes.

Finally, for some unknown reason, I was featured on this list of 30 people to follow in Europe on AI. I'm not sure what the methodology was but it is nice to be featured nonetheless.

Tuesday, December 17, 2019

67 - Rini on Deepfakes and the Epistemic Backstop


In this episode I talk to Dr Regina Rini. Dr Rini currently teaches in the Philosophy Department at York University, Toronto where she holds the Canada Research Chair in Philosophy of Moral and Social Cognition. She has a PhD from NYU and before coming to York in 2017 was an Assistant Professor / Faculty Fellow at the NYU Center for Bioethics, a postdoctoral research fellow in philosophy at Oxford University and a junior research fellow of Jesus College Oxford. We talk about the political and epistemological consequences of deepfakes. This is a fascinating and timely conversation.

You can download this episode here or listen below. You can also subscribe to the podcast on Apple Podcasts, Stitcher and a variety of other podcasting services (the RSS feed is here).

Show Notes

  • 0:00 - Introduction
  • 3:20 - What are deepfakes?
  • 7:35 - What is the academic justification for creating deepfakes (if any)?
  • 11:35 - The different uses of deepfakes: Porn versus Politics
  • 16:00 - The epistemic backstop and the role of audiovisual recordings
  • 22:50 - Two ways that recordings regulate our testimonial practices
  • 26:00 - But recordings aren't a window onto the truth, are they?
  • 34:34 - Is the Golden Age of recordings over?
  • 39:36 - Will the rise of deepfakes lead to the rise of epistemic elites?
  • 44:32 - How will deepfakes fuel political partisanship?
  • 50:28 - Deepfakes and the end of public reason
  • 54:15 - Is there something particularly disruptive about deepfakes?
  • 58:25 - What can be done to address the problem?

Relevant Links

Thursday, December 12, 2019

What causes moral change? Some reflections on Appiah's Honour Code

Chinese Foot Binding

Morality changes over time. Once upon a time, racism, sexism, and torture were widely practiced and, in some cases, celebrated. None of these practices has been completely eliminated, but there has been a significant change in our moral attitudes toward them. The vast majority of people now view them as unacceptable. What causes this kind of moral change?

In his book, The Honor Code, Kwame Anthony Appiah examines three historical moral revolutions (and one ongoing revolution) and comes up with an answer. He argues that changing perceptions of honour, as opposed to changes in moral belief, do most of the work. Indeed, he argues that in each of the three cases he examines, both moral argumentation and legal norms had already condemned the practices in question. The practices persisted in spite of this. It was only when the practices were perceived to be dishonourable that the moral revolutions really took effect.

I recently read (well, listened to) Appiah’s book. I found it a fascinating exploration of moral change, but I couldn’t figure out whether its central thesis was interesting or not. I couldn’t shake the sense that there was something trivial about it. In what follows, I want to bring some order to my thoughts and see whether my initial impression is wrong. Is there, in fact, something insightful about Appiah’s argument? I will give an equivocal assessment in what follows.

1. Preliminary Thoughts about the Mechanics of Moral Change
Before I get into Appiah’s argument, I want to make a few general comments about the nature of moral change. Morality can be thought of as a system of propositions and imperatives. It consists of propositions describing the value of certain actions, events and states of affairs, e.g. “pleasure is good”, “pain is bad”, “friendship is good”, “torture is bad” and so forth. It consists of imperatives telling people to do or forbear from doing certain things, e.g. “don’t torture people”, “do give money to charity” and so forth.

The system of propositions and imperatives that constitute morality can be thought of in purely intellectual terms. That is to say, you might think of a moral system as something that is offered to us in order to garner our intellectual assent: we are asked to ‘believe’ in the propositions and ‘accept’ the imperatives, or not. That said, most people agree that a moral system ought to have some practical impact as well. If it is really a system of morality, it ought to present us with reasons for action and ought to change our actual behaviour. To put it more succinctly, most people think that morality is both an intellectual and practical affair.

What then is moral change? Presumably, moral change involves changes in the collection of propositions and imperatives to which we offer our intellectual assent, i.e. changes to what we believe is good and bad or right and wrong, as well as changes in our moral behaviour. Full moral change would require both; partial moral change would involve one or the other.

The critical question then is: what causes changes in the intellectual and practical aspects of morality? Why do people no longer believe that torture is morally acceptable? Why is the practice no longer so prevalent? Broadly speaking, there are two drivers of moral change: intellectual and material. Intellectual drivers of change are ideas or concepts that change how we think about the system of morality. Perhaps someone presents a really good argument for thinking that torture is not morally permissible and this leads us to change our mind about it. That would be an intellectual driver of change at work. Material drivers of change are changes to the material or technological conditions of existence that have implications for moral beliefs and practices. For example, technology that makes it easier to extract information from people without causing tremendous pain might reduce the incentive to use certain kinds of torture, which might in turn affect our moral beliefs and practices concerning the permissibility of torture. That would be a material driver of change at work.

The distinction between intellectual and material drivers of change is not, of course, sharp. There are probably cases in which it is difficult to decide whether a given driver counts as intellectual or material. This is particularly true if you are a reductive materialist or idealist who thinks there is no ultimate distinction between mind and matter.

If we ignore this philosophical complication, however, my guess would be that most episodes of moral change involve a combination of both intellectual and material drivers of change (operating in a complex feedback loop). For present purposes, I will largely ignore material drivers of change because they do not feature heavily in Appiah’s account (although they do lurk in the background). Instead, I will focus on different kinds of intellectual drivers of moral change. Appiah’s account, it turns out, focuses on a distinction between moral and non-moral intellectual drivers of change.

What is this distinction? In the example I just gave I assumed that the intellectual driver of moral change was itself part of the system of morality. But this need not always be the case. Non-moral ideas and incentives might also affect moral beliefs and practices. Consider the following example. Suppose one day I decide to read Peter Singer’s famous essay on famine and the duty to give more money to charities in the third world. I carefully consider his arguments and come to believe they are correct. The following day I radically change my moral practices and start giving more money to charity. In this case, we have an intellectual driver of change that is clearly moral in nature: I was persuaded by reading Singer’s moral arguments. Contrast that with the following case. One of my close friends is an avowed Singerite who routinely gives half his money to charity. I really like my friend. I like the people he hangs out with and would like to win his respect. Consequently, even though I haven’t read any of Singer’s moral arguments, I decide to give half my money to charity as well. In this case, we have a non-moral intellectual driver of change: I change my behaviour because I care about winning my friend’s respect.

The central thesis of Appiah’s book is that a particular kind of non-moral driver of intellectual change — our conception of honour — plays an outsized role in changing moral behaviour.

2. Honour Worlds and Moral Revolutions
What, then, is honour? Appiah has a somewhat intricate theory that underlies his book. It starts by stipulating that honour is a form of respect. You are honourable if you are respected by a relevant cohort of your peers; you are dishonourable if you are not. Following Stephen Darwall, Appiah goes on to suggest that there are two forms of respect:

Recognition Respect: The kind of respect that comes from being recognised as an equal member of a given social or cultural group. People with recognition respect ‘belong’ to their relevant social groups and hence have equal standing among their peers.

Appraisal Respect: The kind of respect that comes from being recognised as having superior capacities to one’s peers. People with appraisal respect are esteemed in the eyes of other members of their social group for their prowess, virtue, ability and so on.

Honour attaches to both kinds of respect but they are different in nature. Recognition respect is flat and egalitarian: once you have it, you have the same amount as everyone else. Appraisal respect is hierarchical and inegalitarian: you can be more or less esteemed, depending on your capacities. This is important because it means that recognition respect is non-competitive and non-zero-sum (everyone can have the same amount) whereas appraisal respect is highly competitive and zero-sum.

Appiah goes on to argue that each of us belongs to (or would like to belong to) an ‘honour world’. Honour worlds consist of people who share basic recognition respect and compete for appraisal respect. Honour worlds can come in a variety of shapes and sizes. A family, a tribe, a religion, a profession, or even a social class (e.g. the nobility or the working class) could constitute an honour world. What is crucial about honour worlds is that they are defined by an ‘honour code’, i.e. a set of rules or norms that tells members of the honour world what they must do to win or maintain the respect of their peers.

Honour worlds are not fixed or immutable. Their boundaries are always being contested. Some people find themselves excluded from an honour world and fight for inclusion. Some people find themselves being pushed out because they failed to live up to the honour code. Honour worlds can expand and contract, depending on the circumstances.

Honour codes are also not fixed or immutable. What you have to do to win the respect of your peers can change from time to time. Indeed, it is the very fact that honour codes can change — coupled with the fact that honour worlds can expand and contract — that is at the heart of Appiah’s argument. His claim is that changing conceptions of what you must do to win honour, along with changes in the structure of given honour worlds, lie at the heart of several important moral revolutions.

3. Appiah’s Three Moral Revolutions
Appiah focuses on three moral revolutions that took place over the course of the 19th and early 20th century. These revolutions were: the abolition of duelling amongst the British nobility; the end of foot-binding in China; and the end of slavery in the British empire. The detailed discussion of each revolution, its causes and its consequences, is the highlight of Appiah’s book. I learned a lot reading about each revolution. I won’t be able to do justice to the richness of Appiah’s discussion here. All I can do is summarise the key points, highlighting what Appiah sees as the central role that honour played in facilitating all three revolutions.

Let’s start with the example of duelling. Pistol duelling was once a popular way for members of the aristocracy to resolve disputes concerning honour. If a gentleman thought his honour was being impugned by another gentleman, he would challenge him to a duel. Appiah starts his discussion of duelling with the famous case of the Duke of Wellington and the Earl of Winchilsea. The Duke, who was at the time the Prime Minister of the UK, challenged Winchilsea to a duel because the latter wrote an article accusing Wellington of being dishonest when it came to Catholic emancipation. Appiah then documents the fascinating history of duelling among members of the aristocracy in England and France. It was not uncommon at the time for members of this social group to participate in duels. Indeed, there were thousands of documented cases in France and several previous British prime ministers and ministers of state, prior to Wellington, had participated in duels whilst in office. The practice continued despite the fact that (a) the official churches spoke out against it; (b) it was illegal; and (c) many Enlightenment intellectuals argued against it on moral grounds.

Appiah then wonders: why did duelling come to an end if (a), (b) and (c) weren't enough? His claim is that changing conceptions of honour played a key role. For starters, the practice itself was faintly ridiculous. There were lots of odd rules and habits in duelling that allowed you to get out of actually killing someone (neither Winchilsea nor Wellington were injured in their duel). This ridiculousness became much more apparent in the age of mass media, when reports of duels were widely circulated among the nobility and beyond. This also drew attention to the scandalous and hypocritical fact that the aristocracy were not abiding by the law. So long as duelling was primarily a game played by the aristocracy, unknown to the masses, it could be sustained as a system for maintaining honour. But once it was exposed and discussed in the era of emerging mass media, this was more difficult to sustain. This changed the conception of what was honourable. The duel was no longer a way of sustaining and protecting honour; it was a way of looking ridiculous and hypocritical. In short, it became dishonourable.

The next example is that of foot-binding in China. This was the horrendously painful practice of tightly binding and, essentially, breaking women’s feet in order to change their shape (specifically so that they appeared to be smaller and pointier). Appiah explores some fascinating socio-cultural explanations of why this practice took root. It seems that foot-binding began as a class/status thing. It was upper class women (consorts and members of the imperial harem) who bound their feet. This may have been because foot-binding was a way to control the sexual fidelity of these women. It is difficult for women whose feet are bound to walk without assistance. Thus, one way for the emperor to ensure the sexual fidelity of his harem was to literally prevent them from walking around. Whatever the explanation, once it became established in the upper classes, the practice of foot-binding spread ‘downwards’. Appiah argues that this was because it was a way of signalling one’s membership in an honour world.

As with duelling, foot-binding was frequently criticised by intellectuals in China and was widely recognised as being painful. Nevertheless, it persisted for hundreds of years before quickly dropping out of style in the late-19th and early 20th century. Why? Appiah argues that it was due to the impact of globalisation and the changing perception of national honour. As industrialisation sped up, and the ships and armies of other countries arrived at their door, it became apparent to the Chinese elite that China was losing global influence to other countries — Britain, America and Japan being the obvious examples. These were all cultures that did not practice foot-binding. There was also, at the same time, an influx of Western religious missionaries to China, who were keen on changing the practice. They focused their efforts on the upper classes and tried to persuade them that there was something dishonourable about the practice. They argued it brought shame to the Chinese nation. These missionaries embedded themselves in Chinese culture, and succeeded in getting members of the Chinese nobility to accept that it would be dishonourable to bind the feet of their daughters and to marry their sons to a woman whose feet were bound. This led to a rapid decline in the practice and its eventual abolition. It was, consequently, changing perceptions of honour, particularly national honour, that did for foot-binding in China.

The final historical case study is the abolition of slavery in the British empire. This took place in the early part of the 19th century. I’ll be briefer with this example. Appiah argues that the moral revolution around slavery came in two distinct phases. The first took place largely among the nobility and upper middle class, where abolitionists argued that the practice brought shame on the British Empire. The second phase, which was possibly more interesting, took place among members of the working class. One of the distinctive features of slavery as a practice was that it signalled that certain people did not belong to an honour world: that they were not owed basic recognition respect. These people were slaves and one of the reasons they were denied recognition respect was because they were manual workers. There was, consequently, the tendency to assume that there was something dishonourable about manual work. This changed in the early 19th century because of the rise of the working class. As working class identity became a thing, and as members of the working class wanted to be recognised as honour-bearing citizens, they pushed for the abolition of slavery because it brought dishonour to the kind of work they did.

In addition to these three historical revolutions, Appiah also discusses one ongoing moral revolution: the revolution in relation to honour killing in Pakistan. In Pakistan, honour killing is illegal and is often condemned by religious authorities as being contrary to Islamic teachings. Despite this, the practice persists and politicians and authorities often turn a blind eye to it. Appiah argues that this is because of the honour code that exists in certain communities. In order for this to change there will need to be a revolution in how honour is understood in those communities. Appiah documents some of the efforts to do that in the book.

4. Is Appiah’s theory an interesting one?
That’s a quick overview of Appiah’s theory. Let’s now take stock. What is Appiah really saying? As I see it, his theory boils down to two main claims:

Claim 1: Changes to moral beliefs and practices (at least in the cases Appiah reviews) are primarily driven by changing perceptions and understandings of honour.

Claim 2: Honour is not necessarily moral in nature. That is to say, what people think is honourable is not necessarily the same thing as what they think is morally right.

Are these claims interesting? Do they tell us something significant about the nature of moral revolution? Let’s take each in turn.

Claim 1 strikes me as being relatively plausible but not exactly groundbreaking. All it really says is that one of the things we care most about is how we are perceived by our peer groups. We want them to like us and think well of us and so we tend to behave in ways that will raise us in their eyes. This is what honour and the honour code boil down to (particularly since Appiah defines honour in terms of recognition respect and appraisal respect).

I’m sure this is true. Humans are a social species and we care about our social position. One of the more provocative books of recent years is Kevin Simler and Robin Hanson’s book The Elephant in the Brain. In this book, Simler and Hanson argue that the majority of our behaviour can be understood as a form of social signalling. We do things not for their intrinsic merits nor for the reasons we often state but, rather, to send signals to our peers. Although Simler and Hanson push an extreme form of this social signalling model of human behaviour, I’m confident that signalling of this sort is a major driver of human moral behaviour. But does that make it an interesting idea? Not necessarily. For one thing, it may be the case that there are many other critical drivers of moral change that are not adequately covered by Appiah’s case studies. Honour may be the catalyst in his selected cases but other factors may be more important in other cases. For another thing, even if honour is the critical variable in some cases, we can’t do anything with this information unless we can say something useful about the kinds of things that honour tends to latch onto. Are there some universal or highly stable laws of what counts as honourable or is it all arbitrary?

This is where Claim 2 becomes important. This is, in many ways, the distinctive part of Appiah’s thesis: that honour is not necessarily moral in nature. Sometimes people have moralised honour codes — i.e. codes in which that which is perceived to be honourable is also understood to be moral — but sometimes they don’t. Indeed, each of the three historical case studies illustrates this. In all three cases, moral arguments had already been marshalled against the practices of duelling, foot-binding and slavery. It was the recalcitrant honour code that was the impediment to moral change.

But let’s pause for a moment and think about this in more detail. When Appiah says that honour is not necessarily moralised, is he saying that from his own perspective — i.e. that of a 21st century outsider to the honour codes he is analysing — or is he saying it from some other, universally objective stance where there is a single moral code that is open to both the insiders and outsiders to a given honour world? The answer could make a big difference. For Claim 2 to be true (and interesting) it would have to be the case that insiders to a given honour code know that their duties under their honour code are in conflict with their moral duties, and I’m just not sure that this is the case. I suspect many insiders to a given honour world think that their honour code is already moralised, i.e. that following the honour code means doing the morally right thing. I suspect there are also others who think they have conflicting moral duties: those specified by the honour code and those specified by some external source (e.g. the law). But that in itself doesn’t mean that the honour code is perceived by them to be amoral or immoral. Such conflicts of duty are a standard part of everyday morality anyway. Beyond that, I would guess that it is relatively rare for people to think that their honour code is completely immoral but that they are bound to follow it regardless.

All that said, Appiah’s theory might be interesting insofar as it gives us some guidance as to how we can change moral practices. Appiah suggests that moral criticism and argumentation by itself is going to be relatively ineffective, as is top-down legal and regulatory reform, particularly when it is pushed on an honour world from the outside. So if we find a set of beliefs and practices that are morally objectionable, but honourable, we should approach their reform in a different way. Specifically, we should try any of the following three things (alone or in combination):

  • Make moral conduct honourable, i.e. moralise the honour code.

  • Become an insider to an existing honour code and show how, within the terms of that code, some given conduct is, in fact, dishonourable (e.g. Muslim critics of honour killing can show how the practice conflicts with the superior duty to the rules of Islam).

  • Expand the honour world (i.e. the group identity-circle of respect) to include those with a moralised honour code and then try to reform the honour code to match the moralised ideal (e.g. what happened in China with foot-binding).

This might, indeed, be sound advice. Arguing from an ivory tower is unlikely to start a moral revolution.

Monday, December 9, 2019

Ten Years of Philosophical Disquisitions

Once upon a time, I used to mark yearly anniversaries on this blog. I stopped doing that a few years ago but since today is the 10th anniversary of this blog I thought I should mark the occasion. For better or worse, this blog has been a major part of my life for the last 10 years. I have published over 1100 posts on it. (This was the first one). The blog itself has received just over 4 million pageviews. At the moment it is averaging about 70,000 pageviews per month. Given the way the internet works, I'm guessing about 90% of those pageviews are robots, but in light of my own stated philosophical views, I guess I shouldn't be too concerned about that!

As I have said before, I don't do any of this in the hope of getting readers. I do it mainly as an outlet for my own curiosity and understanding. That may well sound selfish, but I believe that if I didn't focus on the intrinsically fascinating nature of what I was reading and writing I wouldn't have sustained this for 10 years. Fame and fashion are, after all, fickle things.

That said, I do appreciate the fact that so many people seem to have derived some value from the things I have written on here. It amazes me that even one person has read it, never mind hundreds of thousands.

Anyway, in light of the occasion, here are the ten most popular posts from the past ten years:

The most popular post is the one on intoxicated consent to sexual relations. I guess that says something about what gets popular on the internet. One thing that I find interesting about this list is that the philosophy of religion doesn't feature much on it. This is despite the fact that the majority of the articles I wrote in the first few years were largely focused on that topic.

Friday, December 6, 2019

66 - Wong on Confucianism, Robots and Moral Deskilling

Pak-Hang Wong

In this episode I talk to Dr Pak-Hang Wong. Pak is a philosopher of technology and works on ethical and political issues of emerging technologies. He is currently a research associate at the Universität Hamburg. He received his PhD in Philosophy from the University of Twente in 2012, and then held academic positions in Oxford and Hong Kong. In 2017, he joined the Research Group for Ethics in Information Technology, at the Department of Informatics, Universität Hamburg. We talk about the robotic disruption of morality and how it affects our capacity to develop moral virtues. Pak argues for a distinctive Confucian approach to this topic and so provides something of a masterclass on Confucian virtue ethics in the course of our conversation.

You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Stitcher and a range of other podcasting services (the RSS feed is here).

Show Notes

  • 0:00 - Introduction
  • 2:56 - How do robots disrupt our moral lives?
  • 7:18 - Robots and Moral Deskilling
  • 12:52 - The Folk Model of Virtue Acquisition
  • 21:16 - The Confucian approach to Ethics
  • 24:28 - Confucianism versus the European approach
  • 29:05 - Confucianism and situationism
  • 34:00 - The Importance of Rituals
  • 39:39 - A Confucian Response to Moral Deskilling
  • 43:37 - Criticisms (moral silencing)
  • 46:48 - Generalising the Confucian approach
  • 50:00 - Do we need new Confucian rituals?

Relevant Links

Wednesday, December 4, 2019

Will we ever have fully autonomous vehicles? Some reasons for pessimism

What is the future of the automotive industry? If you’ve been paying attention over the past decade, you’ll know the answer: self-driving (a.k.a. autonomous) vehicles. Instead of relying on imperfect, biased, lazy and reckless human beings to get us from A to B, we will rely on sophisticated and efficient computer programs. This future may not be that far away. We already rely on computers to fly planes and drive trains. All we will be doing is extending our reliance on them to the roads and public highways.

There are, of course, some technical hurdles to overcome. The public highways are more unpredictable than the skies and railways. But impressive strides have been made with driverless technology in the recent past and it doesn’t seem implausible to think that it will become widespread within the next 10-15 years. Once it does, the benefits will be great — at least if you believe the hype. There will be fewer accidents, and we will all have more time to focus on the things we love to do during our daily commutes: catch up on work or TV, post to social media and so on. There will also be other beneficial side effects. Less space will need to be allocated to carparks in our cities and towns, allowing us to create more pleasant urban living spaces; the traffic system might become more efficient and less crowded; there may even be a drop in light pollution.

Will any of this come to pass? In this article, I want to argue for a slightly unusual form of scepticism about the future of self-driving vehicles. This scepticism has two elements to it. First, I will argue that a combination of ethical, legal and strategic factors will encourage us not to make and market fully autonomous vehicles. Second, I will argue that despite this disincentive, many of us will, in fact, treat vehicles as effectively fully autonomous. This could be very bad for those of us expected to use such vehicles.

I develop this argument in three stages. I start with a quick overview of the six different ‘levels’ of automated driving that have been proposed by the Society of Automotive Engineers. Second, I argue that concerns about responsibility and liability ‘gaps’ may cause us to get stuck on the middle levels of automated driving. Third, and finally, I consider some of the consequences of this.

1. Getting Stuck: The Levels of Autonomous Driving
If you have spent any time reading up about autonomous vehicles you will be familiar with the ‘levels’ of autonomy framework. First proposed and endorsed by the Society of Automotive Engineers, the framework tries to distinguish between different types of vehicle autonomy. The diagram below illustrates the framework.

This framework has been explained to me in several different ways over the years. I think it is fair to say that nobody thinks the different levels are obvious and discrete categories. The assumption is that there is probably a continuum of possible vehicles ranging from the completely non-autonomous at one end of the spectrum* to the fully autonomous at the other. But it is hard for the human mind to grasp a smooth continuum of possibility and so it helps if we divide it up into discrete categories or, in this case, levels.

What of the levels themselves? The first level — so-called ‘Level 0’ — covers all traditional vehicles: the ones where the human driver performs all the critical driving functions like steering, braking, accelerating, lane changing and so on. The second level (Level 1) covers vehicles with some driver assist technologies, e.g. enhanced or assisted braking and parking. Many of the cars we buy nowadays have such assistive features. Level 2 covers vehicles with some automated functions, e.g. automated steering, acceleration and lane changing, but in which the human driver is still expected to play an active supervisory and interventionist role. Tesla’s enhanced autopilot is often said to be an example of Level 2 automation. The contract Tesla users sign when they download the autopilot software stipulates that they must be alert and willing to take control at all times. Level 3 covers vehicles with more automated functionality than Level 2. It is sometimes said to involve ‘conditional autonomy’, which means the vehicle can do most things by itself, but a human is still expected to be an alert supervisor of the vehicle and has to intervene when requested to do so by the vehicle (usually if the vehicle encounters some situation involving uncertainty). Waymo’s vehicles are sometimes claimed to be Level 3 vehicles (though there is some dispute about this). Level 4 covers vehicles with the capacity for full automation, but with a residual role for human supervisors. Finally, Level 5 covers vehicles that involve full automation, with no role for human intervention.
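For readers who prefer to see the taxonomy compressed, here is a minimal sketch of the levels just described as a Python enum. This is my own illustrative summary, not anything taken from the SAE standard itself; the names and helper function are invented for clarity.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """Illustrative summary of the SAE automation levels described above."""
    L0_NO_AUTOMATION = 0  # human performs all critical driving functions
    L1_DRIVER_ASSIST = 1  # some assist features, e.g. assisted braking/parking
    L2_PARTIAL = 2        # automated steering/acceleration; active human supervision
    L3_CONDITIONAL = 3    # vehicle drives itself; human must take over on request
    L4_HIGH = 4           # capacity for full automation; residual human role
    L5_FULL = 5           # full automation, no role for human intervention

def human_must_stay_alert(level: SAELevel) -> bool:
    """Below Level 5, some handover protocol keeps the human 'in the loop'."""
    return level < SAELevel.L5_FULL
```

The point of the sketch is simply that the boolean flips only at Level 5: every level below it presupposes a human who can be called back into the loop.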

The critical point is that all the levels of automation between 1 and 4 (and especially between 2-4) assume that there is an important role for human ‘drivers’ in the operation of autonomous vehicles. That is to say, until we arrive at Level 5 automation, the assumption is that humans will never be ‘off the hook’ or ‘out of the loop’ when it comes to controlling the autonomous vehicle. They can never sit back and relax. They have to be alert to the possibility of taking control. This, in turn, means that all autonomous vehicles that fall short of Level 5 will have to include some facility or protocol for handing over control from the vehicle to the human user, in at least some cases.

While this ‘levels’ of automation model has been critiqued, it is useful for present purposes. It helps me to clarify my central thesis, which is that there are important ethical, legal and strategic reasons why we may never get to Level 5 automation. This means we are most likely to get stuck somewhere around Levels 3 and 4 (most likely Level 3), at least officially. Some people will say that this is a good thing because they think it is good for humans to exercise ‘meaningful control’ over autonomous driving systems. But I think it might be a bad thing because people will tend to treat these vehicles as effectively fully autonomous.

Let me now explain why I think this is the case.

2. Why we might get stuck at Level 3 or 4
The argument for thinking that we might get stuck at level 3 or 4 is pretty straightforward and I am not the first to make it. In the debate about autonomous vehicles, one of the major ethical and legal concerns arising from their widespread deployment is that they might create responsibility or liability gaps. The existence, or even the perceived existence, of these gaps creates an incentive not to create fully autonomous vehicles.

Our current legal and ethical approach to driving assumes that, in almost all cases, the driver is responsible if something goes wrong. He or she can be held criminally liable for reckless or dangerous driving, and can be required to pay compensation to the victims of any crashes resulting from this. The latter is, of course, usually facilitated through a system of insurance, but, except in countries like New Zealand, the system of insurance still defaults to the assumption of individual driver responsibility. There are some exceptions to this. If there was a design defect in the car then liability may shift to the manufacturer, but it can be quite difficult to prove this in practice.

The widespread deployment of autonomous vehicles throws this existing system into disarray because it raises questions as to who or what is responsible in the event of an accident. Is the person sitting in the vehicle responsible if the autonomous driving program does something wrong? Presumably not, if they were not the ones driving the car at the time. This implies that the designers and manufacturers should be held responsible. But what if the defect in the driving program was not reasonably foreseeable or if it was acquired as a result of the learning algorithm used by the system? Would it be fair, just and reasonable to impose liability on the manufacturers in this case? Confusion as to where responsibility lies in such cases gives rise to worries about responsibility ‘gaps’.

There are all sorts of proposals to plug the gap. Some people think it is easy enough to ‘impute’ driverhood to the manufacturers or designers of the autonomous vehicle program. Jeffrey Gurney, for example, has made this argument. He points out that if a piece of software is driving the vehicle, it makes sense to treat it as the driver of the car. And since it is under the ultimate control of the manufacturer, it makes sense to impute driverhood to them, by proxy. What it doesn’t make sense to do, according to Gurney, is to treat the person sitting in the vehicle as the driver. They are really just a passenger. This proposal has the advantage of leaving much of the existing legal framework in place. Responsibility is still applied to the ‘driver’ of the vehicle; the driver just happens to no longer be sitting in the car.

There are other proposals too, of course. Some people argue that we should modify existing product liability laws to cover defects in the driving software. Some favour applying a social insurance model to cover compensation costs arising from accidents. Some like the idea of extending ‘strict liability’ rules to prevent manufacturers from absolving themselves of responsibility simply because something wasn’t reasonably foreseeable.

All these proposals have some merit but what is interesting about them is that (a) they assume that the responsibility ‘gap’ problem arises when the car is operating in autonomous mode (i.e. when the computer program is driving the car) and (b) that in such a case the most fair, just and reasonable thing to do is to apply liability to the manufacturers or designers of the vehicle. This, however, ignores the fact that most autonomous vehicles are not fully autonomous (i.e. not level 5 vehicles) and that manufacturers would have a strong incentive to push liability onto the user of the vehicle, if they could get away with it.

This is exactly what the existence of Levels 2 to 4 autonomous driving enables them to exploit. By designing vehicles in such a way that there is always some allowance for handover of control to a human driver, manufacturers can create systems that ‘push’ responsibility onto humans at critical junctures. To repeat the example already given, this is exactly what Tesla did when it initially rolled out its autopilot program: it required users to sign an agreement stating that they would remain alert and ready to take control at all times.

Furthermore, it’s not just the financial and legal incentives of the manufacturers that might favour this set-up. There are also practical reasons to favour this arrangement in the long run. It is a very difficult engineering challenge to create a fully autonomous road vehicle. The road environment is too unpredictable and messy. It’s much easier to create a system that can do some (perhaps even most) driving tasks but leave others to humans. Why go to the trouble of creating a fully autonomous Level 5 vehicle when it would be such a practical challenge and when there is little financial incentive for doing so? Similarly, it might even be the case that policy-makers and legal officials favour sticking with Levels 2 to 4. Allowing for handover to humans will enable much of the existing legal framework to remain in place, perhaps with some adjustments to product liability law to cover software defects. Drivers might also like this because it allows them to maintain some semblance of control over their vehicles.

That said, there are clearly better and worse ways to manage the handover from computer to human. One of the problems with the Tesla system was that it required constant vigilance and supervision, and potentially split-second handover to a human. This is tricky since humans struggle to maintain concentration when using automated systems and may not be able to do anything with a split-second handover.

Some engineers refer to this as the ‘unsafe valley’ problem in the design of autonomous vehicles. In a recent paper on the topic, Frank Flemisch and his colleagues have proposed a way to get out of this unsafe valley by having a much slower and safer system of handover to a human. Roughly, they call for autonomous vehicles that handle the more predictable driving tasks (e.g. driving on a motorway), have a long lead-in time for warning humans when they need to take control of the vehicle, and go to a ‘safe state’ (e.g. slow down and pull in to the hard shoulder or lay-by) if the human does not heed these warnings.
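The logic of this slower, safer handover can be sketched as a simple decision function. To be clear, the function name, states and the one-minute warning threshold below are all invented for illustration; they are not taken from Flemisch and colleagues' paper, which should be consulted for the actual proposal.

```python
# Rough sketch of the 'slow handover' idea: warn the human well in advance,
# and fall back to a safe state rather than demanding a split-second takeover.
# All names, states and thresholds are illustrative assumptions, not the
# authors' specification.

def handover_step(seconds_since_warning: float,
                  human_has_taken_control: bool,
                  warning_lead_time: float = 60.0) -> str:
    """Return the vehicle's mode at one step of an attempted handover."""
    if human_has_taken_control:
        return "manual"       # handover succeeded; human is driving
    if seconds_since_warning < warning_lead_time:
        return "warning"      # long lead-in: keep prompting the human
    return "safe_state"       # e.g. slow down and pull into the hard shoulder
```

The contrast with the Tesla-style design is that the third branch exists at all: if the human never responds, the vehicle degrades gracefully instead of depending on instant vigilance.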

This model of autonomous driving is interesting. If it works, it could make Level 3 type systems much safer. But either way, the momentum seems to be building toward a world in which we never get to fully autonomous vehicles. Instead we get stuck somewhere in between.

3. The Consequences of Getting Stuck
Lots of people will be happy if we get stuck at Level 3 or 4. Getting stuck means that we retain some illusion of meaningful human control over these systems. Even if the motives for getting stuck are not entirely benevolent, it still means that we get some of the benefits of the technology, while at the same time respecting the dignity and agency of the human beings who use these systems. Furthermore, even if we might prefer it if manufacturers took more responsibility for what happened with these systems, getting stuck at Level 3 or 4 means we still get to live in a world where some human is in charge. That sounds like a win-win.

But I’m a little more sceptical. I think getting stuck might turn out to be a bad thing. To make the case for this I will use the legal distinction between de jure and de facto realities. The de jure reality is what the law says should be the case; the de facto reality is what actually happens on the ground. For example, it might say in a statute somewhere that people who possess small quantities of recreational drugs are doing something illegal and ought to be sentenced to jail as a result. That’s the de jure reality. In practice, it might turn out that the legal authorities turn a blind eye to anyone that possesses a small quantity of such drugs. They don’t care because they have limited resources and bigger fish to fry. So the de facto reality is very different from the de jure reality.

I think a similar divergence between the official, legal, reality and what’s happening on the ground might arise if we get stuck at Level 3 or 4. The official position of manufacturers might be that their vehicles are not fully autonomous and require human control in certain circumstances. And the official legal and policy position might be that fully autonomous vehicles cannot exist and that manufacturers have to create ‘safe’ handover systems to allow humans to take control of the vehicles when needs be. But what will the reality be on the ground? We already know that drivers using Level 2 systems flout the official rules. They sit in the back seat or watch movies on their phones when they should be paying attention to what is happening (they do similar things in non-autonomous vehicles). Is this behaviour likely to discontinue even in a world with safer handover systems? It’s hard to see why it would. So we might end up with a de facto reality in which users treat their vehicles as almost fully autonomous, and a de jure world in which this is not supposed to happen.

Here’s the crucial point: the users might be happy with this divergence between de facto and de jure reality. They might be happy to treat the systems as if they are fully autonomous because this gives them most of the benefits of the technology: their time and attention can be taken up by something else. And they might be happy to accept the official legal position because they don’t think that they are likely to get into an accident that makes the official legal rules apply to them in a negative way. Many human drivers already do this. How many people reading this article have broken the speed limits whilst driving, or driven while borderline on the legal alcohol limit, or driven when excessively tired? Officially, most drivers know that they shouldn’t do these things; but in practice they do them because they doubt that they will suffer the consequences. The same might be true in the case of autonomous vehicles. Drivers might treat them as close to fully autonomous because the systems are safe enough to allow them to get away with this most of the time. They discount the possibility that something will go wrong. What we end up with, then, is a world in which we have an official illusion of ‘meaningful control’ that disadvantages the primary users of autonomous vehicles, but only when something goes wrong.

Of course, there is nothing inevitable about the scenario I am sketching. It might be possible to design autonomous driving systems so that it is practically impossible for humans to flout the official rules (e.g. perhaps facial recognition technology could be used to ensure humans are paying attention and some electric shock system could be used to wake them up if they are falling asleep). It might also be possible to enforce the official position in a punitive way that makes it very costly for human users to flout the official rules (though we have been down this path before with speeding and drink-driving laws). The problem with doing this, however, is that we have to walk a very fine line. If we go too far, we might make using an autonomous vehicle effectively the same as using a traditionally human-driven vehicle and thus prevent us from realising the alleged benefits of these systems. If we don’t go far enough, we don’t resolve the problem.

Alternatively, we could embrace the idea of fully autonomous driving and try to remove the incentives to get stuck at Level 3 or 4. I’m not sure which outcome is best, but there are tradeoffs inherent in both options.

* Although I do have some qualms about referring to any car or automobile as non-autonomous since, presumably, at least some functions within the vehicle are autonomous. For example, many of the things that happen in the engine of my car happen without my direct supervision or control. Indeed, if you asked me, I wouldn’t even know how to supervise and control the engine.

Tuesday, November 26, 2019

Anticipating Automation and Utopia

On the 11th of January 2020, I will be giving a talk to the London Futurists group about my book Automation and Utopia. The talk will take place from 2 to 4pm in Birkbeck College London. The full details are available here. If you are around London on that date, then you might be interested in attending. If you know of anyone else who might be, then please spread the word.

In advance of the event, I sat down with the Chair of the London Futurists, David Wood, to chat about some of the key themes from my book. You can watch the video of our conversation above.

I can promise that the talk on the 11th won't simply be a rehash or elaboration of this conversation, nor indeed a rehash of any of my recent interviews or talks about the book. I'll be focusing on something different.

Friday, November 22, 2019

65 - Vold on How We Can Extend Our Minds With AI

Karina Vold

In this episode I talk to Dr Karina Vold. Karina is a philosopher of mind, cognition, and artificial intelligence. She works on the ethical and societal impacts of emerging technologies and their effects on human cognition. Dr Vold is currently a postdoctoral Research Associate at the Leverhulme Centre for the Future of Intelligence, a Research Fellow at the Faculty of Philosophy, and a Digital Charter Fellow at the Alan Turing Institute. We talk about the ethics of extended cognition and how it pertains to the use of artificial intelligence. This is a fascinating topic because it addresses one of the oft-overlooked effects of AI on the human mind.

You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Stitcher and a range of other podcasting services (the RSS feed is here).

Show Notes

  • 0:00 - Introduction
  • 1:55 - Some examples of AI cognitive extension
  • 13:07 - Defining cognitive extension
  • 17:25 - Extended cognition versus extended mind
  • 19:44 - The Coupling-Constitution Fallacy
  • 21:50 - Understanding different theories of situated cognition
  • 27:20 - The Coupling-Constitution Fallacy Redux
  • 30:20 - What is distinctive about AI-based cognitive extension?
  • 34:20 - The three/four different ways of thinking about human interactions with AI
  • 40:04 - Problems with this framework
  • 49:37 - The Problem of Cognitive Atrophy
  • 53:31 - The Moral Status of AI Extenders
  • 57:12 - The Problem of Autonomy and Manipulation
  • 58:55 - The policy implications of recognising AI cognitive extension

Relevant Links

Tuesday, November 19, 2019

The Case Against Righteous Anger

There is a lot of anger in the world right now. You hear it in people’s voices; you feel it in the air. Turn on a TV and what will you see? Journalists snapping questions at politicians; politicians snapping back with indignation. Dip your toe into social media and what will you read? People seething and roiling in rage. Anyone who disagrees with them is a ‘fucking idiot’, ‘garbage’, ‘worthless’. The time for quiet reflection and dialogue is over. We are at war. Anger is our fuel.

As someone raised to view anger as a bad thing, but who falls prey to it all the time, I find this to be an unwelcome development. There are, however, some who believe that anger is a good thing. There are moral philosophers, for example, who argue that anger is an essential foundation for our moral beliefs and practices — that it is an appropriate response to injustice. Amia Srinivasan, for instance, has argued that even if anger can be counterproductive it is, for victims of injustice, often ‘apt’ and we need to factor that into our understanding of injustice. Similarly, the philosopher Sally Haslanger has said that being angry is important because it helps her to care about certain political issues. Indeed, she cites the need for some anger as one reason why she quit doing certain Eastern meditative practices such as yoga:

Eventually I quit doing yoga because I found it left me too cut off from the world, especially from the political engagement that I cared so much about. I didn't want to be serene. I didn't want to be centered. Or at least not as much as my involvement in yoga then required. My anger and my intensity are an important part of who I am, and I couldn't find a way to combine them with the yoga I was doing at the time. 
(Haslanger - “What is it like to be a philosopher?”)

In his fascinating book, The Geography of Morals, Owen Flanagan takes a long hard look at this positive view of anger by contrasting it with the Buddhist/Stoic view of anger (which favours purging ourselves of anger). He does this as part of an effort to understand what different moral traditions can learn from each other.

In the remainder of this article I want to examine what Flanagan says. In particular, I want to clarify and analyse the argument he presents for thinking that the Western moral tradition (currently in thrall to righteous anger) should shift to become more like the Buddhist/Stoic tradition.

1. Identifying Different Moral Worlds
I will start by saying something about Flanagan’s perspective and method. In many ways, this is more interesting and important than his specific arguments about anger.

Flanagan wants to explore different possible moral worlds. He believes that we each get raised and encultured in a particular set of moral traditions. These traditions tell us how we ought to feel and behave. Over time, these feelings and behaviours become solidified. We learn to see them as natural, perhaps even necessary. We refuse to accept that other communities exist with different, but equally valid, moral traditions. Flanagan’s goal is to get us to ‘see’ these other moral possibilities and take them seriously.

Flanagan tries to tread a fine line between moral objectivism and moral relativism when staking out this view. As I read him, he is committed to some form of objectivism. Thus he thinks there are some moral ‘worlds’ that are beyond the pale (e.g. the moral world of fascist Germany). Nevertheless, he thinks that the space of moral possibility is much wider than we typically believe. We shouldn’t dismiss all alternative moral traditions off the bat. We should reflect upon them and see if there are any good reasons (consistent with at least some of our existing beliefs) to shift over to those alternative moral traditions. This means adopting a form of super-wide reflective equilibrium: a method that looks to achieve balanced, reasonable judgments across different moral traditions and not just within one.

The discussion of anger is just a case study in the application of this method. Nevertheless, it is a case study that Flanagan takes very seriously indeed, dedicating three and a half chapters of his book to its analysis. Being a Westerner, Flanagan starts within the Western moral tradition that endorses righteous anger. He argues that this tradition consists of angry feelings and behaviours that are endorsed and perpetuated by a superstructure of anger-related norms and scripts. In other words, individuals in the Western tradition experience feelings of anger (emotional hotness, rage, indignation, impulsiveness) and behave in angry ways (criticising, shaming, punishing, lashing out, violent rebuke). Some, but not all, of these feelings and behaviours are then reinforced and protected by norms (i.e. permissions and recommendations about when one ought to feel and behave angrily) and scripts (i.e. sets of patterned angry behaviours that are deemed appropriate in certain circumstances). These feelings, behaviours, norms and scripts are, in turn, supported by a deeper set of metaphysical and moral beliefs about individualism, inter-personal relationships and justice. This is what supports the view that at least one form of anger — righteous anger — is a good thing.

As someone raised in this Western tradition, Flanagan believed that it was necessary and correct for many years. He knew that other traditions saw things differently, but he couldn’t see those as viable options. If someone wrongs you, of course you should feel angry and look for retribution. How else are you supposed to behave? A couple of visits to post-apartheid South Africa helped him to reconsider. I’ll let him speak for himself on this issue:

Both times I visited…I found myself feeling in awe that Nelson Mandela and his comrades had found it in themselves not to kill all the white folk…It amazed me that apartheid ended, that it could have ended, without an even worse bloodbath than had already occurred, and that South Africa found its way to enter an era of “truth and reconciliation”…The best explanation was that I was not raised to see how ending a practice like apartheid was psychologically, morally, or practically possible without a bloodbath. I didn’t see that this was a variety of moral possibility…I was raised in a world where every tale of the victories of the forces of good over the forces of evil involved righteous fury, death and destruction. 
(Flanagan 2017, 159)
This led him to look more closely at the Buddhist/Stoic view of anger.

The Buddhist/Stoic view of anger is very different from the Western one.* Both traditions think that anger is something that ought to be eliminated from human life (as much as possible). The Buddhist view is deeply metaphysical in nature. Life involves suffering, according to the Buddhist. This suffering stems from mistaken beliefs about the nature of reality and the emotional reactions that arise from these beliefs. Anger arises from egoism: a belief that individuals are wronged by the actions of others. Egoism is mistaken. There is no self: the idea of a single conscious self is an illusion that can be revealed through meditative practice. Similarly, our belief that the world is divided up into concrete categories and individuals is also mistaken. The world is a single, interconnected whole. When we appreciate this, we can see how destructive anger can really be. Each instance of anger has ripple effects across the whole. It doesn’t just affect us or a handful of others. It affects everyone. Persisting with it prolongs our suffering. (I’m greatly simplifying a long discussion in Flanagan’s book)

The Stoic view is more pragmatic. The classic Stoic text on anger comes from Seneca. He argues that anger emerges as a response to injury and is manifested by the desire to cause injury in kind. There are three problems with this. First, anger tends to overreach and overreact. This is something you have probably experienced yourself: when you are angry you tend to lash out in a wild manner. You are rarely measured or proportionate. You need to ‘cool down’ to do that. Second, Seneca argues that anger is practically useless. Anger leads to the breakdown of relations and the severing of bonds of trust. The perpetual cycles of anger prevent us from moving forward with our lives and getting what we want. Third, Seneca argues that anger is no spur to virtue. It tends to wither the virtuous response and block us from true happiness. It is only the non-virtuous person who takes pleasure in causing pain and suffering to others.

Flanagan sees something attractive in the Buddhist/Stoic view. A world not prey to the dark side of anger sounds like a good thing. He thinks we should consider shifting from our current embrace of righteous anger to this alternative. But there are four major objections to this suggestion. Let’s address each of them in turn.

2. The Impossibility Objection
The first objection is that the Buddhist/Stoic view asks the impossible of us:

Impossibility Objection: Anger is hard-wired into the human mind/body. It is a psychobiological necessity. We cannot eliminate it without fundamentally changing human nature (which is something we cannot, yet, do).

This is a common view. Flanagan quotes several philosophers who have endorsed a version of it. Perhaps the most well-known is Peter Strawson who wrote a famous article back in the 1960s about the ‘reactive attitudes’ (anger, resentment, indignation etc) and the central role they play in human moral life. His view has been influential in both philosophy and psychology. Followers of his view tend to see anger as an instinctual given: as part of the fundament of humanity.

Is this really the case? Flanagan spends a long time answering this question (taking up an entire chapter). But he only really makes three key points. His first is that we need to critically scrutinise what it means to say that anger is a ‘psychobiological necessity’. Clearly, there are some things that are hard-wired into (most) humans from birth. Flanagan gives the example of crying. A newborn baby will naturally — without any instruction or learning — cry. They won’t, however, get angry. This is an emotional and behavioural trait that emerges later in childhood. This means that if anger is a psychobiological necessity it is one that emerges in the course of childhood development and not something that is there from the start. Furthermore, when it does first emerge it is not in its sophisticated adult form, with the associated norms and scripts. It is more like a raw emotion that gets expressed in various, not always consistent, ways. This ‘developmental distance’ between birth and the emergence of anger should give us some pause. How do we know that something is a psychobiological necessity, and not just a strongly entrenched cultural norm, if it emerges in the course of childhood development? Flanagan argues that we have been historically too quick to assume that cultural norms are psychobiological necessities.

The second point Flanagan makes is that there are some cultures where anger, if it can be said to exist at all, gets expressed in very different ways from what we see in the West. There is, indeed, a long-standing debate about whether you can meaningfully compare emotions across different cultures, but even if we accept that you can, we must also accept that the shared emotions can be quite minimal. Flanagan gives the example of Catherine Lutz’s work on the emotional repertoire of the Ifaluk people from the South Pacific. Lutz argues that the Ifaluk have a very different emotional repertoire from what you would see in North America. Their equivalent of justifiable anger — an emotion called song — is both triggered by different moral transgressions (much more minor than what would provoke an American) and results in different behaviours (the refusal to eat being one way of expressing anger). Similarly, Lutz argues that the Ifaluk don’t have an equivalent to the Western emotion of love; instead they have fago, which combines what we might call love, compassion and sadness into a single emotional response. Cross-cultural work of this sort suggests that there is more ‘cultural plasticity’ to our reactive attitudes than we might think. Thus, even if there is some basic reactive response like anger, there is room to play around with the behavioural norms and scripts associated with that response.

This brings us to Flanagan’s third key point which is that this plasticity opens up some space in which the moral reformer can play around. We can ask the question whether our current practices and beliefs around anger are morally optimal. Maybe they are not. Maybe they were once adaptive but we now have reason to think they are less so. Flanagan makes an analogy with vegetarianism to underscore this point. He argues that the desire to eat meat may be ‘programmed’ into us (to some extent) because it was adaptive to eat meat in the past. But we have since discovered reasons to think that eating meat is not morally optimal. Thus, if we can survive without eating meat — and many people do — there may be reason to shift our moral beliefs and practices to vegetarianism. Something similar could be true for anger and the shift to the Buddhist/Stoic view. All of this leads Flanagan to conclude that:

Even if anger is original and natural in some forms, those forms are inchoate until a moral ecology speaks, forms and authorizes them. 
(Flanagan 2017, 199)

The claim then is that we should not authorize righteous anger. The persuasiveness of this, of course, depends on whether righteous anger is morally optimal or not. That’s where the next three objections come in.

3. The Attachment Objection
The second objection is that the Buddhist/Stoic view asks us to forgo the goods of attachment:

The Attachment Objection: A flourishing human life will consist of relationships involving deep attachments to others. Deep attachments to others necessitate some capacity for anger. Therefore, in order to access the good of attachment we need to allow for anger.

It is often said that love and anger go together. How many times have you felt angry at someone you love? Surprisingly often, I suspect. This might seem paradoxical but it is not. When you are attached to another person, you care deeply about them. You want them to do well and act well. If they do, you will feel the positive emotions of respect, admiration and love. Conversely, you don’t want them to step out of line and do wrong. If they do, you will feel the negative emotions of anger, resentment and indignation. The claim underlying this second objection is that you cannot break the axiological link between the positive and negative emotions. You cannot have the goods of attachment without also being open to negative emotions such as anger. This is healthy, normal and desirable. If you were completely detached from others — if you viewed their actions with equipoise — you would be inhuman, alien.

I covered a variant of this objection previously when looking at the ethics of grief. To briefly recap what I said there, one common argument about grief is that experiencing it is a good thing because it means that the person who died meant something to you. If you felt nothing after their death, that would be an indictment of the relationship you had with them. Although this might be true, there are problems when it comes to the calibration of grief. Sometimes grief is overwhelming. It dominates your conscious life. You cannot move beyond it. In these cases the grief, though perhaps initially indicative of a positive relationship with the deceased, becomes destructive. This is one reason why Buddhist and Stoic philosophers also recommend limiting and extirpating grief from our lives. This doesn’t mean completely forgoing our attachments to others. It just means moderating those attachments and ensuring they don’t become destructive.

Flanagan thinks we should adopt a similar strategy when it comes to anger and attachment. We should recognise that attachment to others comes with a whole suite of emotions (respect, love, admiration, sorrow, grief, anger, indignation, rage). It is not at all obvious that each of these emotions is essential to attachment, i.e. that we cannot feel attached without one or more of them. Indeed, it already seems to be the case that some people can live deeply attached lives without experiencing one or more of these emotions. If this is true, and if some of the emotions commonly associated with attachment are destructive, then perhaps we should look to extirpate them from our lives.

Flanagan bolsters this by arguing that, of all the emotions and passions associated with attachment, anger is uniquely troubling. It is not just that anger tends to be miscalibrated and prone to overreach (as Seneca argued) but that there is something inherently destructive about it:

But anger is a response that marks injury and seeks to do harm. It is vengeful and spiteful. It does not seek to heal like forgiveness and sorrow. Nor does it encourage and compliment goodness as gratitude does. It is ugly and harmful, and in the business of passing pain. 
(Flanagan 2017, 203)

At its extremes, anger can sever the bonds of attachment and destroy once positive relationships. Flanagan’s suggestion then is that we redirect our emotional energies away from anger and towards sorrow, gratitude and forgiveness. These emotions are still associated with attachment and thus allow us to access the goods of attachment, but enable us to do so without the destructive consequences of anger. So when someone transgresses or wrongs us we should feel sorrow for their transgression, gratitude for the good they have done, and seek to forgive or move on.

4. The Injustice and Catharsis Objections
This idea that sorrow, gratitude and forgiveness should be our go-to emotions in the event of a moral transgression will be unsettling to anyone raised to think that anger and retribution are the appropriate responses to wrongdoing. If someone wrongs us, surely we should not roll over and forgive; surely we should meet fire with fire? Doing so, the thought goes, is essential to the process of identifying and responding to injustice.
This is something that the third and fourth objections to the Buddhist/Stoic view try to get at. We can treat these objections as a pair since they are closely related:

The Injustice Objection: Anger is necessary, socially, if we are to properly identify and respond to injustice/moral wrongdoing.

The Catharsis Objection: Anger is necessary, personally, if we are to heal and move on from injustice/wrongdoing.

It is these kinds of objections that seem to motivate feminist and minority critics of Buddhist/Stoic passivity. I think this is apparent in the previously-mentioned work of Amia Srinivasan and Sally Haslanger. Their claim appears to be that women (and other minorities), as victims of oppression, need to embrace their anger if they are to address the conditions of their oppression. Flanagan cites other examples of this in his book, focusing in particular on work done on the appropriate response to sexual violence.

Flanagan is not as dismissive of these two objections as he is of the others. He recognises the importance of responding to injustice and accepts that some anger (minimal though it may be) might be necessary for psychological healing. Nevertheless, he thinks there are good reasons to think that anger is less important than proponents of these critiques make out. He makes three points in response to them.

First, he reemphasises that embracing the Buddhist/Stoic view does not mean giving up on all passions or emotions. It means accentuating and encouraging useful emotions and discouraging and extirpating destructive ones. This is done not by denying feelings but by moderating the beliefs, norms and scripts associated with them. This is important because it means that embracing the Buddhist/Stoic view does not entail ignoring all instances of injustice and becoming a pushover. It just means responding to injustice in a different way. To illustrate the point, Flanagan discusses a thought experiment (first proposed by Martha Nussbaum and based on the life of Elie Wiesel) involving a soldier liberating a Nazi death camp. In Nussbaum’s original formulation the soldier experiences profound rage and anger at what has happened to the people in the death camp. Nussbaum argues that this is the appropriate and desirable response to the injustices that occurred. Flanagan replies by asking us to imagine that instead of experiencing rage and anger the soldier experiences profound sorrow and compassion for the victims of the Nazis. Would this be any less appropriate and desirable a response to the injustice? Flanagan argues that it would not.

Second, he argues that anger is clearly not necessary in order to recognise and respond to injustice. To illustrate this he turns to his favoured example of the truth and reconciliation movement in post-apartheid South Africa. The leaders of this movement did not deny that people felt angry at what happened, but they did work hard to ensure that anger did not play a “pivotal or sustaining role” in seeking truth and reconciliation. They saw that anger could be destructive and that there was a need to ‘let go’ of anger if the society was to heal and move forward.

Third, and specifically in response to the catharsis objection, Flanagan argues that expressing an emotion such as anger often has a psychologically destructive effect, not a healing effect. The intuition underlying the catharsis objection is that anger is something that builds up inside us and needs to be released. Once it is released we return to a more normal, less angry state. The problem is that whenever this has been tested in practice, the opposite result is usually found to occur. In other words, instead of releasing and reducing anger, the expression of anger often just begets more anger (for a review of the main studies done to date, see here). This suggests that if we want to avoid destructive cycles of anger, we should avoid too much catharsis.

5. Conclusion
To sum up, Flanagan argues that we should consider shifting our moral equilibrium. Instead of viewing righteous anger as morally necessary and occasionally positive, we should see it as potentially destructive and counter-productive. We should shift to a more Buddhist/Stoic approach to anger.

To repeat what I said above, I think Flanagan’s argument about anger is not just interesting in and of itself, but also interesting from a methodological perspective. Trying to achieve super-wide equilibrium between different moral traditions can open up the moral possibility space. Doing this allows us to imagine new moral realities.

* Yes, of course, Stoicism is a Western tradition and so it is wrong to suppose that there is a single, unchallenged Western view of anger. Flanagan focuses on what he takes to be the dominant view within the Western liberal tradition.