Thursday, October 10, 2019

Escaping Skinner's Box: AI and the New Era of Techno-Superstition




[The following is the text of a talk I delivered at the World Summit AI on the 10th October 2019. The talk is essentially a nugget taken from my new book Automation and Utopia. It's not an excerpt per se, but does look at one of the key arguments I make in the book]

The science fiction author Arthur C. Clarke once formulated three “laws” for thinking about the future. The third law states that “any sufficiently advanced technology is indistinguishable from magic”. The idea, I take it, is that if someone from the Paleolithic was transported to the modern world, they would be amazed by what we have achieved. Supercomputers in our pockets; machines to fly us from one side of the planet to another in less than a day; vaccines and antibiotics to cure diseases that used to kill most people in childhood. To them, these would be truly magical times.

It’s ironic then that many people alive today don’t see it that way. They see a world of materialism and reductionism. They think we have too much knowledge and control — that through technology and science we have made the world a less magical place. Well, I am here to reassure these people. One of the things AI will do is re-enchant the world and kickstart a new era of techno-superstition. If not for everyone, then at least for most people who have to work with AI on a daily basis. The catch, however, is that this is not necessarily a good thing. In fact, it is something we should worry about.

Let me explain by way of an analogy. In the late 1940s, the behaviorist psychologist BF Skinner — famous for his experiments on animal learning — got a bunch of pigeons and put them into separate boxes. Now, if you know anything about Skinner you’ll know he had a penchant for this kind of thing. He seems to have spent his adult life torturing pigeons in boxes. Each box had a window through which a food reward would be presented to the bird. Inside the box were different switches that the pigeons could press with their beaks. Ordinarily, Skinner would set up experiments like this in such a way that pressing a particular sequence of switches would trigger the release of the food. But for this particular experiment he decided to do something different. He decided to present the food at random intervals, completely unrelated to the pressing of the switches. He wanted to see what the pigeons would do as a result.

The findings were remarkable. Instead of sitting idly by and waiting patiently for their food to arrive, the pigeons took matters into their own hands. They flapped their wings repeatedly, they danced around in circles, they hopped on one foot, convinced that their actions had something to do with the presentation of the food reward. Skinner and his colleagues likened what the pigeons were doing to the ‘rain dances’ performed by various tribes around the world: they were engaging in superstitious behaviours to control an unpredictable and chaotic environment.
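For the curious, the logic of this non-contingent reward schedule is easy to simulate. Here is a minimal sketch in Python (the behaviours, probabilities and step counts are my own arbitrary illustrative choices, not Skinner's actual parameters):

```python
import random

def skinner_box(steps=10_000, reward_prob=0.05, seed=42):
    """Simulate a pigeon in a box where food arrives at random,
    entirely independent of the pigeon's behaviour."""
    rng = random.Random(seed)
    actions = ["peck", "flap", "turn", "wait"]
    followed_by_food = {a: 0 for a in actions}
    counts = {a: 0 for a in actions}
    for _ in range(steps):
        action = rng.choice(actions)      # the behaviour is arbitrary
        counts[action] += 1
        if rng.random() < reward_prob:    # the food ignores the action
            followed_by_food[action] += 1
    # The apparent 'success rate' of each behaviour, from the bird's side
    return {a: followed_by_food[a] / counts[a] for a in actions}

rates = skinner_box()
```

From the bird's side of the window, every behaviour "works" roughly 5% of the time, and that is all the correlation a superstition needs.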

It’s important that we think about this situation from the pigeon’s perspective. Inside the Skinner box, they find themselves in an unfamiliar world that is deeply opaque to them. Their usual foraging tactics and strategies don’t work. Things happen to them, food gets presented, but they don’t really understand why. They cannot cope with the uncertainty; their brains rush to fill the gap and create the illusion of control.

Now what I want to argue here is that modern workers, and indeed all of us, in an environment suffused with AI, can end up sharing the predicament of Skinner’s pigeons. We can end up working inside boxes, fed information and stimuli by artificial intelligence. And inside these boxes, stuff can happen to us, work can get done, but we are not quite sure if or how our actions make a difference. We end up resorting to odd superstitions and rituals to make sense of it all and give ourselves the illusion of control, and one of the things I worry about, in particular, is that a lot of the current drive for transparent or explainable AI will reinforce this phenomenon.



This might sound far-fetched, but it’s not. There has been a lot of talk in recent years about the ‘black box’ nature of many AI systems, such as the machine learning systems used to support risk assessments in bureaucratic, legal and financial settings. These systems all work in the same way. Data from human behaviour gets fed into them, and they then spit out risk scores and recommendations to human decision-makers. The exact rationale for those risk scores — i.e. the logic the systems use — is often hidden from view. Sometimes this is for reasons intrinsic to the coding of the algorithm; other times it is because the logic is deliberately concealed or people just lack the time, inclination or capacity to decode the system.

The metaphor of the black box, useful though it is, is misleading in one crucial respect: it assumes that the AI is inside the box and we are the ones trying to look in from the outside. But increasingly this is not the case. Increasingly, it is we who are trapped inside the box, being sent signals and nudges by the AI, and not entirely sure what is happening outside.



Consider the way credit-scoring algorithms work. Many times neither the decision-maker (the human in the loop) nor the person affected knows why they get the score they do. The systems are difficult to decode and often deliberately concealed to prevent gaming. Nevertheless, the impact of these systems on human behaviour is profound. The algorithm constructs a game in which humans have to act within the parameters set by the algorithm to get a good score. There are many websites dedicated to helping people reverse engineer these systems, often giving dubious advice about behaviours and rituals you must follow to improve your scores. If you follow this advice, it is not too much of a stretch to say that you end up like one of Skinner’s pigeons, flapping your wings to maintain some illusion of control.

Some of you might say that this is an overstatement. The opaque nature of AI is a well-known problem and there are now a variety of technical proposals out there for making it less opaque and more “explainable” [some of which have been discussed here today]. These technical proposals have been accompanied by increased legal safeguards that mandate greater transparency. But we have to ask ourselves a question: will these solutions really work? Will they help ordinary people to see outside the box and retain some meaningful control and understanding of what is happening to them?

A recent experiment by Ben Green and Yiling Chen from Harvard tried to answer these questions. It looked at how human decision-makers interact with risk assessment algorithms in criminal justice and finance (specifically, in making decisions about the pretrial release of defendants and the approval of loan applications). Green and Chen created their own risk assessment systems, based on some of the leading commercially available models. They then got a group of experimental subjects (recruited via Amazon’s Mechanical Turk) to use these algorithms to make decisions under a number of different conditions. I won’t go through all the conditions here, but I will describe the four most important. In the first condition, the experimental subjects were just given the raw score provided by the algorithm and asked to make a decision on foot of this; in the second, they were asked to give their own prediction initially and then update it after being given the algorithm’s prediction; in the third, they were given the algorithm’s score, along with an explanation of how that score was derived, and asked to make a choice; and in the fourth, they were given the opportunity to learn how accurate the algorithm was based on real-world results (did someone default on their loan or not; did they show up to their trial or not). The question was: how would the humans react to these different scenarios? Would giving them more information improve the accuracy, reliability and fairness of their decision-making?

The findings were dispiriting. Green and Chen found that using algorithms did improve the overall accuracy of decision-making across all conditions, but this was not because adding information and explanations enabled the humans to play a more meaningful role in the process. On the contrary, adding more information often made the human interaction with the algorithm worse. When given the opportunity to learn from the real-world outcomes, the humans became overconfident in their own judgments, more biased, and less accurate overall. When given explanations, they could maintain accuracy, but only to the extent that they deferred more to the algorithm. In short, the more transparent the system seemed to the workers, the more they either degraded its performance or limited their own agency.

It is important not to extrapolate too much from one study, but the findings here are consistent with what has been found in other cases of automation in the workplace: humans are often the weak link in the chain. They need to be kept in check. This suggests that if we want to reap the benefits of AI and automation, we may have to create an environment that is much like that of the Skinner box, one in which humans can flap their wings, convinced they are making a difference, but prevented from doing any real damage. This is the enchanted world of techno-superstition: a world in which we adopt odd rituals and habits (explainable AI; fair AI etc) to create an illusion of control.



Now, the original title of my talk promised five reasons for pessimism about AI in the workplace. But what we have here is one big reason that breaks down into five sub-reasons. Let me explain what I mean. The problem of techno-superstition stems from two related problems: (i) a lack of understanding/knowledge of how the world (in this case the AI system) works and (ii) the illusion of control over that system.

These two problems combine into a third problem: the erosion of the possibility of achievement. One reason why we work is so that we can achieve certain outcomes. But when we lack understanding and control it undermines our sense of achievement. We achieve things when we use our reason to overcome obstacles to problem-solving in the real world. Some people might argue that a human collaborating with an AI system to produce some change in the world is achieving something through the combination of their efforts. But this is only true if the human plays some significant role in the collaboration. If humans cannot meaningfully make a difference to the success of AI or accurately calibrate their behaviour to produce better outcomes in tandem with the AI, then the pathway to achievement is blocked. This seems to be what happens, even when we try to make the systems more transparent.

Related to this is the fourth problem: that in order to make AI systems work effectively with humans, the designers and manufacturers have to control human attention and behaviour in a way that undermines human autonomy. Humans cannot be given free rein inside the box. They have to be guided, nudged, manipulated and possibly even coerced, to do the right thing. Explanations have to be packaged in a way that prevents the humans from undermining the accuracy, reliability and fairness of the overall system. This, of course, is not unusual. Workplaces are always designed with a view to controlling and incentivising behaviour, but AI enables a rapidly updating and highly dynamic form of behavioural control. The traditional human forms of resistance to outside control cannot easily cope with this new reality.

This all then culminates in the fifth and final problem: the pervasive use of AI in the workplace (and society more generally) undermines human agency. Instead of being the active captains of our fates, we become the passive recipients of technological benefits. This is a tragedy because we have built so much of our civilisation and sense of self-worth on the celebration of agency. We are supposed to be agents of change, responsible to ourselves and to one another for what happens in the world around us. This is why we value the work we do and why we crave the illusion of control. What happens if agency can no longer be sustained?

As per usual, I have left the solutions to the very end — to the point in the talk where they cannot be fully fleshed out and where I cannot be reasonably criticised for failing to do so — but it seems to me that we face two fundamental choices when it comes to addressing techno-superstition: (i) we can tinker with what’s presented to us inside the box, i.e. we can add more bells and whistles to our algorithms, more levers and switches. These will give humans either genuine understanding and control over the systems or the mere illusion of understanding and control. The problem with the former is that it frequently involves tradeoffs or compromises to the system’s efficacy; the problem with the latter is that it involves greater insults to the agency of the humans working inside the box. But there is an alternative: (ii) we can stop flapping our wings and get out of the box altogether. Leave the machines to do what they are best at while we do something else. Increasingly, I have come to think we should do the latter; to do so would acknowledge the truly liberating power of AI. This is the argument I develop further in my book Automation and Utopia.

Thank you for your attention.



Tuesday, September 24, 2019

Automation and Utopia is Now Available!




[Amazon.com] [Amazon.co.uk] [Book Depository] [Harvard UP] [Indiebound] [Google Play]

“Armed with an astonishing breadth of knowledge, John Danaher engages with pressing public policy issues in order to lay out a fearless exposition of the radical opportunities that technology will soon enable. With the precision of analytical philosophy and accessible, confident prose, Automation and Utopia demonstrates yet again why Danaher is one of our most important pathfinders to a flourishing future.”
James Hughes, Institute for Ethics and Emerging Technologies

After 10 years, over 1000 blog posts, 50+ academic papers, and 60+ podcasts, I have finally published my first solo-authored book Automation and Utopia: Human Flourishing in a World Without Work (Harvard University Press 2019). I'm excited to finally share it with you all.

The book tries to present a rigorous case for techno-utopianism and a post-work future. I wrote it partly as a result of my own frustration with techno-futurist non-fiction. I like books that present provocative ideas about the future, but I often feel underwhelmed by the strength of the arguments they use to support these ideas. I don't know if you are like me, but if you are then you don't just want to be told what someone thinks about the future; you want to be shown why (and how) they think about the future and be able to critically assess their reasoning. If I got it right, then Automation and Utopia will allow you to do this. You may not agree with what I have to say in the end, but you should at least be able to figure out where I have gone wrong.

The book defends four propositions:


  • Proposition 1 - The automation of work is both possible and desirable: work is bad for most people most of the time, in ways that they don’t always appreciate. We should do what we can to hasten the obsolescence of humans in the arena of work.

  • Proposition 2 - The automation of life more generally poses a threat to human well-being, meaning, and flourishing: automating technologies undermine human achievement, distract us, manipulate us and make the world more opaque. We need to carefully manage our relationship with technology to limit those threats.

  • Proposition 3 - One way to mitigate this threat would be to build a Cyborg Utopia, but it’s not clear how practical or utopian this would really be: integrating ourselves with technology, so that we become cyborgs, might regress the march toward human obsolescence outside of work but will also carry practical and ethical risks that make it less desirable than it first appears.

  • Proposition 4 - Another way to mitigate this threat would be to build a Virtual Utopia: instead of integrating ourselves with machines in an effort to maintain our relevance in the “real” world, we could retreat to “virtual” worlds that are created and sustained by the technological infrastructure that we have built. At first glance, this seems tantamount to giving up, but there are compelling philosophical and practical reasons for favouring this approach.


If you have ever enjoyed anything I've written, and if you have any interest in technology, the future of work, human flourishing, utopianism, virtual reality, cyborgs, transhumanism, autonomy, anti-work philosophy, economics, philosophy, techno-optimism and, indeed, techno-pessimism, please consider getting a copy.

If you want to whet your appetite for the contents of the book, please check out my earlier blog series on technological unemployment and the value of work. Below is a short trailer with additional context and information.






Saturday, September 21, 2019

Should we create artificial moral agents? A Critical Analysis




I recently encountered an interesting argument. It was given in the midst of one of those never-ending Twitter debates about the ethics of AI and robotics. I won’t say who made the argument (to be honest, I can’t remember) but the gist of it was that we shouldn’t create robots with ethical decision-making capacity. I found this intriguing because, on the face of it, it sounds like a near-impossible demand. My intuitive reaction was that any robot embedded in a social context, with a minimal degree of autonomous agency, would have to have some ethical decision-making capacity.

Twitter is not the best forum for debating these ideas. Neither the original argument nor my intuitive reaction to it was worked out in any great detail. But it got me thinking. I knew there was a growing literature on both the possibility and desirability of creating ethical robots (or ‘artificial moral agents’ - AMAs - as some people call them). So I decided to read around a bit. My reading eventually led me to an article by Amanda Sharkey called ‘Can we program or train robots to be good?’, which provided the inspiration for the remainder of what you are about to read.

Let me start by saying that this is a good article. In it, Sharkey presents an informative and detailed review of the existing literature on AMAs. If you want to get up to speed on the current thinking, I highly recommend it. But it doesn’t end there. Sharkey also defends her own views about the possibility and desirability of creating an AMA. In short, she argues that it is probably not possible and definitely not desirable. One of the chief virtues of Sharkey’s argumentative approach is that it focuses on existing work in robotics and not so much on speculative future technologies.

In what follows I want to critically analyse Sharkey’s main claims. I do so because, although I agree with some of what she has to say, I find that I am still fond of my intuitive reaction to the Twitter argument. As an exercise in self-education, I want to try to explain why.


1. What is an ethical robot?
A lot of the dispute about the possibility and desirability of creating an ethical robot hinges on what we think such a robot would look like (in the metaphorical sense of ‘look’). A robot can be defined, loosely, as any embodied artificial agent. This means that a robot is an artifact with some degree of actuating power (e.g. a mechanical arm) that it can use to change its environment in order to achieve a goal state. In doing this, it has some capacity to categorise and respond to environmental stimuli.

On my understanding, all robots also have some degree of autonomous decision-making capacity. What I mean is that they do not require direct human supervision and control in order to exercise all of their actuating power. In other words, they are not just remote-controlled devices. They have some internal capacity to selectively sort environmental stimuli in order to determine whether or not a decision needs to be made. Nevertheless, the degree of autonomy can be quite minimal. Some robots can sort environmental stimuli into many different categories and can make many different decisions as a result; others can only sort stimuli into one or two categories and make only one type of decision.

What would make a robot, so defined, an ethical decision-maker? Sharkey reviews some of the work that has been done to date on this question, including in particular the work of Moor (2007), Wallach and Allen (2009) and Malle (2016). I think there is something to be learned from each of these authors, but since I don’t agree entirely with any of them, what I offer here is my own modification of their frameworks.

First, let me offer a minimal definition of what an ethical robot is: it is a robot that is capable of categorising and responding to ethically relevant variables in its environment with a view towards making decisions that humans would classify as ‘good’, ‘bad’, ‘permissible’, ‘forbidden’ etc. Second, following James Moor, let me draw a distinction between two kinds of ethical agency that such a robot could exhibit:

Implicit Ethical Agency: The agent identifies and acts upon ethically relevant variables (principles, norms, values etc) without explicitly representing, using or reporting on those variables, or without explicitly using ethical language to explain and justify its actions (Moor’s definition stipulates that an implicit ethical agent has ethical considerations designed into its decision-making mechanisms).

Explicit Ethical Agency: The agent identifies and acts upon ethically relevant variables (principles, norms, values etc) and does explicitly represent, use and report on those variables, and may use ethical language to explain and justify its actions.

You can think of these two forms of ethical agency as defining a spectrum along which we can classify different ethical agents. At one extreme we have a simple implicit ethical agent that acts upon ethically relevant considerations but never explicitly relies upon those considerations in how it models, reports or justifies its choices. At the other extreme you have a sophisticated explicit ethical agent, who knows all about the different ethical variables affecting their choices and explicitly uses them to model, report and justify its choices.

Degrees of autonomy are also relevant to how we categorise ethical agents. The more autonomous an ethical agent is, the more ethically relevant variables it will be able to recognise and act upon. So, for example, a simple implicit ethical agent, with low degrees of autonomy, may be able to act upon one or two ethically relevant considerations. For example, it may be able to sort stimuli into two categories — ‘harmful’ and ‘not harmful’ — and make one of two decisions in response — ‘approach’ or ‘avoid’. An implicit ethical agent with high degrees of autonomy would be able to sort stimuli into many more categories: ‘painful’, ‘pleasurable’, ‘joyous’, ‘healthy’, ‘laughter-inducing’ and so on; and would also be able to make many more decisions.

The difference between an explicit ethical agent with low degrees of autonomy and one with high degrees of autonomy would be something similar. The crucial distinction between an implicit ethical agent and an explicit ethical agent is that the latter would explicitly rely upon the ethical concepts and principles to categorise, classify and sort between stimuli and decisions. The former would not and would only appear to us (or be intended by us) to be reacting to them. So, for example, an implicit ethical agent may appear to us (and be designed by us) to sort stimuli into categories like ‘harmful’ and ‘not harmful’, but it may do this by reacting to how hot or cold a stimulus is.

This probably seems very abstract so let’s make it more concrete. An example of a simple implicit ethical agent (used by Moor in his discussion of ethical agency) would be an ATM. An ATM has a very minimal degree of autonomy. It can sort and categorise one kind of environmental stimulus (buttons pressed on a numerical key pad) and make a handful of decisions in response to these categorisations: give the user the option to withdraw money or see their account balance (etc); dispense money/do not dispense money. In doing so, it displays some implicit ethical agency insofar as its choices imply judgments about property ownership and the distribution of money. An example of a sophisticated explicit ethical agent would be an adult human being. A morally normal adult can categorise environmental stimuli according to many different ethical principles and theories and make decisions accordingly.
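To see how little machinery a simple implicit ethical agent needs, here is a toy sketch of the thermal-proxy agent described above (entirely hypothetical; the threshold value and the class name are my own illustrative choices, not anything from the robotics literature):

```python
class ImplicitEthicalAgent:
    """A minimal 'implicit' ethical agent: to an observer it appears to
    sort stimuli into 'harmful' and 'not harmful', but internally it only
    reacts to a physical proxy (temperature)."""

    HARM_THRESHOLD_C = 50  # hypothetical: hot surfaces count as harmful

    def decide(self, temperature_c):
        # The agent never represents the concept 'harm'; it just
        # compares a number to a threshold and picks one of two actions.
        return "avoid" if temperature_c > self.HARM_THRESHOLD_C else "approach"

agent = ImplicitEthicalAgent()
agent.decide(80)  # "avoid": looks like harm-avoidance to an observer
agent.decide(20)  # "approach"
```

Nothing in the agent's internals mentions ethics; the ethical reading is entirely in the eye of the observer (or the intentions of the designer), which is exactly what makes its agency implicit rather than explicit.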

In short, then, what we have here is a minimal definition of ethical agency and a framework for classifying different degrees of ethical agency along two axes: the implicit-explicit axis and the autonomy axis. The figure below illustrates the idea.




You might find this distinction between implicit and explicit ethical agency odd. You might say: “surely the only meaningful kind of ethical agency is explicit? That’s what we look for in morally healthy adults. Classifying implicit ethical agents as ethical agents is both unnecessary and over-inclusive.” But I think that’s wrong. It is worth bearing in mind that a lot of the ethical decisions made by adult humans are examples of implicit ethical agency. Most of the time, we do not explicitly represent and act upon ethical principles and values. Indeed, if moral psychologists like Jonathan Haidt are correct, the explicit ethical agency that we prize so highly is, in fact, an epiphenomenon: a post-hoc rationalisation of our implicit ethical agency. I’ll return to this idea later on.

Another issue that is worth addressing before moving on is the relationship between ethical agency and moral/legal responsibility. People often assume that agency goes hand-in-hand with responsibility. Indeed, according to some philosophical accounts, a moral agent must, by necessity, be a morally responsible agent. But it should be clear from the foregoing that ethical agency does not necessarily entail responsibility. Simple implicit ethical agency, for instance, clearly does not entail responsibility. A simple implicit ethical agent would not have the capacity for volition and understanding, both of which we expect of a responsible agent. Sophisticated explicit ethical agents are another matter. They probably are responsible agents, though they may have excuses for particular actions.

This distinction between agency and responsibility is important. It turns out that much of the opposition to creating an ethical robot stems from the perceived link between agency and responsibility. If you don't accept that link, much of the opposition to the idea of creating an artificial moral agent ebbs away.


2. Methods for Creating an Ethical Robot
Now that we are a bit clearer about what an ethical robot might look like, we can turn to the question of how to create them. As should be obvious, most of the action here has to do with how we might go about creating the sophisticated explicit ethical agents. After all, creating simple implicit ethical agents is trivial: it just requires creating a robot with some capacity to sort and respond to stimuli along lines that we would call ethical. Sophisticated explicit ethical agents pose a more formidable engineering challenge.

Wallach and Allen (2009) argue that there are two ways of going about this:

Top-down method: You explicitly use an ethical theory to program and design the robot, e.g. hard-coding into the robot an ethical principle such as ‘do no harm’.

Bottom-up method: You create an environment in which the robot can explore different courses of action and be praised or criticised (rewarded/punished) for its choices in accordance with ethical theories. In this way, the robot might be expected to develop its own ethical sensitivity (like a child that acquires a moral sense over the course of its development).
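The contrast between the two methods can be caricatured in a few lines of code. This is a deliberately toy sketch: the actions, rewards and learning rule are illustrative assumptions of my own, not anyone's actual implementation.

```python
import random

# Top-down: an ethical rule ('do no harm') is hard-coded by the designer.
def top_down_agent(action_outcomes):
    """Given predicted harm per action, pick any harmless action;
    failing that, pick the least harmful one."""
    safe = [a for a, harm in action_outcomes.items() if harm == 0]
    return safe[0] if safe else min(action_outcomes, key=action_outcomes.get)

# Bottom-up: the agent starts with no rule and learns from praise/criticism.
def bottom_up_agent(trials=5000, seed=0):
    rng = random.Random(seed)
    value = {"share": 0.0, "steal": 0.0}  # learned action values
    for _ in range(trials):
        action = rng.choice(list(value))
        reward = 1 if action == "share" else -1  # feedback from a 'teacher'
        value[action] += 0.1 * (reward - value[action])
    return max(value, key=value.get)  # the preference it ends up with

top_down_agent({"wait": 0, "push": 3})  # "wait": the hard-coded rule fires
bottom_up_agent()                        # "share": learned from feedback
```

The top-down agent's ethics is fully specified in advance; the bottom-up agent's ethics is whatever its reward history shaped it into, which is why the two approaches raise such different worries about verification and trust.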

As Sharkey notes, much of the work done to date on creating a sophisticated AMA has tended to be theoretical or conceptual in nature. Still, there are some intriguing practical demonstrations of the idea. Three stood out from her discussion:

Winfield et al 2014: Created a robot that was programmed to stop other robots (designated as proxy humans in the experiment) from entering a ‘hole’/dangerous area. The robot could assess the consequences of trajectories through the experimental environment in terms of the degree of risk/harm they posed to the ‘humans’ and would then have to make a choice as to what to do to mitigate the risk (including blocking the ‘humans’ or, even, sacrificing itself). Sometimes the robot was placed in a dilemma situation where it had to choose between one of two ‘humans’ to save. Winfield et al saw this as a minimal attempt to implement Asimov’s first law of robotics. The method here is clearly top-down.
Anderson and Anderson 2007: Created a medical ethics robot that could give advice to healthcare workers about what to do when a patient had made a treatment decision. Should the worker accept the decision or try to get the patient to change their mind? Using the ‘principlism’ theory in medical ethics, the robot was trained on a set of case studies (classified as involving ethically correct decisions) and then used inductive logic programming to understand how the ethical principles work in these cases. It could then abstract new principles from these cases. The Andersons claimed that their robot induced a new ethical principle from this process. Initially, it might sound like this involves the bottom-up method, but Sharkey classifies it as top-down because a specific ethical theory (namely: principlism) was used when programming and training the robot. The Andersons subsequently ran similar experiments along the same lines.
Riedl and Harrison 2015: Reported an initial attempt to use machine learning to train an AI to align its values with those of humans by learning from stories. The idea was that the stories contained information about human moral norms and the AI could learn human morality from them. A model of ‘legal’ (or permissible) plot transitions was developed from the stories, and the AI was then rewarded or punished in an experimental environment, depending on whether it made a legal transition or not. This was a preliminary study only, but it is an example of the bottom-up method at work.

I am sure there are other studies out there that would be worth considering. If anyone knows of good/important ones please let me know in the comments. But assuming these studies are broadly representative of the kind of work that has been done to date, one thing becomes immediately clear: we are a long way from creating a sophisticated explicit ethical agent. Will we ever get there?


3. Is it possible to create an ethical robot?
One thing Sharkey says about this — which I tend to agree with — is that much of the debate about the possibility of creating a sophisticated explicit ethical robot seems to come down to different groups espousing different faith positions. Since we haven’t created one yet, we are forced to speculate about future possibilities and a lot of that speculation is not easy to assess. Some people feel strongly that it is possible to create such a robot; others feel strongly that it is not. These arguments are influenced, in turn, by how desirable this possibility is seen to be.

With this caveat in mind, Sharkey still offers her own argument for thinking that it is not possible to create an explicit ethical agent. I’ll quote some of the key passages from her presentation of this argument in full. After that, I’ll try to make sense of them. She starts with this:

One reason for being skeptical about the likelihood that non-living, non-biological machines could develop a sense of morality at some point in the future is their lack of a biological substrate. A case can be made for the grounding of morality in biology. 
(Sharkey 2017, p 8)

She then discusses the work of Patricia Churchland, which argues that the social emotions are key to human morality and that these emotions have a clear evolutionary history and bio-mechanical underpinning. This leads Sharkey to argue that:

Current robots, lacking living bodies, cannot feel pain, or even care about themselves, let alone extend that concern to others. How can they empathise with a human’s pain and distress if they are unable to experience either emotion? Similarly, without the ability to experience guilt or regret, how could they reflect on the effects of their actions, modify their behavior, and build their own moral framework? 
(Sharkey 2017, p 8)

She continues by discussing the work of other authors on the important link between the emotions, morality and biology.

So what argument is being made? At first, it might look like Sharkey is arguing that moral agency depends on biology, but I think that is a bit of a red herring. What she is arguing is that moral agency depends on emotions (particularly second personal emotions such as empathy, sympathy, shame, regret, anger, resentment etc). She then adds to this the assumption that you cannot have emotions without having a biological substrate. This suggests that Sharkey is making something like the following argument:


  • (1) You cannot have explicit moral agency without having second personal emotions.

  • (2) You cannot have second personal emotions without being constituted by a living biological substrate.

  • (3) Robots cannot be constituted by a living biological substrate.

  • (4) Therefore, robots cannot have explicit moral agency.



Assuming this is a fair reconstruction of the reasoning, I have some questions about it. First, taking premises (2) and (3) as a pair, I would query whether having a biological substrate really is essential for having second personal emotions. What is the necessary connection between biology and emotionality? This smacks of biological mysterianism or dualism to me, almost a throwback to the time when biologists thought that living creatures possessed some élan vital that separated them from the inanimate world. Modern biology and biochemistry casts all that into doubt. Living creatures are — admittedly extremely complicated — evolved biochemical machines. There is no essential and unbridgeable chasm between the living and the inanimate. The lines are fuzzy and gradual. Current robots may be much less sophisticated than biological machines, but they are still machines. It then just becomes a question of which aspects of biological form underlie second personal emotions, and which can be replicated in synthetic form. It is not obvious to me that robots could never bridge the gap.

Of course, all this assumes that you accept a scientific, materialist worldview. If you think there is more to humans than matter in motion, and that this ‘something more’ is what supports our rich emotional repertoire, then you might be able to argue that robots will never share that emotional repertoire. But in that case, the appeal to biology and the importance of a biological substrate will make no sense, and you will have to defend the merits of the non-materialistic view of humans more generally.

In any event, as I said previously, I think the discussion of biology is a red herring. What Sharkey really cares about is the suite of second personal emotions and the claim is that robots will never share those emotions. This is where premise (1) becomes important. There are two questions to ask about this premise: what do you need in order to have second personal emotions? And why is it that robots can never have this?

There are different theories of emotion out there. Some people would argue that in order to have emotions you have to have phenomenal consciousness. In other words, in order to be angry you have to feel angry; in order to be empathetic you have to feel what another person is feeling. There is ‘something it is like’ to have these emotions and until robots have this something, they cannot be said to be emotional. This seems to be the kind of argument Sharkey is making. Look back to the quoted passages above. She places a lot of emphasis on the capacity to feel the pain of another, to feel guilt and regret. This suggests that Sharkey’s argument against the possibility of a robotic moral agent really boils down to an argument against the possibility of phenomenal consciousness in robots. I cannot get into that debate in this article, but suffice to say there are plenty of people who argue that robots could be phenomenally conscious, and that the gap here is, once again, not as unbridgeable as is supposed. Indeed, there are people, such as Roman Yampolskiy, who argue that robots may already be minimally phenomenally conscious and that there are ways to test for this. Many people will resist this thought, but I find Yampolskiy’s work intriguing because it cuts through a lot of the irresolvable philosophical conundrums about consciousness and tries to provide clear, operational and testable understandings of it.

There is, also, another way of understanding the emotions. Instead of being essentially phenomenal experiences, they can be viewed as cognitive tools for appraising and evaluating what is happening in the world around the agent. When a gazelle sees a big, muscly predator stalking into its field of vision, its fear response is triggered. The fear is the brain’s way of telling the gazelle that the predator is a threat to its well-being and that it may need to run. In other words, the emotion of fear is a way of evaluating the stimulus that the gazelle sees and using this evaluation to guide behaviour. The same is true for all other emotions. They are just the brain’s way of assigning different weights and values to environmental stimuli and then filtering this forward into its decision-making processes. There is nothing in this account that necessitates feelings or experiences. The whole process could take place sub-consciously.
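On this appraisal view, an emotion is essentially a valuation function over stimuli that feeds into action selection. Here is a toy sketch in code; the features, weights and threshold are invented for illustration and make no claim about how any real brain or robot works:

```python
# Toy appraisal model: an "emotion" as a subconscious valuation of stimuli
# that biases action selection. All features and weights are invented.

def appraise_threat(stimulus):
    """Assign a threat value to a stimulus from a few crude features."""
    score = 0.0
    if stimulus.get("predator"):
        score += 0.8
    score += 0.4 * stimulus.get("proximity", 0.0)    # 0 (far) to 1 (close)
    score -= 0.3 * stimulus.get("familiarity", 0.0)  # habituation dampens fear
    return score

def choose_action(stimulus, flee_threshold=0.7):
    """The appraisal guides behaviour without any conscious 'feeling'."""
    return "flee" if appraise_threat(stimulus) > flee_threshold else "graze"

lion = {"predator": True, "proximity": 0.9, "familiarity": 0.0}
bush = {"predator": False, "proximity": 0.9, "familiarity": 1.0}

print(choose_action(lion))  # flee
print(choose_action(bush))  # graze
```

Nothing in this sketch ‘feels’ anything: the appraisal simply assigns a value to the stimulus and biases the choice of action, which is all the cognitive theory of emotions requires.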

If we accept this cognitive theory of emotions, it becomes much less obvious why robots cannot have emotions. Indeed, it seems to me that if robots are to be autonomous agents at all, then they will have to have some emotions: they have to have some way of assigning weights and values to environmental stimuli. This is essential if they are going to make decisions that help them achieve their goal states. I don’t see any reason why these evaluations could not fall into the categories we typically associate with second personal emotions. This doesn’t mean that robots will feel the same things we feel when we have those emotions. After all, we don’t know if other humans feel the same things we feel. But robots could in principle act as if they share our emotional world and, as I have argued before, that acting ‘as if’ is enough.

Before I move on, I want to emphasise that this argument is about what is possible with robots, not what is actually the case. I’m pretty confident that present day robots do not share our emotional world and so do not rise to the level of sophisticated, explicit moral agents. My view here parallels Yampolskiy’s view on phenomenal consciousness: present day robots probably have a minimal, limited form of cognitive emotionality and roboticists can build upon this foundation.


4. Should we create an ethical robot?
Even if it were possible, should we want to create robots with sophisticated ethical agency? Some people think we should. They argue that if we want robots to become more socially useful and integrated into our lives, then they will have to have improved moral agency. Human social life depends on moral agency and robots will not become integrated into human social life without it. Furthermore, there are some use cases — medical care, military, autonomous vehicles — where some form of ethical agency would seem to be a prerequisite for robots. In addition to this, people argue that we can refine and improve our understanding of morality by creating a robotic moral agent: the process will force us to clarify moral concepts and principles, and remove inconsistencies in our ethical thinking.

Sharkey is more doubtful. It is hard to decipher her exact argument, but she seems to make three key points. First, as a preliminary point, she agrees with other authors that it is dangerous to prematurely apply the language of ethical agency to robots because this tends to obscure human responsibility for the actions of robots:

Describing such machines as being moral, ethical, or human, risks increasing the tendency for humans to fail to acknowledge their ultimate responsibility for the actions of these artefacts…an important component to undertaking a responsible approach to the deployment of robots in sensitive areas is to avoid the careless application of words and terms used to describe human behaviour and decision-making. 
(Sharkey 2017, 9)

As I say, this is a preliminary point. It doesn’t really speak to the long-term desirability of creating robots with ethical agency, but it does suggest that it is dangerous to speak of this possibility prematurely, which is something that might be encouraged if we are trying to create such a robot. This highlights the point I made earlier about the link between concerns about ‘responsibility gaps’ and concerns about ethical agency in robots.

Sharkey then makes two more substantive arguments against the long-term desirability of robots with ethical agency. First, she argues that the scenarios in which we need competent ethical agents are ones in which “there is some ambiguity and a need for contextual understanding: situations in which judgment is required and there is not a single correct answer” (Sharkey 2017, 10). This implies that if we were to create robots with sophisticated ethical agency it would be with a view to deploying them in scenarios involving moral ambiguity. Second, she argues that we should not want robots to be dealing with these scenarios given that they currently lack the capacity to understand complex social situations and given that they are unlikely to acquire that capacity. She then continues by arguing that this rules robots out of a large number of social roles/tasks. The most obvious of these would be military robots or any other robots involved in making decisions about killing people, but it would also include other social-facing robots such as teaching robots, care robots and even bar-tending robots:

But how could a robot make appropriate decisions about when to praise a child, or when to restrict his or her activities, without a moral understanding? Similarly how could a robot provide good care for an older person without an understanding of their needs, and of the effects of its actions? Even a bar-tending robot might be placed in a situation in which decisions have to be made about who should or should not be served, and what is and is not acceptable behaviour. 
(Sharkey 2017, 11)

What can we make of this argument? Let me say three things by way of response.

First, if we accept Sharkey’s view then we have to accept that a lot of potential use cases for robots are off the table. In particular, we have to accept that most social robots — i.e. robots that are intended to be integrated into human social life — are ethically inappropriate. Sharkey claims that this is not the case. She claims that there would still be some ethically acceptable uses of robots in social settings. As an example, she cites an earlier paper of hers in which she argued that assistive robots for the elderly were okay, but care robots were not. But I think her argument is more extreme than she seems willing to accept. Most human social settings are suffused with elements of moral ambiguity. Even the use of an assistive robot — if it has some degree of autonomy — will have the potential to butt up against cases in which a capacity to navigate competing ethical demands might be essential. This is because human morality is replete with vague and sometimes contradictory principles. Consider her own example of the bar-tending robot. What she seems to be suggesting with this example is that you ought not to have a robot that just serves people as much alcohol as they like. Sometimes, to both protect themselves and others, people should not be served alcohol. But, of course, this is true for any kind of assistance a robot might provide to a human. People don’t always want what is morally best for themselves. Sometimes there will be a need to judge when it is appropriate to give assistance and when it is not. I cannot imagine an interaction with a human that would not, occasionally, have features like this. This implies that Sharkey’s injunction, if taken seriously, could be quite restrictive.

People may be willing to pay the price and accept those restrictions on the use of robots, but this then brings me to the second point. Sharkey’s argument hinges on the premise that we want moral agents to be sensitive to moral ambiguities and have the capacity to identify and weigh competing moral interests. The concern is that robots will be too simplistic in their moral judgments and lack the requisite moral sensitivity. But it could be that humans are too sensitive to the moral ambiguities of life and, as a consequence, too erratic and flexible in their moral judgments. For example, when making decisions about how to distribute social goods, there is a tendency to get bogged down in all the different moral variables and interests at play, and then struggle to balance those interests effectively when making decisions. When making choices about healthcare, for instance, which rules should we follow: should we give to the most needy? What defines those with the most need? What is a healthcare need in the first place? Should we force people to get insurance and refuse to treat those without? Should those who are responsible for their own ill-health be pushed down the order of priority when receiving treatment? If you take all these interests seriously, you can easily end up in a state of moral paralysis. Robots, with their greater simplicity and stricter rule-following behaviour, might be beneficial because they can cut through the moral noise. This, as I understand it, is one of the main arguments in favour of autonomous vehicles: they are faster at responding to some environmental stimuli but also stricter in how they follow certain rules of the road; this can make them safer, and less erratic, than human drivers. This remains to be seen, of course. We need a lot more testing of these vehicles before we can become reasonably confident of their greater safety, but it seems to me that there is a prima facie case that warrants this testing. I suspect this is true across many other possible use cases for robots too.

Third, and finally, I want to return to my original argument — the intuition that started this article — about the unavoidability of ethical robot agents. Even if we accept Sharkey’s view that we shouldn’t create sophisticated explicit ethical agents, it seems to me that if we are going to create robots at all, we will still have to create implicit ethical agents and hence confront many of the same design choices that would go into the design of an explicit ethical agent. The reasoning flows from what was said previously. Any interaction a robot has with a human is going to be suffused with moral considerations. There is no getting away from this: these considerations constitute the invisible framework of our social lives. If a robot is going to work autonomously within that framework, then it will have to have some capacity to identify and respond to (at least some of) those considerations. This may not mean that they explicitly identify and represent ethical principles in their decision-making, but they will need to do so implicitly.* This might sound odd but then recall the point I made previously: that according to some moral psychologists this is essentially how human moral agency functions: the explicit stuff comes after our emotional and subconscious mind has already made its moral choices. You could get around this by not creating robots with autonomous decision-making capacity. But I would argue that, in that case, you are not really creating a robot at all: you are creating a remote controlled tool.

* This is true even if the robot is programmed to act in an unethical way. In that case the implicit ethical agency contradicts or ignores moral considerations. This still requires some implicit capacity to exercise moral judgment with respect to the environmental stimuli.



Thursday, September 19, 2019

#64 - Munthe on the Precautionary Principle and Existential Risk


Christian Munthe

In this episode I talk to Christian Munthe. Christian is a Professor of Practical Philosophy at the University of Gothenburg, Sweden. He conducts research and expert consultation on ethics, value and policy issues arising in the intersection of health, science & technology, the environment and society. He is probably best-known for his work on the precautionary principle and its uses in ethical and policy debates. This was the central topic of his 2011 book The Price of Precaution and the Ethics of Risk. We talk about the problems with the practical application of the precautionary principle and how they apply to the debate about existential risk. You can download the episode here or listen below.

You can also subscribe on Apple Podcasts, Stitcher and a variety of other podcasting services (the RSS feed is here).

Show Notes

  • 0:00 - Introduction
  • 1:35 - What is the precautionary principle? Where did it come from?
  • 6:08 - The key elements of the precautionary principle
  • 9:35 - Precaution vs. Cost Benefit Analysis
  • 15:40 - The Problem of the Knowledge Gap in Existential Risk
  • 21:52 - How do we fill the knowledge gap?
  • 27:04 - Why can't we fill the knowledge gap in the existential risk debate?
  • 30:12 - Understanding the Black Hole Challenge
  • 35:22 - Is it a black hole or total decisional paralysis?
  • 39:14 - Why does precautionary reasoning have a 'price'?
  • 44:18 - Can we develop a normative theory of precautionary reasoning? Is there such a thing as a morally good precautionary reasoner?
  • 52:20 - Are there important practical limits to precautionary reasoning?
  • 1:01:38 - Existential risk and the conservation of value
 


Thursday, September 12, 2019

Are robots like animals? In Defence of the Animal-Robot Analogy


Via Rochelle Don on Flickr


People dispute the ontological status of robots. Some insist that they are tools: objects created by humans to perform certain tasks — little more than sophisticated hammers. Some insist that they are more than that: that they are agents with increasing levels of autonomy — now occupying some liminal space between object and subject. How can we resolve this dispute?

One way to do this is by making analogies. What is it that robots seem to be more like? One popular analogy is the animal-robot analogy: robots, it is claimed, are quite like animals and so we should model our relationships with robots along the lines of the relationships we have with animals.

In its abstract form, this analogy is not particularly helpful. ‘Animal’ denotes a broad class. When we say that a robot is like an animal, do we mean it is like a sea slug or like a chimpanzee, or something else? Also, even if we agree that a robot is like a particular animal (or sub-group of animals), what significance does this actually have? People disagree about how we ought to treat animals. For example, we think it is acceptable to slaughter and experiment with some, but not others.

The most common animal-robot analogies in the literature tend to focus on the similarities between robots and household pets and domesticated animals. This makes sense. These are the kinds of animals with whom we have some kind of social relationships and upon whom we rely for certain tasks to be performed. Consider the sheep dog who is both a family pet and a farmyard helper. Are there not some similarities between it and a companion robot?

As seductive as this analogy might be, Deborah Johnson and Mario Verdicchio argue that we should resist it. In their paper “Why robots should not be treated like animals” they accept that there are some similarities between robots and animals (e.g. their ‘otherness’, their assistive capacity, the fact that we anthropomorphise and get attached to them etc.) but also argue that there are some crucial differences. In what follows I want to critically assess their arguments. I think some of their criticisms of the animal-robot analogy are valid, but others less so.


1. Using the analogy to establish moral status
Johnson and Verdicchio look at how the analogy applies to three main topics: the moral status of robots, the responsibility/liability of robots, and the effect of human-robot relationships on human relationships with other humans. Let’s start by looking at the first of those topics: moral status.

One thing people are very interested in when it comes to understanding robots is their moral status. Do they or could they have the status of moral patients? That is to say, could they be objects of moral concern? Might we owe them a duty of care? Could they have rights? And so on. Since we ask similar questions about animals, and have done for a long time, it is tempting to use the answers we have arrived at as a model for answering the questions about robots.

Of course, we have to be candid here. We have not always treated animals as though they are objects of moral concern. Historically, it has been normal to torture, murder and maim animals for both good reasons (e.g. food, biomedical experimentation) and bad (e.g. sport/leisure). Still, there is a growing awareness that animals might have some moral status, and that this means they are owed some moral duties, even if this doesn’t quite extend to the full suite of duties we owe to an adult human being. The growth in animal welfare laws around the world is testament to this. Given this, it is quite common for robot ethicists to argue that robots, due to their similarities with animals, might be owed some moral duties.

Johnson and Verdicchio argue that this style of argument overlooks the crucial difference between animals and robots. This difference is so crucial that they repeat it several times in the article, almost like a mantra:

Robots are machines. Animals are sentient organisms, that is, they are capable of perception and they feel, whereas robots do not, at least not in the important sense in which animals do [they acknowledge in a footnote that roboticists sometimes talk about robots sensing and feeling things but then argue that this language is being used in a metaphorical sense]. 
(Johnson and Verdicchio 2018, pg 4 of the pre-publication version).
The problem is that robots do not suffer and even those of the future will not suffer. Yes, future robots might have some states of being that could be equated with suffering [refs omitted] but, futuristic thinking leaves it unclear what—other than metaphorical representation—it could mean to say that a robot suffers. Thus, the animal–robot analogy doesn’t work here. Animals are sentient beings and robots are not. 
(Johnson and Verdicchio 2018, 4-5)
Robots of today do not have sentience or consciousness and do not suffer. Robots of the future might have characteristics that are equated with sentience, suffering, and consciousness, but if these features are going to be independent of each other…they will be fundamentally different from what humans and (some) animals have. It is the capacity to suffer that drives a wedge between animals and robots when it comes to moral status. 
(Johnson and Verdicchio 2018, 5)

I quote these passages at some length because they effectively summarise the argument the authors make. It is pretty clear what the reasoning is:


  • (1) Animals do suffer/have sentience or consciousness.

  • (2) Robots cannot and will not suffer or have sentience or consciousness (even if it is alleged that robots do have those capacities, the terms will be applied metaphorically to the case of robots).

  • (3) The capacity to suffer or have sentience or consciousness is the reason why animals have moral status.

  • (4) Therefore, the robot-animal analogy is misleading, at least when used to ground claims about robot moral status.



I find this argumentation relatively weak. Beyond the categorical assertion that animals are sentient and robots are not, we get little in the way of substantive reasoning. Johnson and Verdicchio seem to just have a very strong intuition or presumption against robot sentience. This sounds like a reasonable position since, in my experience, many people share this intuition. But I am sceptical of it. I’ve outlined my thinking at length in my paper ‘Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism’.

The gist of my position is this. A claim to the effect that another entity has moral status must be justified on the basis of publicly accessible evidence. If we grant that sentience/consciousness grounds moral status, we must then ask: what publicly accessible evidence warrants our belief that another entity is sentient/conscious? My view is that the best evidence — which trumps all other forms of evidence — is behavioural. The main reason for this is that sentience is inherently private. Our best window into this private realm (imperfect though it may be) is behavioural. So if sentience is going to be a rationally defensible basis for ascribing moral status to others, we have to work it out with behavioural evidence. This means that if an entity behaves as if it is conscious or sentient (and we have no countervailing behavioural evidence) then it should be treated as having moral status.

This argument, if correct, undercuts the categorical assertion that robots are not and cannot be sentient (or suffer etc.), as well as the claim that any application of such terminology to a robot must be metaphorical. It suggests that this is not something that can be asserted in the abstract. You have to examine the behavioural evidence to see what the situation is: if robots behave like sentient animals (granting, for the moment, that animals are sentient) then there is no reason to deny them moral status or to claim that their sentience is purely metaphorical. Since we do not have direct epistemic access to the sentience of humans or other animals, we have no basis by which to distinguish between ‘metaphorical’ sentience and ‘actual’ sentience, apart from the behavioural.
This does not mean, of course, that robots as they currently exist have moral status equivalent to animals. That depends on the behavioural evidence. It does mean, however, that the chasm between animals and robots with respect to suffering and sentience is not, as Johnson and Verdicchio assert, unbridgeable.

It is worth adding that this is not the only reason to reject the argument. To this point the assumption has been that sentience or consciousness is the basis of moral status. But some people dispute this. Immanuel Kant, for instance, might argue that it is the capacity for reason that grounds moral status. It is because humans can identify, respond to and act on the basis of moral reason that they are owed moral duties. If robots could do the same, then perhaps they should be afforded moral status too.

To be fair, Johnson and Verdicchio accept this point and argue that it is not relevant to their focus since people generally do not rely on an analogy between animals and robots to make such an argument. I think this is correct. Despite the advances in thinking about animal rights, we do not generally accept that animals are moral agents capable of identifying and responding to moral reasons. If robots are to be granted moral status on this basis, then it is a separate argument.


2. Using the analogy to establish rules for robot responsibility/liability
A second way in which people use the animal-robot analogy is to develop rules for robot responsibility/liability. The focus here is usually on domesticated animals. So imagine you own a horse and you are guiding it through the village one day. Suddenly, you lose your grip and the horse runs wild through the farmers’ market, causing lots of damage and mayhem in its wake. Should you be legally liable for that damage? Legal systems around the world have grappled with this question for a long time. The common view is that the owner of an animal is responsible for the harm done by the animal. This is either because liability is assigned to the owner on a strict basis (i.e. they are liable even if they were not at fault) or on the basis of negligence (i.e. they failed to live up to some standard of care).
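The two liability regimes mentioned here can be captured as a simple decision rule. This is only a toy sketch; real tort doctrine involves many further conditions (causation, foreseeability, defences), which are omitted:

```python
# Toy sketch of the two liability regimes for animal (or robot) owners.
# "strict": the owner is liable for harm regardless of fault.
# "negligence": the owner is liable only if they breached a standard of care.

def owner_liable(regime, harm_occurred, owner_breached_duty_of_care):
    if not harm_occurred:
        return False
    if regime == "strict":
        return True
    if regime == "negligence":
        return owner_breached_duty_of_care
    raise ValueError(f"unknown regime: {regime}")

# The runaway horse: harm occurred, but the owner took reasonable care.
print(owner_liable("strict", True, False))      # liable anyway
print(owner_liable("negligence", True, False))  # not liable
```

The point of the analogy argument is that whichever rule applies, the owner is not automatically off the hook just because the horse (or robot) acted autonomously.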

Some people argue that a similar approach should be applied to robots. The reason is that robots, like animals, can behave in semi-autonomous and unpredictable ways. The best horse-trainer in the world will not be able to control a horse’s behaviour at all times. This does not mean they should avoid legal liability. Likewise, for certain classes of autonomous robot, the best programmer or roboticist will not be able to perfectly predict and control what the robot will do. This does not mean they should be off the hook when it comes to legal liability. Schaerer et al (2009) are the foremost proponents of this ‘Robots as Animals’ framework. As they put it:

The owner of a semi-autonomous machine should be held liable for the negligent supervision of that machine, much like the owner of a domesticated animal is held liable for the negligent supervision of that animal. 
(2009, 75)

Johnson and Verdicchio reject this argument. Although they agree with the overall conclusion — i.e. that robot manufacturers/owners should not be ‘off the hook’ when it comes to liability — Johnson and Verdicchio argue that the analogy being made between robots and animals is unhelpful because there are crucial differences between robots and animals:

no matter what autonomy is in robots, the robots will have been created entirely by humans. Differently from what happens in genetics, humans do have a complete knowledge of the workings of the electronic circuitry of which a robot’s hardware is comprised, and the instructions that constitute the robot’s software have been written by a team of human coders. Even the most sophisticated artefacts that are able to learn and perfect new tasks, thanks to the latest machine learning techniques, depend heavily on human designers for their initial set-up, and human trainers for their learning process. 
(Johnson and Verdicchio 2018, 7)

They continue to argue that these differences mean we should take a different route to the conclusion that robots manufacturers ought to be liable:

The concepts of strict liability and negligence seem relevant to legal liability for robot behaviour but not because robots are like domesticated animals, but simply because they are manufactured products with some degree of unpredictability. The fundamental difference between animals and robots—that one is a living organism and the other a machine—makes analogies suspect…In the case of animals, owners exert their influence through training of a natural entity; in the case of robots, manufacturers exert their influence in the creation of robots and they or others (those who buy the robots) may also exert influence via training. For this, animals are not a good model. 
(Johnson and Verdicchio 2018, 7)

I have mixed feelings about this argument. One minor point I would make is that I suspect the value of the animal-robot analogy will depend on the dialectical context. If you are talking to someone who thinks that robot manufacturers ought not to be liable because robots are autonomous (or semi-autonomous), then the analogy might be quite helpful. You can disarm their reasoning by highlighting the fact that we already hold the owners of autonomous/semi-autonomous animals liable. This might cause them to question their original judgment and lead them toward the conclusion preferred by Johnson and Verdicchio. So the claim that the analogy is unhelpful or obfuscatory does not strike me as always true.

More seriously, the argument Johnson and Verdicchio make rests on what are, for me, some dubious assumptions. Foremost among them are the assumptions that (a) there is an important difference between training a natural entity and designing, manufacturing and training an artificial entity; (b) we have complete knowledge of robot hardware (and lack complete knowledge of animal hardware); and (c) this knowledge and its associated level of control makes a crucial difference when it comes to assigning liability. Let’s consider each of these in more detail.

The claim that there is some crucial difference between a trained natural entity and a designed/manufactured/trained artificial entity is obscure to me. The suggestion elsewhere in the article is that an animal trainer is working with a system (the biological organism) that is a natural given: no human was responsible for evolving the complex web of biological tissues and organs (etc) that give the animal its capacities. This is very different from designing an artificial system from scratch.

But why is it so different? The techniques and materials needed to create a complex artificial system are also given to us: they are the product of generations of socio-technical development and not the responsibility of any one individual. Perhaps biological systems are more complex than the socio-technical system (though I am not sure how to measure complexity in this regard) but I don’t see why that is a crucial difference. Similarly, I would add that it is misleading to suggest that domesticated animals are natural. They have been subject to artificial selection for many generations and will be subject to more artificial methods of breeding and genetic engineering in the future. Overall, this leads me to conclude that the distinction between the natural and the artificial is a red herring in this debate.

The more significant difference probably has to do with the level of knowledge and control we have over robots vis-a-vis animals. Prima facie, it is plausible to claim that the level of knowledge and control we have over an entity should affect the level of responsibility we have for that entity’s activities, since both knowledge and control have been seen as central to responsibility since the time of Aristotle.

But there are some complexities to consider here. First, I would dispute the claim that people have complete knowledge of a robot’s hardware. Given that robots are not really manufactured by individuals but by teams, and given that these teams rely heavily on pre-existing hardware and software to assemble robots, I doubt whether the people involved in robot design and manufacture have complete knowledge of their mechanics. And this is to say nothing about the fact that some robotic software systems are inherently opaque to human understanding, which compounds this lack of complete knowledge. More importantly, however, I don’t think having extensive knowledge of another entity’s hardware automatically entails greater responsibility for its conduct. We have pretty extensive knowledge of some animal hardware — e.g. we have mapped the genomes and neural circuitry of some animals like C. elegans — but I would find it hard to say that because we have this knowledge we are somehow responsible for their conduct.

Second, when it comes to control, it is worth bearing in mind that we can have a lot of control over animals (and, indeed, other humans) if we wish to have it. The Spanish neuroscientist Jose Delgado is famous for his neurological experiments on bulls. In a dramatic demonstration, he implanted an electrode array in the brain of a bull and used a radio controller to stop it from charging at him in a bullring. Delgado’s techniques were crude, but he and others have shown that it is possible to use technology to exert a lot of control over the behaviour of animals (and indeed humans) if you so wish (at the limit, you can use technology to kill an animal and shut down any problematic behaviour).

At present, as far as I am aware, we don’t require the owners of domesticated animals to implant electrodes in their brains and then carry around remote controls that would enable them to shut down problematic behaviour. But why don’t we do this? It would be an easy way to address and prevent the harm caused by semi-autonomous animals. There could be several reasons, but the main one would probably be that we think it would be cruel. Animals don’t just have some autonomy from humans; they deserve some autonomy. We can train their ‘natural’ abilities in a particular direction, but we cannot intervene in such a crude and manipulative way.

If I am right, this illustrates something pretty important: the moral status of animals has some bearing on the level of control we both expect and demand of their owners. This means questions about the responsibility of manufacturers for robots cannot be disentangled from questions about their moral status. It is only if you assume that robots do not (and cannot) have moral status that you assume they are very different from animals in this respect. The very fact that the animal-robot analogy casts light on this important connection between responsibility and status strikes me as being useful.


3. Using the analogy to understand harm to others
A third way of using the animal-robot analogy is to think about the effect that our relationships with animals (or robots) have on our relationships with other humans. You have probably heard people argue that those who are cruel to animals are more likely to be cruel to humans. Indeed, it has been suggested that psychopathic killers train themselves, initially, on animals. So, if a child is fascinated by torturing and killing animals there is an increased likelihood that they will transfer this behaviour over to humans. This is one reason why we might want to ban or prevent cruelty to animals (in addition to the intrinsic harm that such cruelty causes to the animals themselves).

If this is true in the case of animals then, by analogy, it might also be true in the case of robots. In other words, we might worry about human cruelty to robots because of how that cruelty might transfer over to other humans. Kate Darling, who studies human-robot interactions at MIT, has made this argument. She doesn’t think that robots themselves can be harmed by the interactions they have with humans, but she worries that human cruelty to robots (simulated though it may be) could encourage and reinforce cruelty more generally.

This style of argument is, of course, common to other debates about violent media. For example, there are many people who argue that violent movies and video games encourage and reinforce cruelty and violence toward real humans. Whatever the merits of those other arguments, Johnson and Verdicchio are sceptical about the argument as it applies to animals and robots. There are two main reasons for this. The first is that the evidence linking violence to animals and violence to humans may not be that strong. Johnson and Verdicchio certainly cast some doubt on it, highlighting the fact that there are many people (e.g. farmers, abattoir workers) whose jobs involve violence (of a sort) to animals but who do not transfer this over to humans. The second reason is that even if there were some evidence to suggest that cruelty to robots did transfer over to humans, there would be ways of solving this problem that do not involve being less cruel to robots. As they put it:

…if it were found to be true that the sight of cruelty to humanoid robots desensitized us to the sight of cruelty in humans or that engaging in cruelty to humanoid robots increased the likelihood that we would be cruel to one another, this would provide some justification for action. The justified action could but need not necessarily be to grant rights to robots. There are at least two different directions that might be taken. One would be to restrict what could be done to humanoid robots and the other would be to restrict the design of robots. 
(Johnson and Verdicchio 2018, 8)

They clarify that the restrictive designs for robots could include ensuring that the robot does not appear too humanoid and does not display any signs of suffering. The crucial point then is that this second option is not available to us in the case of animals. To repeat the mantra from earlier: animals suffer and robots do not. We cannot redesign them to prevent this. Therefore there are independent reasons for banning cruelty to animals that do not apply to robots.

I have written about this style of argument ad nauseam in the past. My comments have focused primarily on whether sexual violence toward robots might transfer over to humans, and not on violence more generally, but I think the core philosophical issues are the same. So, if you want my full opinion on whether this kind of argument works I would suggest reading some of my other papers on it (maybe start with this one and this one). I will, however, say a few things about it here.

First, I agree with Johnson and Verdicchio that the animal-robot analogy is probably superfluous when it comes to making this argument. One reason for this is that there are other analogies upon which to draw, such as the analogy with the violent video games debate. Another reason is that whether or not robot cruelty carries over to cruelty towards humans will presumably depend on its own evidence and not on analogies with animals or violent video games. How we treat robots could be sui generis. Until we have the evidence about robots, it will be difficult to know how seriously to take this argument.

Second, one point I have been keen to stress in my previous work is that it is probably going to be very difficult to get that evidence. There are several reasons for this. One reason is that it is probably going to be very difficult to do good scientific work on the link between human-robot interactions and human-human interactions. We know this from other debates about exposure to violent media. These debates tend to be highly contentious and the effect sizes are often weak. Researchers and funders have agendas and narratives they would like to support. This means we often end up in an epistemically uncertain position when it comes to understanding the effects of such exposure on real world behaviour. This makes sense, since one thing we do know is that the causes of violence are multifactorial. There are many levers that can be pulled to both discourage and encourage violence. At any one time, different combinations of these levers will be activated. To think that one such lever — e.g. violence to robots — will have some outsized influence on violence more generally seems naive.

Third, it is worth noting, once again, that the persuasiveness of Johnson and Verdicchio’s argument hinges on whether you think robots have the capacity for genuine suffering or not. They do not think this is possible. And they are very clear in saying that all appearances of robot suffering must be simulated or deceptive, not real. This is something I disputed earlier on. I think ‘simulations’ (more correctly: outward behavioural signs) are the best evidence we have to go on when it comes to epistemically grounding our judgments about the suffering of others. Consequently, I do not think the gap between robots and animals is as definitive as they claim.

Fourth, the previous point notwithstanding, I agree with Johnson and Verdicchio that there are design choices that roboticists can make that might moderate any spillover effects of robot cruelty. This is something I discussed in my paper on ‘ethical behaviourism’. That said, I do think this is easier said than done. My sense from the literature is that humans tend to identify with and anthropomorphise anything that displays agency. But since agency is effectively the core of what it means for something to be a robot, this suggests that limiting the tendency to over-identify with robots is tantamount to saying that we should not create robots at all. At the very least, I think the suggestions made by proponents of Johnson and Verdicchio’s view — e.g. having robots periodically remind human users that they do not feel anything and are not suffering — need to be tested carefully. In addition to this, I suspect it will be hard to prevent roboticists from creating robots that do ‘simulate’ suffering. There is a strong desire to create human-like robots and I am not convinced that regulation or ethical argumentation will prevent this from happening.

Finally, and this is just a minor point, I’m not convinced by the claim that we will always have design options when it comes to robots that we do not have when it comes to animals. Sophisticated genetic and biological engineering might make it possible to create an animal that does not display any outward signs of suffering (Douglas Adams’s famous thought experiment about the cow that wants to be eaten springs to mind here). If we do that, would that make animal cruelty okay? Johnson and Verdicchio might argue that engineering away the outward signs of suffering doesn’t mean that the animal is not really suffering, but then we get back to the earlier argument: how can we know that?


4. Conclusion
I have probably said too much. To briefly recap, Johnson and Verdicchio argue that the animal-robot analogy is misleading and unhelpful when it comes to (a) understanding the moral status of animals, (b) attributing liability and responsibility to robots, and (c) the likelihood of harm to robots translating into harm to humans. I have argued that this is not true, at least not always. The animal-robot analogy can be quite helpful in understanding at least some of the key issues. In particular, contrary to the authors, I think the epistemic basis on which we ascribe moral status to animals can carry over to the robot case, and this has important consequences for how we attribute liability to actions performed by semi-autonomous systems.




Friday, September 6, 2019

Is there a liberal case for no-platforming?




No platforming is the practice of denying speakers the opportunity to speak at certain venues because of the views they espouse or are expected to espouse. De-platforming is the related practice of trying to remove or prevent a speaker from speaking, after they have been invited to speak or have begun to speak. In this context, ‘speaking’ can be interpreted broadly to include any opportunity given to someone to express their views to an audience (for example, a newspaper opinion writer could be de-platformed).

Although both practices can occur anywhere that speakers are provided with a platform — witness the 2018 controversy about Steve Bannon at the New Yorker festival — they are most commonly associated with university campuses. There have been several well-known incidents over the past few years in which protesters (usually student groups) have tried (sometimes with limited success) to deny speakers a platform on university campuses. Some of the best known examples include: Milo Yiannopoulos at UC Berkeley, Charles Murray at Middlebury College, Maryam Namazie at Goldsmiths University, Ayaan Hirsi Ali at Brandeis University, and Germaine Greer at Cardiff University.

If they succeed, both no platforming and de-platforming are, in effect, partial forms of censorship. They do not completely prevent certain points of view from being expressed (there are, after all, many platforms), but they do prevent them from being expressed at specific times and places. In liberal thought, there is a general presumption against content-based censorship of this type. The most famous defence of free speech in the Western tradition comes from John Stuart Mill. In chapter 2 of On Liberty, Mill argued that we ought to allow for the expression of all points of view because this was a way of getting at the truth. To justify content-based censorship we have to assume a level of epistemic authority on the part of the censors that we should be inclined to doubt. Academic institutions, in particular, should be reluctant to do this since they are in the business of getting at the truth.

That said, Mill did accept that certain forms of speech could be censored or prohibited if they caused clear and identifiable harm to others. This concession creates some practical problems. Many of the recent debates about no platforming and de-platforming have accepted this Millian premise and have argued that the forms of speech in dispute do cause clear and identifiable harms to others. Thus, for example, Charles Murray’s views about race and IQ are said to be harmful to African American students on college campuses, and Germaine Greer’s views about transgender identity are said to be harmful to transgender students. In other words, no platforming has been defended in essentially Millian terms: the defenders accept that there is a presumption in favour of free speech but that this presumption is overturned in these cases because the speech acts in question do cause harm.

These arguments are controversial. ‘Harm’ is an inherently fuzzy concept. It is easily stretched and tightened to suit the circumstance. Must the harm be physical or can it be psychological too? Must the harm be directly caused by the speech or can it be indirectly caused through the incitement of third parties? There are no bright lines here and reasonable people can and do disagree about where to draw them. Some people try to narrow the definition as much as possible, others, often with an eye towards tolerance and equality, try to broaden it.

This feature of the debate about free speech and no platforming troubles Robert Simpson and Amia Srinivasan. In their article ‘No Platforming’, they argue that the standard liberal arguments get sucked into interminable and difficult-to-resolve debates about which kinds of speech are legitimately provocative and which are illegitimately harmful. This prompts them to consider whether there might be another way to resolve the issue on lines that are acceptable to proponents of traditional liberal thought. They argue that there might be. Using the concept of academic freedom, they suggest that there could be some legitimate liberal grounds on which to favour no platforming on university campuses.

In what follows, I want to critically analyse their argument. I will suggest that their proposal, though intriguing, fares little better than the Millian one they seek to supplant. I will conclude by arguing that questions concerning which kinds of speech ought to be given a platform are difficult to resolve on principled grounds. This is consistent with my previous analysis of Mill’s argument.


1. Academic Freedom and No Platforming
Simpson and Srinivasan’s argument hinges on a particular interpretation of what the purpose of a university is and the kinds of speech protection that are essential to that purpose. ‘Academic freedom’ is the conceptual label applied to the set of speech-governing rules and norms that serves this purpose.

What then is the purpose of the university and the nature of academic freedom? One view, which they dismiss, is that universities are committed to the pursuit of truth in all its forms and that speech on a university campus ought to be regulated in the same manner as speech in the public square. On this view, academic freedom can cover all speech by members of a university community, including controversial extramural speech on issues of social and political morality, unrelated to the disciplinary expertise of the academics in question. This view is predominant in public universities in the US, but is expansive and seems tantamount to saying that there is no distinctive purpose to a university other than to provide a forum for debate and conversation of all kinds. Another view, which they also dismiss, is more deflationary and holds that academic freedom is just whatever academics need it to be in order to do their work in a congenial manner. This view would obviously make it very difficult to have speech principles of any kind. Academic freedom is just a kind of power politics: whoever is in power gets to determine what can be said and what cannot be said.

In lieu of these accounts, Simpson and Srinivasan favour an account of academic freedom that was first developed by Robert Post. They do not do so because they think this is the best or most defensible account of academic freedom. They do so because they think Post’s account is reasonable and consistent with mainstream liberal principles. This somewhat non-committal endorsement of Post’s account is consistent with their rhetorical strategy, which is to say ‘imagine you were a liberal; if so, is there any way you could get on board with some forms of no platforming?’ This allows them to defend no platforming from a liberal perspective without themselves committing to that liberal perspective.

What does Post’s account of academic freedom say? It says that universities are not like the public sphere. Universities serve particular teaching and research missions. These teaching and research missions are guided by specific disciplinary norms concerning the style and content of communication. For example, if you are a scientist there is a particular methodology that you are expected to follow and a set of topics for teaching and research that fall inside the acceptable boundaries of that methodology. A physicist who teaches that the Earth is flat or that there is a perpetual motion machine is saying something that is not consistent with the communicative norms of their discipline. Similarly, a historian who denies the evidence of the Holocaust, and refuses to engage with the critics of their view, is not following the communicative norms of their discipline.

Academic freedom, for Post, requires that we accept that members of the relevant academic disciplines act as independent epistemic gatekeepers for their disciplines. They get to decide what the relevant methodologies and standards of evidence are. This means that there is inevitably going to be some content-based suppression of ideas. Some stuff just isn’t going to be relevant to the research and teaching missions of the different disciplines; and some stuff is going to be counter-productive to those missions. This is not to say that there cannot be growth and change within a discipline. Once upon a time, physicists believed in the existence of the luminiferous ether, nowadays they do not. But this growth and change happens through reasoned debate and argument among the independent epistemic gatekeepers.

This account of academic freedom can justify at least some forms of no platforming and de-platforming. As the epistemic gatekeepers, academics are entitled to deny certain speakers platforms or to protest the platforms given to others. If a creationist is invited to speak at a biology department, the academics within that department are within their rights to try to disinvite or deplatform them. This is entirely consistent with the mission of the biology department. Indeed, academics do, clearly, deny people platforms all the time along these lines; it’s just that most of the time this goes unobserved because we don’t know who it is they are not inviting.

Conversely — and Simpson and Srinivasan are keen to emphasise this point — the academics who serve as epistemic gatekeepers can also argue that someone has a right to speak at a university, even if their views are controversial, if they are consistent with the standards within the relevant discipline. So, for example, although there are some university administrators and politicians that might like to deny a platform to certain climate scientists because of what they say about climate change, the gatekeepers within the relevant academic disciplines can insist that they be given a platform in the interests of academic freedom.

Who gets to play this epistemic gatekeeping function? Is it just professors or permanent members of academic staff? They are certainly the most plausible candidates but Simpson and Srinivasan argue that others, including graduate students and undergraduate students can play a (lesser?) gatekeeping role. Graduate students are budding members of the relevant disciplines and so clearly have a stake in how the disciplinary standards develop. It is easy enough to make the case for them having some say over who gets a platform and who does not. Undergraduate students are a trickier case but Simpson and Srinivasan argue that they can have a role too. Members of academic disciplines are not epistemically infallible, they can be guilty of narrow-mindedness and groupthink with respect to methods and topics. Undergraduates, because they are less entrenched in the disciplinary norms, can help to spot these flaws. Thus, they can also play some role in setting the standards.

This is just an ‘in principle’ argument. It shows how someone embracing the Postian conception of academic freedom could also accept the legitimacy of certain forms of no platforming. The devil, however, is going to be in the detail. What speakers, specifically, can be denied a platform? What do they say? What are the disciplinary norms? Who should be performing the gatekeeping function in this case? These questions will need to be answered before any actual defence of no platforming becomes persuasive.


2. Criticisms and Concerns
As I said at the outset, Simpson and Srinivasan’s argument is interesting and provocative. There is undoubtedly some truth to it. It is undeniable that academic disciplines do have some epistemic standards and that these standards play a role in determining who is given a platform and who is not. This happens all the time, irrespective of how much controversy these gatekeeping decisions attract. To give a trivial example, I once ran a seminar series on legal philosophy in which I, along with the co-organisers of the series, frequently rejected speakers on the grounds that their papers weren’t sufficiently philosophical or theoretical. Content-based suppression takes place all the time.

Nevertheless, there are some serious problems with the argument, many of which are identified and discussed by Simpson and Srinivasan in a reasonably persuasive way. I want to review these problems here.

First, as Simpson and Srinivasan point out, there are going to be easy cases and hard cases. The Flat-earther and Holocaust denier are easy cases. Their views obviously do not comply with the standards of the relevant academic disciplines. The hard cases arise when the standards within the relevant disciplines are undergoing some kind of change or flux. In other words, when the standards are being debated with a view to the potential exclusion or inclusion of certain points of view. They single out the case of Germaine Greer as an example of a hard case. Germaine Greer was protested for her ‘trans-exclusionary’ views. Are such views still reasonably on the table within relevant academic disciplines (philosophy, gender studies etc) or are they not? This is something that is being actively debated. Given the relatively recent and underdeveloped nature of this debate, Simpson and Srinivasan conclude that Greer could not be de-platformed in a way that is consistent with the principles of academic freedom:

Some scholars with apparent institutional and disciplinary credibility – in fields like cultural studies, sociology, anthropology, philosophy, gender studies, and queer studies – will insist that the questions of what a woman is and whether trans women qualify are central to feminist inquiry. Others scholars in those same fields, with similar credentials, will insist that the question has been settled and is no longer reasonably treated as open to inquiry. Given this backdrop, it is unclear whether the no platforming of someone like Greer, who denies the womanhood of trans women, could be defended as consistent with respect for academic freedom under the account we have presented. The fact that there is live controversy over the relevant standards in the relevant disciplines suggests, on its face, that there are not any authoritative disciplinary standards that could be invoked in order to characterize Greer’s no platforming as a case of someone being excluded for lacking disciplinary competence. 
(Simpson and Srinivasan 2018, 17-18)


They do, however, go on to say that this might change in the future. It might eventually be the case that there is a disciplinary consensus that blocks the expression of the trans-exclusionary view.

Second, as Simpson and Srinivasan also point out, there are different standards across different disciplines and hence sometimes there are difficult inter-disciplinary disputes about what can be expressed. The so-called hard sciences are commonly thought to have clear and definitive epistemic standards that rule certain kinds of speech in and out (usually on methodological grounds as opposed to content grounds). The softer sciences and humanities have less definitive standards. Indeed, some disciplines appear to have few if any standards. In philosophy, for example, all manner of controversial views are regularly debated. Some philosophers deny the existence of numbers, universals, the self, morality and so on. Some philosophers defend infanticide and anti-natalism. All these views are thought to be consistent with the disciplinary standards of philosophy. If we follow a ‘lowest common standard’ approach to what can be expressed on a university campus, then it might be the case that no views can be de-platformed due to the openness of philosophy to all views, even if other disciplines disagree.

Simpson and Srinivasan argue that it is not quite true to say that anything goes in a discipline like philosophy — there are still standards of rational inquiry and logical argument that must be upheld — but they seem to concede that there isn’t a good answer as to what to do about this issue:

One way to address these hard cases would be to say that any speaker seen as within the bounds of disciplinary competence by at least one discipline cannot be legitimately no platformed for the sake of upholding the disciplinary standards of any other discipline. But then the worry is that in protecting the disciplinary integrity of philosophy – as a discipline resistant to seeing any view as rationally beyond the pale – we impair other disciplines’ attempts to police their own intellectual standards. 
(Simpson and Srinivasan 2018, 20)

They then go on to say that the existence of difficult cases like this does not undermine the value of the Postian-approach. Indeed, they suggest that the Postian approach may reveal what really makes these hard cases so hard, i.e. that they are not disputes about what kinds of speech are harmful (or not) but rather about what kinds of speech meet the relevant academic standards.

My own view is that there is a much more serious problem going on here than they seem willing to acknowledge. Even in the hard sciences, there are long-standing controversies about which views are accepted within the disciplinary norms and which views are not. To give a non-political/sociological example, theoretical physicists were, for a long period in the 20th century, unwilling to debate the correct interpretation of quantum theory. The few who did found themselves ridiculed and ostracised by their peers, often to the detriment of their careers (the history of this is discussed in Adam Becker’s book What is Real?). Looking back, there is now a slowly growing realisation that this suppression of work on quantum foundations was a mistake. People realise that there is something rotten at the heart of quantum theory and that this needs to be resolved. There are similarly controversial cases within other disciplines. For example, the recent replication crises in biomedical science and psychology (and other experimental disciplines) have revealed serious, long-standing flaws in the disciplinary norms of those fields: some kinds of studies are prioritised beyond their true academic value, and others are suppressed or ignored.

Given these historical mistakes by the epistemic gatekeepers, it doesn’t seem obvious to me that we should want anything other than a Millian approach to speech on university campuses. At the very least, the historical failure of academic disciplines to set the right epistemic standards seems to warrant a strong presumption against no platforming on content-based grounds. Censorship on purely methodological grounds might be more reasonable, but as the example of the replication crisis shows, this would seem to warrant at most the minimal epistemic standards imposed by a discipline such as philosophy, and not anything more robust and exclusionary.

Another way of putting this point is that if we accept that principles of academic freedom should determine what can be said on a university campus, it’s not clear that we end up anywhere all that different from the Millian position that Simpson and Srinivasan criticise at the start of their article. We end up with equally controversial and equally difficult-to-resolve disputes about what can be censored or not. The one advantage that the academic freedom approach has over the Millian position is that we focus on epistemic standards and not on harmfulness. But is that really a clear advantage? One could argue that the Millian position is more reasonable since it accepts that epistemic standards are too controversial a basis for censorship and focuses instead on non-epistemic reasons for censorship.

In addition to this, I also worry that the position being defended by Simpson and Srinivasan assumes too narrow a view of the purpose of a university and the members of its community. Should everything said on a university campus be beholden to the standards of academic disciplines? Universities do many things. They are engaged in teaching and research, to be sure, but they are also social communities for the students who attend them. For example, I have worked at universities with Quidditch societies for students. Quidditch is, obviously, a fictional magical game taken from the Harry Potter series. Suppose the Quidditch society invites a speaker who seems to take the fiction seriously. They talk about flying brooms and magic spells with seeming earnestness. Could the physics faculty rightfully de-platform this speaker on the grounds that what they are saying is not consistent with the disciplinary norms of physics? I find that deeply counterintuitive, and not because I think the Quidditch society has its own epistemic standards that it can use to regulate speech. The Quidditch society isn’t connected to the research and teaching mission of the physics department. It serves another purpose, one that the physics department has no right to overturn.

There is a serious point lurking here. Many of the controversial cases of no platforming and de-platforming arise from student societies inviting speakers to university campuses. Sometimes these student societies have purposes that are intimately linked to specific academic disciplines, but oftentimes they do not. Student religious societies or political societies or sports societies, for example, do not serve purposes that are obviously linked to academic disciplines. Why should principles of academic freedom constrain what gets said at the platforms provided by these student societies? Simpson and Srinivasan do allude to this issue in a footnote when comparing no platforming of crank ‘experts’ at research seminars vis-a-vis student societies. Here is what they say, in full:

It is a more complicated case if the Holocaust denier or oil company shill is a credentialed expert in the relevant discipline. If they were invited by their disciplinary peers to address an academic research seminar – say, if the history department unwittingly invited a crank, and then opted not to rescind the invitation – then their no platforming wouldn’t be acceptable under Post’s account. If they were invited to address a student club or the like, then the case for the acceptability of them being no platformed would be stronger, all else being equal. At minimum, it cannot be the case that the status of these speakers as disciplinary experts entails that their academic freedom (or that academic freedom per se) is infringed just because a particular student club has not given them a platform to espouse their views. 
(Simpson and Srinivasan 2018, fn 25)

The phrase ‘all else being equal’ might be doing a lot of work here, but my immediate reaction is that the case for no platforming at the student society is only stronger if (a) you accept that student societies are bound by the norms of academic freedom and (b) you assume that students have much less epistemic authority than academics. Both of these assumptions can be questioned, particularly the first.

There could be a separate issue here as to whether certain kinds of student societies should be allowed to exist. Maybe universities shouldn’t allow students to set up groups (with institutional approval) whose purposes are inconsistent with academic research and teaching missions. But once they do allow them, I find it hard to accept that those groups must all abide by the principles of academic freedom. If that is right, then it is difficult to see how speech can be regulated at such societies other than by applying something like the Millian harm principle.


3. Conclusion
To sum up, Simpson and Srinivasan try to use the concept of academic freedom to justify (in principle) some forms of no platforming. More precisely, they use Robert Post’s account of academic freedom to argue that academic disciplines serve particular research and teaching missions and are entitled to use certain epistemic standards to regulate speech in a way that serves those missions. While this is an interesting proposal, I think its practical difficulties are more severe than Simpson and Srinivasan seem willing to acknowledge.