Tuesday, April 6, 2021

From Mind-as-Computer to Robot-as-Human: Can metaphors change morality?

Over the past three years, I have returned to one question over and over again: how does technology reshape our moral beliefs and practices? In his classic study of medieval technology, Lynn White Jr argues that simple technological changes can have a profound effect on social moral systems. Consider the stirrup. Before this device was created, mounted warriors had to rely largely on their own strength (the “pressure of their knees” to use White’s phrase) to launch an attack while riding horseback. The warrior’s position on top of the horse was precarious and he was limited to firing a bow and arrow or hurling a javelin.

The stirrup changed all that:

The stirrup, by giving lateral support in addition to the front and back support offered by pommel and cantle, effectively welded horse and rider into a single fighting unit capable of violence without precedent. The fighter’s hand no longer delivered the blow: it merely guided it. The stirrup thus replaced human energy with animal power, and immensely increased the warrior’s ability to damage his enemy. Immediately, without preparatory steps, it made possible mounted shock combat, a revolutionary new way of doing battle. 
(White 1962, p 2)


This had major ripple effects. It turned mounted knights into the centrepiece of the medieval army. And since the survival and growth of medieval society was highly dependent on military prowess, these knights needed to be trained and maintained. This required a lot of resources. According to White, the feudal manor system, with its associated legal and moral norms relating to property, social hierarchy, honour and chivalry, was established in order to provide knights with those resources.

This is an interesting example of technologically induced social moral change. The creation of a new technology afforded a new type of action (mounted shock combat) which had significant moral consequences for society. The technology needed to be supported and sustained, but it also took on new cultural meanings. Mounted knights became symbols of strength, valour, honour, duty and so forth. They were celebrated and rewarded. The entire system of social production was reoriented to meet their needs. There is a direct line that can be traced from the technology through to this new ideological moral superstructure.

Can something similar happen with contemporary technologies? Is it already happening? In the remainder of this article I want to consider a case study: social robots and the changes they may induce in our moral practices. I will argue that there is a particular mechanism through which they may change our moral practices that is quite subtle but significant. Unlike the case of the stirrup — in which the tool changed the social moral order because of the new possibilities for action that it enabled — social robots might change the social moral order by changing the metaphors that humans use to understand themselves. In particular, I want to argue that the more humans come to view themselves as robot-like (as opposed to robots being seen as human-like), the more likely it is that we will adopt a utilitarian mode of moral reasoning. I base this argument on the theory of hermeneutic moral mediation and some recent findings in human-robot interaction. This argument is highly speculative but, I believe, worth considering.

Terminological note: By ‘robot’ I mean any embodied artificial agent with the capacity to interpret information from its environment and act in response to that information. By ‘social robot’ I mean any robot that is integrated into human social practices and responds to human social cues and behaviours, e.g. care robots, service robots. Social robots may be very human-like in appearance or behaviour, but they need not be. For example, a robot chef or waiter in a restaurant might be very un-human-like in appearance but may still respond dynamically and adaptively to human social cues and behaviours.

1. Hermeneutic Moral Mediation

In arguing that technology might alter human moral beliefs and practices, it is important to distinguish between two different understandings of morality. On the one hand, there is ‘ideal morality’. This is the type of morality studied by moral philosophers and ethicists. It consists of claims about what humans really ought to value and ought to do. On the other hand, there is ‘social morality’. This is the type of morality practiced by ordinary people. It consists in people’s beliefs about what they ought to value and what they ought to do. Social morality and ideal morality may not align with each other. Indeed, moral philosophers often lament the fact that they don’t. In considering how technology might change human morality, I am primarily interested in how it might change social morality, not ideal morality.

That said, there can be connections between ideal morality and social morality. Obviously, moral philosophers often use claims about ideal morality to criticise social morality. The history of moral reform is replete with examples of this, including anti-slavery arguments in the Enlightenment era, pro-suffragette arguments in the late 1800s, and pro-same-sex marriage arguments in the late 1990s and early 2000s. But changes in social morality may also affect ideal morality, or at least our understanding of ideal morality. If people adopt a certain moral practice in reality, this can encourage moral philosophers to reconsider their claims about ideal morality. There is often a (suspicious?) correlation between changes in social morality and changes in theories of ideal morality.

How can technology induce changes in social morality? There are several theories out there. Peter-Paul Verbeek’s theory of technological moral mediation is the one I will rely on in this article. Verbeek argues that technologies change how humans relate to the world and to themselves. To use the academic jargon: they mediate our relationships with reality. This can have moral effects.

Verbeek singles out two forms of mediation, in particular, for their moral impact: (i) pragmatic mediation and (ii) hermeneutic mediation. Pragmatic mediation arises when technology adds to, or subtracts from, the morally salient choices in human life. This forces us to consider new moral dilemmas and new moral questions. The impact of the stirrup on medieval warfare is an example of this. It made mounted knights more effective in battle and military commanders were thus forced to decide whether to use these more effective units. Given the overwhelming value attached to military success in that era, their use became a moral necessity: to not use them would be morally reckless and a dereliction of duty. Hermeneutic mediation is different. It arises when technology changes how we interpret the world, adding a new moral perspective to our choices. Verbeek argues that obstetric ultrasound is a classic example of hermeneutic moral mediation in action because the technology presents the foetus-in-utero to us as an independent being, situated inside but still distinct from its mother, and capable of being treated or intervened upon by medical practitioners. This alters our moral understanding of pre-natal care.

[If you are interested, I wrote a longer explanation of Verbeek’s theory here]

The widespread diffusion of social robots will undoubtedly pragmatically mediate our relationship with the world. We will face choices as to whether to deploy care robots in medical settings, whether to outsource tasks to robots that might otherwise have been performed by humans, and so on. But it is the hermeneutic effects of social robots that I want to dwell on. I think the diffusion of social robots could have a profound impact on how we understand ourselves and our own moral choices.

To make this point, I want to consider the history of another technology.

2. The Mind-as-Computer Metaphor

The computer was the defining technology of the 20th century. It completely reshaped the modern workplace, from the world of high finance, to scientific research, to graphic design. It also enabled communication and coordination at a global scale. In this way, the computer has pragmatically mediated our relationship with the world. We now think and act through the abilities that computers provide. Should I send that email or not? Should I use this bit of software to work on this problem?

Not only has the computer pragmatically mediated our relationship to the world, it has also hermeneutically mediated it. We now think of many processes in the natural world as essentially computational. Nowhere is this more true than in the world of cognitive science. Cognitive scientists try to figure out how the human mind works: how is it that we perceive the world, learn from it and act in it? Cognitive scientists have long used computers to help model and understand human cognition. But they go further than this too. Many of them have come to see the human mind as a kind of computer — to see thinking as a type of computation.

Gerd Gigerenzer and Daniel Goldstein explore this metaphorical turn in detail in their article ‘The Mind as Computer: The Birth of a Metaphor’. They note that it is not such an unusual turn of events. Scientists have always used tools to make sense of the world. Indeed, they argue that the history of science can be understood, at least in part, as the emergence of theories from the applied use of tools. They call this the ‘tools-to-theories’ heuristic. A classic example would be the development of the mechanical clock. Not long after this device was invented, scientists (or natural philosophers as they were then called) started thinking about physical processes (motion, gravitation etc) in mechanical terms. Similarly, when statistical tools were adopted for use in psychological experimentation in the 1900s, it didn’t take too long before psychologists started to see human psychology as a kind of statistical analysis:

One of the most widely used tools for statistical inference is analysis of variance (ANOVA). By the late 1960s, about 70% of all experimental articles in psychological journals already used ANOVA (Edgington 1974). The tool became a theory of mind. In his causal attribution theory, Kelley (1967) postulated that the mind attributed a cause to an effect in the same way that a psychologist does — namely, by performing an ANOVA. Psychologists were quick to accept the new analogy between mind and their laboratory tool. 
(Gigerenzer and Goldstein 1996, 132).


The computational metaphor followed a similar trajectory. The story is a fascinating one. As Gigerenzer and Goldstein note, the early developers of the computer, such as Von Neumann and Turing, did work on the assumption that the devices they were building could be modeled on human thought processes (at either a biological or behavioural level). But they saw this as a one-way metaphor: the goal was to build a machine that was something like a human mind. In the 1960s and 70s, the metaphor turned around on itself: cognitive scientists started to see the human mind as a computational machine.

One of the watershed moments in this shift was the publication of Allen Newell and Herbert Simon’s Human Problem Solving in 1972. In this book, Newell and Simon outlined an essentially computational model of how the human mind works. In an interview, Simon documented how, through his use of computers, he started to think of the human mind as a machine that ran programs:

The metaphor I’d been using, of a mind as something that took some premises and ground them up and processed them into conclusions, began to transform itself into a notion that a mind was something that took some program inputs and data and had some processes which operated on the data and produced some output. 
(quoted in Gigerenzer and Goldstein 1996, 136)


This theory of the mind was initially resisted but, as Gigerenzer and Goldstein document, when the use of computational tools to simulate human cognition became more widespread it was eventually accepted by the mainstream of cognitive scientists. So much so that some cognitive scientists find it hard to see the mind as anything other than a computer.

3. The Human-as-Robot Metaphor

What significance does this have for robots and the moral transformations they might initiate? Well, in a sense, the robot is a continuation of the mind-as-computer metaphor. Robots are, after all, essentially just embodied computational devices, capable of receiving data inputs and processing them into actions. If the mind is seen as a computer, is it not then natural to see the whole embodied human as something like a robot?

We can imagine a similar metaphorical turn to the one outlined by Gigerenzer and Goldstein taking root, albeit over a much shorter timeframe since the computational metaphor is already firmly embedded in popular consciousness. We begin by trying to model robots on humans (already the established practice in social robotics), then, as robots become common tools for understanding human social interactions, the metaphor flips around: we start to view humans as robot-like themselves. This is already happening to some extent and some people (myself included) are comfortable with the metaphor; others much less so.

This thought is not original to me. Henrik Skaug Sætra, in a series of papers, has remarked on the possible emergence of ‘robotomorphy’ in how we think about ourselves. Many people have noted how humans tend to anthropomorphise robots (see e.g. Kate Darling’s work), but as robots become common Sætra argues that we might also tend to ‘robotomorphise’ ourselves. In a paper delivered to the Love and Sex with Robots Conference in December 2020, he remarks:

Roboticists and robot ethicists may similarly lead us to a situation in which all human phenomena are understood according to a computational, mechanistic and behaviourist logic, as this easily allows for the inclusion of robots in such phenomena. By doing so, however, they are changing the concepts. In what follows, our understanding of the concept, and of ourself, changes accordingly. 
(Saetra 2020, 10)*


But how does it change? Sætra has some interesting thoughts on how robot ethicists might use our interactions with robots to encourage a behaviourist understanding of human social interactions. This could lead to an impoverished (he says ‘deficient’) conception of certain human relationships, including loving relationships. Humans might favour efficient and psychologically simple robot partners over their more complex human alternatives. Since I am a defender of ‘ethical behaviourism’, I am, no doubt, one of the people guilty of encouraging this reconceptualisation of human relations (for what it’s worth, I don’t think this necessarily endorses an impoverished conception of love; what I do think is that it is practically unavoidable when it comes to understanding our relationships with others).

Fascinating though that may be, I want to consider another potential transformation here. This transformation concerns the general moral norms to which we are beholden. As moral psychologists have long noted, the majority of humans tend to follow a somewhat confusing, perhaps even contradictory, moral code. When asked to decide what the correct course of action is in moral dilemmas, humans typically eschew a simple utilitarian calculus (avoid the most suffering; do the most good) in favour of a more complex, non-consequentialist moral code. For example, Joshua Greene’s various explorations of human reasoning in trolley-like scenarios (scenarios that challenge humans to sacrifice one person for the greater good) find that humans care about intentions, physical proximity to the victim and other variables that are not linked to the outcomes of our actions. In short, it seems that most people think they have act-related duties — not to intentionally harm another, not to intentionally violate trust, not to intentionally violate an oath or duty of loyalty to another etc — that hold firm even when following these duties will lead to a worse outcome for all. This isn’t always true. There are some contexts in which outcomes are morally salient and override the act-related duties, but these are relatively rare.

Recent investigations into human moral judgment of robots paint a different picture. Studies by Bertram Malle and his colleagues, for example, suggest that we hold robots to different moral standards. In particular, we expect them to adopt a more utilitarian logic in their moral decision-making. They should aim for the greater good and they are more likely to be negatively evaluated if they do not. We do not (as readily) expect them to abide by duties of loyalty or community. Malle et al’s findings have been broadly confirmed by other studies into the asymmetrical moral norms that humans apply to robots. We think robots should focus on harm minimisation; we don’t judge them based on their perceived intentions or biases. For example, Hidalgo et al’s recent book-length discussion of a series of experiments conducted on over 6,000 US subjects, How Humans Judge Machines, seems to be broadly consistent with the moral asymmetry thesis.

Now, I would be the first to admit that these findings are far from watertight. They are, for the most part, based on vignette studies in which people are asked to imagine that robots are making decisions and not on interactions with real-world robots. There are also many nuances to the studies that I cannot do justice to here. For instance, there are some tentative findings suggesting that the more human-like a robot’s actions, and/or the more harm it causes, the more inclined we are to judge it in a human-like way. This might indicate that the asymmetry holds, in part, because we currently dissociate ourselves from robots. 

Nevertheless, I think these findings are suggestive and they do point the way toward a hermeneutic moral effect that the widespread deployment of robots might have. If it becomes common wisdom for us to interpret and understand our own behaviour in a robot-like manner, then we may start to hold ourselves to the same moral standards as machines. In other words, we may start to adopt a more outcome-oriented utilitarian moral framework and start to abandon our obsession with intentions and act-related duties.

Three factors convince me that this is a plausible potential future. First, there is a ready-made community of consequentialist utilitarian activists that would welcome such a moral shift. Utilitarianism has been a popular moral framework since the 1800s and has resonance in corporate and governmental sectors. The effective altruist movement, with its obsessive focus on doing the most good through evidence-based personal decision-making, may also welcome such a shift. Second, there is some initial evidence to suggest that humans adapt their moral behaviours in response to machines. Studies by Ryan Jackson and colleagues on natural language interfaces, for instance, suggest that if a machine asks a clarificatory question implying a willingness to violate a moral norm, humans are more willing to violate the same norm. So we can imagine that if machines both express themselves and act in ways that violate non-consequentialist norms, we may, in turn, be more willing to do the same. Finally, there are now some robot ethicists who encourage us to make the metaphorical flip, i.e. to make human moral behaviour more robot-like rather than making robot moral behaviour more human-like. One interesting example of this comes from Sven Nyholm and Jilles Smids’ article on the ethics of autonomous vehicles in ‘mixed traffic’ scenarios, i.e. where the machines must interact with human-driven vehicles. A common approach to the design of mixed traffic scenarios is to assume that the machines must adapt to human driving behaviour, but Nyholm and Smids argue that sometimes it might be preferable for the adaptation to go the other way. Why? Because machine driving, with its emphasis on harm-minimisation and strict adherence to the rules of the road, might be morally preferable.
More precisely, they argue that if automated driving is provably safer than human driving, then humans face a moral choice: either use automated driving or adapt to automated driving standards in their own behaviour:

If highly automated driving is indeed safer than non-automated conventional driving, the introduction of automated driving thereby constitutes the introduction of a safer alternative within the context of mixed traffic. So if a driver does not go for this safer option, this should create some moral pressure to take extra safety-precautions when using the older, less safe option even as a new, safer option is introduced. As we see things, then, it can plausibly be claimed that with the introduction of the safer option (viz. switching to automated driving), a new moral imperative is created within this domain [for human drivers]. 
(Nyholm and Smids 2018)


If we get more arguments like this, in more domains of human and robot interaction, then the net effect may be to encourage a shift to a robot-like moral standard. This would complete the hermeneutic moral mediation that I am envisaging.

4. Conclusion

None of this is guaranteed to happen nor is it necessarily a good or bad thing. Once we know about a potential moral transformation we can do something about it: we can either speed it up (if we welcome it) or try to shut it down (if we do not). Nevertheless, speculative though it may be, I do think that the mechanism I have discussed in this article is a plausible one and worth taking seriously: by adopting the robot-as-human metaphor we may be inclined to favour a more consequentialist utilitarian set of moral norms.

* I’m not sure if this paper is publicly accessible. Sætra shared a copy with me prior to the conference.


Tuesday, March 30, 2021

Technology and the Value of Trust: Can we trust technology? Should we?

Can we trust technology? Should we try to make technology, particularly AI, more trustworthy? These are questions that have perplexed philosophers and policy-makers in recent years. The EU’s High Level Expert Group on AI has, for example, recommended that the primary goal of AI policy and law in the EU should be to make the technology more trustworthy. But some philosophers have critiqued this goal as being borderline incoherent. You cannot trust AI, they say, because AI is just a thing. Trust can exist between people and other people, not between people and things.

This is an old debate. Trust is a central value in human society. The fact that I can trust my partner not to betray me is one of the things that makes our relationship workable and meaningful. The fact that I can trust my neighbours not to kill me is one of the things that allows me to sleep at night. Indeed, so implicit is this trust that I rarely think about it. It is one of the background conditions that makes other things in my life possible. Still, it is true that when I think about trust, and when I think about what it is that makes trust valuable, I usually think about trust in my relationships with other people, not my relationships with things.

But would it be so terrible to talk about trust in technology? Should we use some other term instead such as ‘reliable’ or ‘confidence-inspiring’? Or should we, as some blockchain enthusiasts have argued, use technology to create a ‘post-trust’ system of social governance?

I want to offer some quick thoughts on these questions in this article. I will do so in four stages. First, I will briefly review some of the philosophical debates about trust in people and trust in things. Second, I will consider the value of trust, distinguishing between its intrinsic and extrinsic components. Third, I will suggest that it is meaningful to talk about trust in technology, but that the kind of trust we have in technology has a different value to the kind of trust we have in other people. Finally, I will argue that most talk about building ‘trustworthy’ technology is misleading: the goal of most of these policies is to obviate or override the need for trust.

1. Can you trust a thing?

Philosophers like to draw a distinction between trust and mere reliance. The distinction is usually parsed like this: trust is something that exists between people; mere reliance can exist between people and things. One person trusts another when they expect the other will act with goodwill towards them and live up to their obligations. Mere reliance involves the expectation that someone or something will follow a predictable pattern of behaviour.

I believe this distinction was first articulated by Annette Baier in her 1986 article ‘Trust and Antitrust’. More recently, Katherine Hawley has made it the centrepiece of her theory of trust. In her article ‘Trust, Distrust and Commitment’ she opens with a section entitled ‘Trust is not Mere Reliance’. Why not? Hawley accepts that this distinction is not one that is respected in ordinary language. Much to the annoyance of philosophers, people do talk about trusting their cars, their appliances and even the ground on which they walk. But Hawley thinks people are wrong to do so. They should make the distinction because trust is a normatively richer concept than mere reliance. 

Here is the core of her argument:

The distinction is important because trust, not mere reliance, is a significant category for normative assessment. Trust, unlike mere reliance, is connected to betrayal. Moreover trustworthiness is clearly distinguished from mere reliability. Trustworthiness is admirable, something to be aspired to and inculcated in our children: it is a virtue in the everyday sense, and perhaps in the richer sense of virtue ethics too. Mere reliability, however, is not. A reliable person is simply predictable: someone who can be relied upon to lose keys, or succumb to shallow rhetoric, is predictable in these respects, but isn't therefore admirable. Even reliability in more welcome respects need not amount to trustworthiness: when you reliably bring too much lunch, you do not demonstrate trustworthiness, and nor would you demonstrate untrustworthiness if you stopped. 
(Hawley 2012, 2)


This is a strange argument. There seem to be two main parts to it. The first is the claim that trust is linked to betrayal while mere reliability is not. I guess that’s true, but that is probably just an artefact of the conceptual vocabulary we use. Betrayal is the flipside or negative of trust: it’s what happens when trust goes bad. There is, presumably, a negative side to reliability too. Unpredictability? Randomness? The second claim is that trustworthiness is admirable and normatively assessable in a way that mere reliability is not. But is that really true? It seems to me that many people think that being 'reliable' is an admirable quality. I often overhear people talking about work colleagues being reliable with the implication being that they exhibit some virtue. It is true that people can be reliably bad, but that doesn’t say much. After all, people can misplace trust in others or their trust can be betrayed. In other words, just as reliability has its ups and downs so too does trust. I can’t help but wonder if the modifier ‘mere’ is doing a lot of the work in this conceptual distinction. If we said ‘mere trust’ instead of ‘trust’ would we have a similarly dismissive attitude?

In any event, neither of these points is particularly pertinent to the issue at hand. Even if there is this important conceptual distinction between trust and mere reliance, it does not follow that you cannot trust a thing. To make that argument, you would have to suggest that there is some condition of trust that is linked to a property that people have but machines or things lack. What might that be?

The typical answer appeals to mental properties. The idea is that trust depends on having a mind. Since things cannot have minds, they cannot be proper objects of trust. Mark Ryan develops this critique in his article ‘In AI We Trust: Ethics, Artificial Intelligence and Reliability’. In the article, Ryan identifies a number of conditions that must be satisfied in order for trust to exist between two entities or parties. They include things like believing that the other party is competent to perform some action or function, having confidence that they will perform those functions, and being vulnerable to them if they do not. Ryan accepts that machines, specifically AIs, can satisfy these three conditions and so a form of ‘rational’ trust in machines (which might be equivalent to what others call ‘reliance’ or ‘confidence’) is possible. But machines cannot satisfy two other critical conditions for the normatively richer form of trust: (i) they cannot be motivated to act towards us out of a sense of goodwill or out of a desire to live up to their moral obligations toward us; and (ii) they cannot betray us.

Without getting too into the details, I think there are problems with the second part of this argument. Ryan’s claim that machines cannot betray appears to be circular. In essence, his position boils down to the claim that you cannot betray someone unless you are a proper object of trust but you cannot be a proper object of trust unless you have the capacity for betrayal. But that just begs the question: how do you become a proper object of trust or develop the capacity for betrayal? 

That leaves the other part of the argument: the claim that machines cannot have the right kinds of motivation or desire for action. What does Ryan say about this? A lot, but here is one critical quote from his paper:

While we may be able to build AI to receive environmental input and stimuli, to detect appropriate responses, and program it to select an appropriate outcome, this does not mean that it is moved by the trust placed in it. While we may be able to program AI to replicate emotional reactions, it is simply a pre-defined and programmed response without possessing the capacity to feel anything towards the trustor. Artificial agents do not have emotions or psychological attitudes for their motives, but instead act on the criteria inputted within their design or the rules outlined during their development [reference omitted] 
(Ryan 2020, p 13)


In other words, we might create machines that look and act like they care about us, or look and act like they are motivated by reasons similar to our own, but this is all just an illusion. They don’t feel anything or care about us. They are just programmed artifacts; not conscious, caring humans. They have no minds, no intentions, no inner life.

If you have read any of my previous work on ‘ethical behaviourism’ (e.g. here, here, here and here), you will know that I do not like this kind of argument. To me, it smacks of an unwarranted form of human exceptionalism and mysterianism: humans have this special property that cannot be replicated by machines, but how that property is instantiated in humans is both mysterious and never fully specified. My own view is that while there are important differences between humans and machines (particularly as they are currently designed and operated) there is no ‘in principle’ reason why machines cannot be motivated to act toward us with goodwill and moral rectitude. After all, the only reason we have to believe that other humans are so motivated toward us is because of how they look and act. Looking and acting, broadly defined, are the epistemic hinge on which perceptions of mindedness turn. We can rely on the same evidence when it comes to machines. If they look and act the right way, we can trust them. Similarly, the notion that machines are somehow different from us because they act on the basis of ‘criteria inputted within their design or the rules outlined during their development’ also strikes me as misleading and false. Humans have also been manufactured through a process of evolution by natural selection and personal biological development. We are constrained by both processes and we act on the basis of decision rules and heuristics acquired during these developmental processes. We may be sophisticated and complicated biological machines, but there is nothing magical about us.

If I’m right, then even on Ryan’s account of trust it is, in principle, possible for us to trust machines. But this assumes that Ryan (and Hawley and Baier) are right in supposing that trust depends on mental properties like goodwill and a desire to do the right thing. What if that is the wrong way to think about trust?

One of the most interesting recent papers on this topic comes from C. Thi Nguyen. It is called ‘Trust as an Unquestioning Attitude’. In it, Nguyen argues that we can have a normatively rich form of trust in objects and things. Indeed, hearkening back to the point made by Hawley, he suggests that references to this non-interpersonal or non-agential form of trust are common in everyday language. He cites several examples of this, including climbers who talk about ‘trusting’ their climbing ropes, and people who have lived through earthquakes talking about feeling ‘betrayed’ by the ground beneath their feet.

What is it that unites these non-agential forms of trust? Nguyen argues that this form of trust arises when we have an unquestioning attitude toward something. In other words, when we take it for granted that it will act in a certain way and we depend on it to do so. In this respect, we all trust the ground beneath our feet. We don’t wake up in the morning worrying that it will suddenly tear apart and swallow us up. We rely upon this assumption to live our lives. It is only in the extreme case of an earthquake that we realise how much trust we place in the ground. Other examples of this form of trust abound in our everyday discourse.

But what about all those philosophers who insist that trust can only exist between people? Nguyen says something about this:

I have found that philosophers who work on trust and testimony think that this use of “trust” is bizarre and unintuitive — especially locutions like “trusting the ground” and feeling “betrayed by the ground”. But it seems to me that, in fact, these expressions are entirely natural and comprehensible, and it is only excess immersion in modern, narrowed philosophical theories of trust that renders these locutions odd to the ear. 
(Nguyen, MS p 10)


There is a general lesson for philosophers here. I have encountered a similar phenomenon when writing about gratitude. I once tried to publish a paper on whether atheists could be grateful for being alive. It was repeatedly rejected from journals by reviewers who insisted that gratitude is necessarily interpersonal. According to them, it makes no sense to be grateful for things or for some natural state of affairs. You can only be grateful toward other people. This always struck me as bizarre and counterintuitive but, according to these reviewers, I was the outlier. (If you are interested, you can find the unpublished paper here. Before you say anything, I’m sure there are other reasons why it should have been rejected for publication.)

Assume Nguyen is right. What is normatively significant about his version of trust? Nguyen sees trust, understood as an unquestioning attitude, as integral to our sense of agency. We are cognitively limited beings. We cannot be constantly suspicious and questioning of everything. By accepting that things (cars, climbing ropes, mobile phones) will work in a certain way, or that people (lovers, friends, fellow citizens) will live up to their obligations, we give ourselves the freedom to live more enriched and open lives.

This doesn’t mean that we can never be suspicious of them. This trust can be misplaced and its wobbly foundations can be revealed in certain circumstances (like in the midst of an earthquake). When this happens we may critically interrogate our previous unquestioning attitude. We may search for data to confirm whether we are right to trust this thing or not. Depending on the outcome of this inquiry, we may find our trust restored or we may find that we can no longer take the thing for granted. Either way, trust as an unquestioning attitude is a normatively essential part of what it means to be human. Given our cognitive limitations, we couldn’t get by without it.

I like Nguyen’s theory of trust. I think it captures something important about our relationship to the world around us. We don’t just rely on our friends, or on the ground beneath our feet, or on the smartphones in our pockets. We trust them to act or to persist in a certain way so that we can get on with the business of living.

2. Technology and The Value(s) of Trust

If Nguyen’s right, then it does make sense to talk about trust in technology. But this raises a deeper question. Everyone talks about the value of trust, but what form does this value take? Is trust valuable in and of itself? In other words, is it a good thing to have trusting relationships in our lives, irrespective of their consequences? Or is trust valuable purely for consequential reasons?

There is a common philosophical distinction that is relevant here: the distinction between intrinsic and instrumental value. It is possible to argue that trust has both kinds of value:

The Intrinsic Value of Trust: Trust is valuable in and of itself (irrespective of its consequences) because it expresses an attitude of respect or tolerance toward the object of trust. For example, if you trust another human being you are signalling to them that you recognise and respect their moral status and moral autonomy.


The Instrumental Value of Trust: Trust is valuable because it is practically essential to human life. It allows us to cooperate and coordinate with others, which allows us to innovate and develop and explore more opportunities. A life without trust would be impoverished because it would lack access to other valuable things.


From my reading of the literature, the instrumental value of trust tends to be emphasised more than the intrinsic value of trust. There is a good reason for this. Everyone who writes about trust notes that trust is a double-edged sword. Whenever you trust a person or a thing you cede some control and power to them. When I trust my partner to look after our daughter, I give up my own attempts to manage and control all aspects of childcare. When I trust my calendar app to keep a record of my appointments and meetings, I give up my own attempts to keep a mental record of them. The irony is that ceding power and control in this manner can actually be empowering. By not having to worry about childcare or scheduling (at least temporarily) you unlock other opportunities and overcome some of your own cognitive and temporal limitations. This is Nguyen’s argument. But ceding power and control can be risky. The trust can be betrayed. My partner might not look after our daughter properly; my calendar app might fail to update or record a meeting. When this happens I may lose, rather than gain, something that I value.

It is because the consequences of misplaced trust can be so terrible that people tend to emphasise the instrumental value of trust. Even if trust has some intrinsic value this can be swamped by its negative consequences. Imagine if my partner, through neglect, causes our daughter to become seriously ill. By trusting her to look after our daughter I will have expressed my respect for her moral status and autonomy, but that will be of little consolation if our daughter is seriously ill. The intrinsic value of trust is present and cannot be denied, but it has been overridden by the negative instrumental value of trust. It is the instrumental value that matters most.

How is this relevant to the debate about trust in technology? Well, if we accept that we can trust technology (and that it is meaningful to talk about such trust), then we can also accept that this form of trust can have significant instrumental value. It can help us to access other values that would be impossible (or very difficult to obtain) without that trust. But the intrinsic value of trust does seem to be absent when it comes to our relationships with technology. If we accept that most technologies as they currently exist lack an independent moral autonomy and moral status, then we cannot express respect or tolerance for technology by trusting it. This means that the value of trust in technology hinges entirely on the consequences of this trust: if the consequences are good, then it has instrumental value; if the consequences are bad, it does not.

There are three counterarguments to this claim that trust in technology lacks intrinsic value. The first is to claim that even if technology currently lacks independent moral autonomy and status, it may someday acquire this. The typical way to run this counterargument is to suggest that sophisticated machines might acquire the mental properties that we typically associate with moral autonomy and status and, once they do, we will be able to express respect and tolerance toward them by trusting them. Given my earlier critique of Mark Ryan’s views on trust in technology, and my defence of ethical behaviourism, I am quite sympathetic to this argument. I’m just not sure that any present technology rises to the requisite level of sophistication.

The second counterargument is to claim that entities do not need to possess mental properties in order for them to have a moral status that is worthy of respect. Environmental ethicists, for example, might argue that aspects of the natural world have an independent moral status that is not derived from human enjoyment of or dependence on the natural world. It is, consequently, not absurd to suggest that we can express respect or tolerance toward aspects of the natural world. If that is right, then it may be less of a stretch to say that trust in technology in its current form has some intrinsic value (remembering, at all times, that this intrinsic value can be swamped by the negative consequences of misplaced trust).

The third counterargument is to claim that technology is a product of human moral agency and autonomy and hence can have a kind of derived moral status. In other words, it makes sense to express respect for the technology because in doing so you are expressing respect for its human creator. There may be some plausibility to this argument in certain contexts. For example, I trust the chef at my favourite restaurant not to poison me. As a result, I don’t test the chemical composition of his food every time it comes out to my table. I just eat it. By trusting that the food will be fine I am, in a sense, expressing my respect for him. But whether this reasoning holds up in the case of technology is much less clear. Most technologies are created by teams of humans. You are not singling any one of them out for respect and, arguably, it is just as mistaken to respect an entire group of humans as it is to respect a thing. But even if you can, the value of trusting their product is still only a derived value, and it is quite a nebulous and partial one at that.

In conclusion, trust in technology can have instrumental value (or disvalue as the case may be), but it probably lacks the intrinsic value that arises from trust between human beings. That said, the intrinsic value of trust is quite limited and can easily be swamped by the negative consequences of misplaced trust. So to say that trust in technology lacks intrinsic value is not to say all that much.

3. Concluding Thoughts

None of this is to suggest that we ought to trust technology. It is simply to say that it is meaningful to talk about trusting technology and this type of trust can have significant instrumental value in our lives. Whether it does, in fact, have such value depends on the properties and dynamics of the technology. What does it actually do in our lives? Does it empower us? Or does it act against our interests? Does it do more of the former than the latter?

These are the very same questions we should ask about our relationships with other human beings. We shouldn’t trust all humans. That would be a mistake. Whether we should trust them, or not, depends on who they are and what they do to us. If we take an unquestioning attitude toward them, does this unlock other opportunities and goods for us? Or does it leave us exposed to exploitation and abuse?

It is undoubtedly true that many of us trust technology in our daily lives and are rewarded for doing so. Right now, as I write these words, I’m trusting my computer and my word processing software to safely record and save them for later retrieval. I don’t doubt that the files will be there tomorrow morning when I wish to work on them again. Similarly, I trust my car not to break down when I drive to collect my daughter this afternoon. I don’t meticulously check the undercarriage or the engine every time I hop into the driver’s seat.

The problem is that this trust is sometimes betrayed. Modern technologies can let us down. Digital technology is vulnerable to security hacks and data leaks. Mass surveillance can compromise our privacy. Apps can work more for their creator’s interests than for those of their human users. To use a trite example, it is in Facebook’s interests to keep you hooked on their newsfeed and clicking on their ads. Whether this is in your interest is much more doubtful. In many cases, assuming that the technology has a benign effect on your life can be mistaken. This is the dark side of trust.

What can we do about this? Efforts to create trustworthy technology can help, but many of these efforts must be understood for what they really are. Sometimes they are not about encouraging or facilitating trust in technology. They are, instead, about making it possible for us to critically scrutinise the technology. In other words, to make it possible for us to take a questioning attitude toward it when we feel unsure about its bona fides. This is why there is such a significant emphasis on transparency, accountability and audit trails when it comes to creating trustworthy technology.

These are laudable goals, and once the mechanisms of accountability are put in place people may well slip back into an unquestioning attitude toward technology. Trust could then be restored. But the policy itself is motivated by the belief that the technology in its current form is not trustworthy.

Friday, March 26, 2021

89 - Is Morality All About Cooperation?

What are the origins and dynamics of human morality? Is morality, at root, an attempt to solve basic problems of cooperation? What implications does this have for the future? In this episode, I chat to Dr Oliver Scott Curry about these questions. We discuss, in particular, his theory of morality as cooperation (MAC). Dr Curry is Research Director for Kindlab, at kindness.org. He is also a Research Affiliate at the School of Anthropology and Museum Ethnography, University of Oxford, and a Research Associate at the Centre for Philosophy of Natural and Social Science, at the London School of Economics. He received his PhD from LSE in 2005. Oliver’s academic research investigates the nature, content and structure of human morality. He tackles such questions as: What is morality? How did morality evolve? What psychological mechanisms underpin moral judgments? How are moral values best measured? And how does morality vary across cultures? To answer these questions, he employs a range of techniques from philosophy, experimental and social psychology and comparative anthropology.

You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).


Show Notes

Topics discussed include:

  • The nature of morality
  • The link between human morality and cooperation
  • The seven types of cooperation 
  • How these seven types of cooperation generate distinctive moral norms
  • The evidence for the theory of morality as cooperation
  • Is the theory underinclusive, reductive and universalist? Is that a problem?
  • Is the theory overinclusive? Could it be falsified?
  • Why Morality as Cooperation is better than Moral Foundations Theory
  • The future of cooperation

Relevant links

Thursday, March 18, 2021

The Importance of Collective Intelligence in a Sustainable Future

[This is the text of a short - 15 minute talk - I delivered to the Viridian Conference on 17th March 2021. The purpose of the conference was to discuss the Viridian Declaration, which advocates for technological and social reform in order to make the environment sustainable. One aspect of the declaration focuses on the importance of resilient and adaptive social institutions. That’s where I focused my energies for my talk. As with all short talks of this nature, the ideas and arguments here are programmatic and provocative. They are not rigorously defended. I’m fully aware that there are some holes in what I have to say, but I hope it provides food for thought nonetheless. I have changed the title and made some minor updates to this text from the version I used during the actual talk].

I am very pleased to be able to talk to you this evening and I would like to thank the organisers for inviting me to participate in this exciting and optimistic event. I am a philosopher and ethicist of technology and it is, sadly, rare for me to participate in an event in which people talk positively about the future.

In the short time I have with you I want to make an argument, or maybe even a plea: that we need to think carefully, systematically and scientifically about how best to harness human and machine intelligence to solve the ecological, sociological and technological problems we currently face. Indeed, I would argue that thinking systematically and scientifically about how to harness human and machine intelligence is the greatest challenge of our times. There are some tentative and fascinating attempts to do this already, but they are insufficient. We need to do much more.

To support this argument there are four propositions I wish to defend in the remainder of my talk.

  • Proposition 1: In order for our societies to have a bright and sustainable future, we need to invest in our problem-solving capacity. Since intelligence is equivalent to problem-solving capacity, another way of putting this is to say that we must invest in our intelligence capital. This type of capital, not simply labour or technology or natural resources, is crucial to our long-term survival.

Why is this true? I take my support for this from Joseph Tainter’s argument in The Collapse of Complex Societies. For those who do not know, Tainter’s book tries to explain why complex historical civilisations collapsed. He looks at all the famous examples: the Western Chou Empire, the Egyptian Old Kingdom, the Hittite Empire, the Western Roman Empire, and Lowland Classic Maya. After reviewing and dismissing typical explanations for this collapse — including the claim that environmental damage and destruction is the main culprit — Tainter concludes that insufficient problem-solving capacity is the root cause of civilisational decline. His argument is relatively simple and consists of five claims:

  • Human societies are organisations that solve basic existential and psychological problems for their members. They thereby generate benefits (B) for their members and, in order to sustain themselves, they must continue to solve problems and generate benefits.

  • Like all organisations, human societies must capture and expend energy (C) in order to sustain their problem-solving capacity. Classically, societies have captured energy by foraging, farming, burning fossil fuels, and also through war and imperial expansion.

  • Therefore, in order to sustain themselves, societies face a basic cost-benefit equation: the benefits of increased energy expenditure on problem-solving capacity must exceed the costs (i.e. B must be > C).

  • Increased investment in problem-solving capacity usually yields higher returns (B ↑) but this often comes at an increasing per capita cost so that, at a certain point, the marginal benefits (mB) of increased investment are outweighed by the marginal costs (mC).

  • If mC exceeds the mB at an increasing rate, societies will collapse (i.e. experience a rapid and significant decline in socio-political complexity: this can encompass war, famine, or partition and fragmentation).
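Tainter’s cost-benefit logic can be made concrete with a toy numerical sketch (my own illustration, not a model from Tainter’s book): suppose the benefits of investment in problem-solving capacity grow with diminishing returns (a square-root curve, chosen purely for illustration) while the per-unit cost of sustaining complexity stays roughly constant. The marginal benefit of further investment then inevitably falls below the marginal cost.

```python
# Toy sketch of Tainter's marginal-returns argument. The benefit and cost
# curves below are illustrative assumptions, not figures from Tainter.
import math

def benefit(x):
    # total benefit B(x): grows, but with diminishing returns
    return 10 * math.sqrt(x)

def cost(x):
    # total cost C(x): roughly constant cost per unit of complexity
    return 2 * x

def marginal(f, x, dx=1e-6):
    # numerical derivative: the marginal return of f at investment level x
    return (f(x + dx) - f(x - dx)) / (2 * dx)

# Scan investment levels and flag where mB drops below mC.
for x in range(1, 11):
    mB = marginal(benefit, x)
    mC = marginal(cost, x)
    flag = "sustainable" if mB > mC else "declining returns"
    print(f"x={x:2d}  mB={mB:.2f}  mC={mC:.2f}  {flag}")
```

On these assumed curves the crossover happens between x = 6 and x = 7: beyond that point, each further unit of investment in complexity costs more than it returns, which is the condition Tainter associates with collapse.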

The only way that societies avoid this dilemma is by developing some breakthrough technology (broadly understood) that increases the net benefit or reduces the per capita cost of problem-solving. Past examples of such breakthroughs include new methods of food production, facilitation of open markets and trading, new techniques of bureaucratic management, new forms of energy production and, most recently, computing power. But each of these technological breakthroughs is made possible by human intelligence. Hence it is our intelligence capital, not technology or innovation per se, that should be the main target for investment if we are to sustain our civilisation.

  • Proposition 2: The most effective forms of intelligence are collective in nature, not individualistic. In other words, when investing in intelligence capital we should focus primarily on enhancing the collective intelligence of teams of humans and machines, and not solely on making individuals more intelligent.

We all like to celebrate lone geniuses such as Einstein, Newton and Darwin. But the reality is that lone geniuses play a limited role in the sustainability and success of human civilisation. It is collections of individuals, coordinating and combining their intelligence, that make this possible. There is a certain amount of common sense behind this argument: look around and note how dependent you are on the intelligence of others to solve your basic existential problems. But there have been some books that defend this common sense idea in more detail. One of my favourites is the psychologist Joseph Henrich’s book The Secret of Our Success, which is a systematic defence of the importance of collective intelligence in human history. Henrich uses an arresting thought experiment to set up his main thesis. Imagine you took a random human and a random chimpanzee and dropped them both in the jungle to fend for themselves. Who would you bet on surviving for more than a few days? Henrich argues that you would bet on the chimpanzee every time. Why? Because individual humans are far too dependent on their cultures, their collaborators and their technologies to get by.

Technologies, including artificial intelligence, play a crucial role in collective intelligence structures (or ‘groupminds’ as we might call them). Humans are a technological species. One of the main products of our intelligence is technology: from stone axes to nuclear reactors, groups of humans create and use technologies to enhance our collective intelligence and solve our existential problems. Artificial intelligence is just the latest product of human collective intelligence. Some worry that it threatens to replace or undermine human intelligence. Some of these fears may be well-founded, others less so. That’s not a debate I wish to get into right now. What I do wish to say, however, is that ideally we shouldn’t think of AI as something separate or alien from human collective intelligence. Ideally, we should view AI as a new partner or assistant to human intelligence — as something that needs to be incorporated effectively into our collective intelligence structures. Indeed, doing this could give us the technological breakthrough we need to avoid Tainter’s dilemma.

  • Proposition 3: We can think systematically and scientifically about collective intelligence structures and how best to design them to maximise their benefits relative to their costs.

We know that some collective intelligence structures are more successful than others. We also know that there are many different designs or structures that we could implement. We are now starting to develop a scientific field of inquiry into collective intelligence and it is important that we double-down on this effort if we are to survive and thrive in the future.

There are several disparate fields of inquiry that are relevant to this, from organisational theory and management theory, to human-computer interaction studies. At the moment, most of these inquiries are siloed in their own academic subfields and focused on a narrow set of problems. For example, organisational theory and management theory often measure the success of groups in terms of their efficiency or profitability, not their intelligence per se. What we need to do is unite these fields of inquiry into a common discipline centred on collective intelligence.

The work of Thomas Malone and his colleagues has been particularly insightful and innovative in this regard. They have developed group intelligence tests that replicate individual IQ tests. In initial experiments on this idea, they find that there are ways to measure collective intelligence and that there are some factors that seem to correlate with increased group intelligence. For example, while individual intelligence (as measured by IQ) is one relevant variable, it is not the only one, nor perhaps even the most important one. Indeed, having many high IQ individuals in a group can hinder rather than help collective intelligence. Other crucial variables include cognitive diversity, social perceptiveness of group members (including capacity for empathy and understanding), equality of participation and proportion of women. Malone cautions against overinterpreting these initial findings. There may be more noise than signal in them (though the link between cognitive diversity and collective intelligence seems robust), but this is indicative of the kind of inquiry I think we need.

Linked to this, we need to think carefully about the different types of collective intelligence structure and their distinctive costs and benefits. In principle, there are as many structures as there are groups, but Malone argues that we can bring some order to the apparent chaos. He suggests that there are five main types of collective intelligence structure that humans have used to solve our problems in the past:

  • Hierarchies - These are group structures in which a single individual or small group of individuals solves problems for a larger group. In doing so, they are often supported by a legal-bureaucratic agency (or set of agencies).
  • Democracies - These are group structures in which all defined members of the group have a say in group problem-solving, usually through a formal voting procedure. The group decision or output is the aggregated sum of these votes.  
  • Markets - These are group structures in which individuals are left to themselves and then have to transact and trade with other individuals to solve problems. The group decision or output is then an emergent property of these individual transactions. Markets usually require some external support from a hierarchy or community to ensure that the transactions and trades are enforced.
  • Communities - These are group structures that lack any formal hierarchy or voting procedure and instead involve individuals informally agreeing upon and enforcing some set of social norms for cooperation and coordination. The group solves problems by coordinating on these norms.
  • Ecosystems (state of nature) - These are not group structures but, rather, a form of anarchical social interaction. There are no norms, no voting procedures, no enforceable market transactions. The most powerful individual typically thrives while others scramble for survival.

Malone argues that each of these structures has different benefits and costs for the members of the groups and different ways of distributing the benefits to members of the group. For example, hierarchies often produce significant group benefits (aggregate social surplus) but don’t distribute those benefits evenly among the members of the society. They also tend to be quite costly to maintain: you need to create cultural myths/religions, laws and enforcement agencies to maintain the hierarchy. Democracies are somewhat similar — insofar as they produce significant group benefits and are costly to maintain — but they usually distribute group benefits more evenly. Markets and communities are different again. They can produce group benefits at lower costs (there is less formal infrastructure needed to maintain the group) but can be more unpredictable in how they distribute those benefits. Technology plays a key role in each type of structure too, by both reducing the costs of group decision-making and increasing the benefits of working with the group.

Malone’s framework is just a preliminary one but it shows how we might begin to think carefully, systematically and scientifically about collective intelligence. Other work that is useful in this regard would include the series of studies by my colleague Michael Hogan on using collective intelligence in policy-formulation and political decision-making, as well as Geoff Mulgan’s thoughts on collective intelligence in his book Big Mind.

  • Proposition 4: We should be pluralists, not monists when it comes to investment in and design of collective intelligence structures. 

It is tempting to think that we need to find the ‘best’ collective intelligence structure or to assume that one structure is necessarily better than the others (e.g. that democracy is better than all the rest). But we should avoid this temptation. The reality is that modern societies blend different kinds of structure. To stick with Malone’s framework, even in liberal democratic states it is not the case that we only rely on democratic intelligence structures. Our democratic political institutions include elements of legal-bureaucratic hierarchy, may well depend on informal community norms for their success, and in turn function as a support structure for markets. In other words, we cannot have the benefits of democracies without hierarchies and communities, nor can we have the benefits of markets without hierarchies and democracies. The different structures work together and have different strengths and weaknesses.

Furthermore, collective intelligence structures can decay and stagnate over time. This may mean they cease to function as effective problem-solving mechanisms. We will need to adapt and redesign them when this happens. Many people argue that this is happening now to mature democratic states. A democracy’s primary tools for increasing its problem-solving capacity and its perceived legitimacy are to enlarge the voting franchise and to increase transparency and accountability. But many mature democracies may have hit the point of declining marginal returns on these mechanisms. Political stalemate and polarisation are manifestations of this. How we can redesign democracies to reinvigorate their problem-solving capacity is, I believe, a theme that David Wood will be taking up in the next talk, and another major challenge for our time.

My bottom line is this: different kinds of structure are optimal for different kinds of problem. Figuring out which structure is optimal for the set of challenges we now face — ecological, sociological, and technological — is the project we should focus on. We have started to think about this in a systematic and scientific way, but we need to do much more.

Thank you for your attention.

Tuesday, March 16, 2021

The Ethics of Teacher-Student Relationships

When I was starting out in my academic career, I was assigned a senior colleague as a mentor. This is not an unusual practice. The hope is that the senior colleague can provide advice on how to navigate the thickets of academic life. I remember at one of our meetings the topic of teacher-student relationships came up. This colleague told me, in no uncertain terms, that any kind of sexual or romantic relationship with a student (graduate or undergraduate) was inappropriate and should be avoided.

Sound advice, but a little bit ironic for two reasons. First, this particular colleague was in a long-term (and by all accounts happy and well-functioning) relationship with a former graduate student. Second, the thought of entering into such a relationship had never crossed my mind nor had it been a feature of our conversation prior to that point. I believe the only reason it had come up was because I was unsure of how to deal with a student whose mother was dying. To say that the advice was disconnected from the context would seem to be an understatement.

If I were to characterise the relationships I have had with my students over the years I would say that they are, for the most part, extremely distant. To be fair, this is my normative baseline when it comes to all relationships. I have very few close friendships and I am, for the most part, reclusive and solitary. That said, I probably take this reclusive attitude to extremes when it comes to students. For example, I try to avoid all social gatherings with students. This includes socialising at university-related events. I don’t like to attend formal dinners or graduation with students, nor do I like to hang around and talk to them after guest lectures or other events (I will, of course, talk to them after my own lectures on course-related topics). When I hear of colleagues going to student balls or taking groups of students out for informal dinner or drinks, perhaps to celebrate the start or end of term, I balk at the idea. I have, very reluctantly, been dragged to such events in the past. I find them unpleasant and awkward. My intention is never to participate in them again. I prefer to deal with students in a purely professional capacity, talking to them solely about course work or academic issues.

I’m not sure why I adopt this style of interaction with my students. Perhaps, in part, it is to avoid any risks associated with conflating different relationship styles. Perhaps, in part, it is due to my own social awkwardness and anxiety. Perhaps, in part, it is due to some misguided belief that you shouldn’t reveal too much of yourself to other people, especially students. Whatever the answer may be, it does prompt the question: what is the preferred form of relating to students? And, more particularly, is it ever appropriate to interact with students as something other than just students?

I’ve read about this topic at various points over the years. Unsurprisingly, most of the literature deals with the ethics of romantic/sexual relationships with students and/or the ethics of teacher-student friendships. Relatively few articles and books focus on what the ideal relationship should be. But maybe it is possible to triangulate on this by considering the various arguments that have been offered against romantic relationships and friendships?

That’s what I will try to do in the following article. I will start by reviewing some basic concepts pertaining to the ethics of relationships and highlighting some pitfalls that plague our reasoning about them. Then I will look at the standard arguments offered against teacher-student romantic relationships (opposition to which now tends to represent the consensus view) and the more tentative arguments for and against teacher-student friendship (which are more contested). I will conclude by seeing whether anything can be learned from this inquiry about the preferred way of relating to a student.

1. How to Think About Relationship Ethics

Let’s begin with a few observations about relationships and the ethical norms that may or may not be associated with them. Obviously, humans have many relationships in their lives. Indeed, virtually all repetitive social interactions can be categorised as relationships of some kind. Some philosophers and social scientists believe that it is within these relationships that the human moral conscience is formed. For example, Stephen Darwall has argued that being able to take the second-person perspective (i.e. the perspective of the other party in the social relationship) is key to moral reasoning. Similarly, the developmental and evolutionary biologist Michael Tomasello has argued that being able to understand the duties associated with different social roles is responsible for the evolution of the human moral sense. Finally, though it is less popular these days, Lawrence Kohlberg’s developmental theory of moral reasoning suggests that it is the capacity to see and empathise with the other side of our social relationships that represents the emergence of true moral reasoning in children. I could go on, but I won’t. The point is that social relationships have an important role to play in our moral and ethical reasoning.

There are some ethical rules that apply to all relationships, irrespective of their precise character. For example, you shouldn’t harm someone unless you have good cause. But other moral rules are specific to certain relationships. Lawyers, for example, have a duty of confidentiality to their clients. Doctors too. The problem is that relationships can take many different forms. Think of the relationship between parent and child, doctor and patient, boss and employee, brother and sister, two friends, two lovers and so on. The teacher-student relationship is just one among many. How can we think about the ethical rules that apply to such a diverse array of relationships?

One simple way to think about the ethics of our social relationships is to focus on the purpose or telos of the relationship and to use that to determine what the respective duties of the parties to the relationship might be. Many relationships have a function or goal associated with them. Think about the relationship between a doctor and their patient. The purpose of this relationship is to improve the health of the patient. To do this effectively, the patient has to supply the doctor with all relevant information concerning their health; the doctor then has to be well-informed about the best options for care. This gives rise to respective duties: the duty of honesty for the patient and the duty of competence for the doctor. Both are obviously connected to the goal of the relationship.

That said, not all relationships serve single or obvious goals. Some relationships serve multiple goals. Furthermore, thinking about certain relationships in terms of goals can seem contrary to their ethical character. For instance, it seems wrong to suppose that the relationship between friends is goal-oriented. It is no doubt true that friendships serve a purpose: companionship, support, entertainment and so forth. But thinking about them solely in terms of these purposes can seem instrumentalising and dehumanising. If my friends no longer entertain me, am I entitled to abandon them or ignore them? Surely not. Some relationships can be instrumentalised in this way, but not all.

That complication notwithstanding, it seems fair to say that the teacher-student relationship is one that can be thought about in purposive or teleological terms. It does serve a goal, namely: to educate the student (in a broad sense). A first pass at the ethics of teacher-student relationships is to say that the duties of the parties (and the ideal mode of relating between them) flow from that goal. A teacher should not do something that subverts or undermines it, and nor should a student. That said, as everyone points out, there is usually an asymmetry of power between the teacher and student (similar to that between a doctor and a patient) which typically means that the burdens are higher on the teacher than they are on the student. The teacher must do more to ensure that the goals of the relationship are fulfilled.

There are, however, some problems with this initial take on the ethics of teacher-student relationships. I’ll mention three here as they will recur in the analysis given below:

The purpose is vague: To say that teachers should educate their students isn’t to say much since people disagree about what education is really about. Is it about knowledge transfer? Providing credentials? Developing the capacity for critical thought and self-reflection? Producing better citizens for a democracy? Helping students find themselves? Each of these has been proposed as a legitimate goal for education over the years and each of them might warrant a different mode of relating to students. Furthermore, some people have, no doubt in a self-serving way, argued that the eroticisation of the teacher-student relationship is part of the educational mission. I’ll return to an example of this below.


Relationships often overlap or nest: Humans often pursue multiple different kinds of relationships with people and often have different relationship types thrust upon them due to social circumstance or necessity. For example, many people are friends with their work colleagues; it is not uncommon for parents to teach their children (not just in homeschooling but in mainstream schools too); and some university professors teach friends or colleagues (because they enroll in their courses). This nesting or overlapping of relationships makes their ethical analysis more complicated. Is it always wrong to pursue different kinds of relationships with people at the same time? Should (can?) one type of relationship be kept isolated from other types?


Relationship analogies are common: Humans often use analogies between relationships to determine the ethical rules that apply to them. We analogise between friendship and intimate partnership, for example, to figure out how we should relate to friends and lovers, respectively. Of course, analogical reasoning is common in human life, but it creates challenges when it comes to the ethics of relationships. If someone thinks a teacher-student relationship is like the relationship between a parent and a child, then they are likely to reach a different conclusion about how they should relate to their students than someone who thinks it is more like the relationship between a boss and an employee. This isn’t a purely hypothetical example either. I have had colleagues in the past tell me that they view the relationship between teacher and student as being much like the relationship between parent and child, and hence had a very particular view of their role within that relationship.*


There are other complications but these will suffice for now. In practice, the overlapping of different relationship types, and how this might bear on the purpose of the teacher-student relationship, is probably the most problematic issue and the one that has generated most debate in the literature on teacher-student relationships. So let’s consider two examples of overlapping relationships teachers can have (and have had) with students: sexual relationships and friendships.

2. The Problem with Sexual Relationships

As mentioned, the ethics of teacher-student sexual relationships has tended to dominate writing in this area. In an interesting article on this, William Deresiewicz points out that the image of the feckless, morally corrupt professor, who sleeps with his (it’s almost invariably a ‘he’) students, is probably one of the most common fictional motifs of the 20th century. You couldn’t even begin to list all the examples of it. But we can trace the origins of the motif back much further. It’s right there in Plato and his depiction of Socrates opining about the ethics of teacher-student sexual relationships.

There seems to be good reason for this cultural and intellectual obsession. Teacher-student sexual relationships are a major problem. Recent revelations of rampant sexual harassment and assault of students by well-heeled professors, coupled with institutional misdeeds in covering up these affairs, highlight how widespread the problem is. In tandem with the #MeToo movement, and the broader societal activism against the sexual mistreatment of women and children, the academy is having to reckon with its history of abuse and misconduct. No wonder people are opining about the ethics of such relationships.

Sexual harassment and assault are not quite the same things as consensual sexual or romantic relationships between two adults. But there is a fuzzy line between these two things in the case of teacher-student interactions. Clearly, there are some ’successful’ romantic relationships that began in this form. As mentioned in the introduction, I have interacted with such couples in the past and my own knowledge of them suggests that they were generally happy and well-functioning (who knows what goes on behind closed doors). But given the nature of teacher-student relationships, there are some very good reasons for thinking that sexual relationships between these parties are always fraught with risk. They are, consequently, best avoided.

There are three obvious reasons for this.

First, the power asymmetry between the parties casts a shadow over any alleged consent to such a relationship. Teachers are the more powerful parties within such relationships, at least within a certain institutional context. They have some knowledge or skill that the student lacks and is supposed to learn from them. Even if the student is highly competent and intelligent in their own right, the default assumption is that this asymmetry exists. Furthermore, the teacher often has power over the future of the student, both in terms of their testing and evaluation, and their access to future opportunities (e.g. through reference writing). It’s a complicated question as to whether this power asymmetry necessarily undermines any consent that might be given to a sexual relationship. But you certainly could argue that there is a lingering, implicit threat inherent in the relationship. Even if the teacher doesn’t say anything, the implication or assumption might be that they could use their power to make life difficult for the student if the student does not consent to the sex.

Even if this shadow doesn’t place the relationship within the realms of illegality or crime, it may, at the very least, place it within the category of what Ann Cahill has called ‘unjust sex’. I covered this idea in a previous article. Cahill derived this category of sex from a series of interviews that Nicola Gavey conducted for her book Just Sex?. Gavey interviewed several women about their sexual experiences. Many of these women agreed that they had consented to some sexual encounters in the past but had felt that they had done so in conditions in which their choices were limited and, in fact, they only had one viable option. Cahill builds on this idea by arguing that in certain contexts, there are less powerful parties whose sexual agency can be hijacked by more powerful parties (Cahill focuses on male-female interactions within patriarchal societies, but I believe it is possible to extend her analysis to all relationships involving power asymmetries). The result of this hijacking can be subtle and insidious. The weaker party may be encouraged to signal consent and approval of what the more powerful party desires in order to accredit it, even though they themselves appear to have limited choices. Cahill’s point is that these cases of unjust sex are not equivalent to rape or sexual assault but, rather, lie in a gray zone between rape and ethically permissible sex. Their moral character is tainted, even if it is not completely reprehensible. It seems to me that this might capture a basic problem with all teacher-student sexual relationships.

Second, there appears to be good evidence to suggest that these relationships are often harmful to the weaker party in the long-term. Fredrik Bondestam and Maja Lundqvist recently published a systematic review of the empirical research on the prevalence and consequences of sexual harassment in higher education. They found that it is linked to a number of harmful outcomes for both students and staff, but particularly students. Here is the key paragraph from their study. You can find links to the papers they cite in this paragraph in the original piece:

Exposure to sexual harassment in higher education leads to physical, psychological and professional consequences for individuals. Examples such as irritation, anger, stress, discomfort, feelings of powerlessness and degradation are recurrent in research literature. Evidence-based research confirms more specifically that sexual harassment in higher education can lead to depression (Martin-Storey and August 2016; Selkie et al. 2015), anxiety (Richman et al. 1999; Schneider, Swan, and Fitzgerald 1997), post-traumatic stress disorder (Henning et al. 2017), physical pain (Chan et al. 2008), unwanted pregnancies and sexually transmitted diseases (Philpart et al. 2009), increased alcohol use (Fedina, Holmes, and Backes 2018; McDonald 2012; Selkie et al. 2015), impaired career opportunities (Henning et al. 2017), reduced job motivation (Barling et al. 1996; Chan et al. 2008; Harned et al. 2002), and more. Specific job-related factors often include absence, decreased job satisfaction, engagement and productivity, decreased self-confidence and self-image, and persons giving notice from their jobs (Lapierre, Spector, and Leck 2005; Willness, Steel, and Lee 2007). Even observing or hearing about a colleague’s exposure to sexual harassment can generate ‘bystander stress’ and also cause conflicts in the work team (McDonald 2012; Willness, Steel, and Lee 2007). 
(Bondestam and Lundqvist 2020, 405)

Of course, you may dispute the relevance of this since it deals with sexual harassment (i.e. unwanted sexual attention etc.) and not consensual sexual relationships, but I will simply reiterate that the line between the two is often blurred. Indeed, earlier studies of apparently consensual relationships between staff and students suggest that they can have similar effects, particularly when the relationships break down, as many relationships inevitably do. Belinda Blevins-Knabe, for example, in her review of such studies, notes that many female students end up regretting these relationships in the long-term and suffer from anxiety, depression and self-esteem-related issues as a result (Blevins-Knabe 1992, 157). She also notes that the professors involved in such relationships often view them as problematic and unhelpful too: in one study only one in six of those who engaged in such relationships found them to be beneficial (Blevins-Knabe 1992, 157).

There are some qualifications I would like to make to this second argument. First, although I have no doubt that teacher-student sexual relationships lead to the negative outcomes listed above, I would be curious to see how they fare relative to other broken relationships. I imagine (though I have never experienced it myself) that relationship breakdown is stressful and anxiety-inducing outside of the academic context, and that it can lead to the negative outcomes listed by Bondestam and Lundqvist. Second, and relatedly, one interesting aspect of some of these studies is the extent to which people retrospectively reevaluate their relationships. It is an old study, but Glaser and Thorpe (1986) suggest that this is a common feature of how students view their former relationships with professors. To what extent are such reevaluations to be credited? Could they be tainted by subsequent events? For example, could shifting social norms concerning such relationships (i.e. the fact that people view them as less acceptable than they once were), or the fact that the student’s subsequent career didn’t pan out, affect how the relationships are perceived and how harmful they are felt to be? I’m sure this happens to some extent. But, even if it does, given that the prevailing cultural wind is against teacher-student sexual relationships, this still provides a reason to avoid them in the interests of harm reduction.

Third, and finally, overlapping a sexual relationship with a teacher-student relationship often undermines the goal of the latter relationship: the pursuit of education. In her recent analysis of the topic, Amia Srinivasan makes much of this argument. She claims that the main problem with these relationships is not the lack of consent but the betrayal of the teaching mission. With typical bluntness, she argues that the goal of teaching is to educate students, not to sleep with them. Adding a sexual dimension to the relationship distracts from this goal. One or both of the parties can become more interested in the sex than in sharing knowledge and developing intellectual skills. Sex and intimacy can also undermine teacher impartiality and objectivity, which is crucial to the evaluation and assessment of students, as well as the management of educational activities. Even if teachers claim that the relationship doesn’t harm their professional judgment, it surely must to some extent. Institutional fixes such as anonymous grading and/or reassignment of supervisees can address this problem to some extent, but they won’t eliminate it completely. On top of all this, the intimate relationship can affect the burden of care and responsibility within the relationship. Ordinarily, we think of the teacher as the one that carries the heaviest burden: they must care for and nurture the students’ intellectual pursuits. But a sexual relationship can flip this around, particularly in the case of a male professor and female student. As Srinivasan notes, suddenly the student might be expected to care for the professor, not vice versa.

This third argument seems valid to me. But there is a counterargument to it. As noted in the previous section, the purpose of teacher-student relationships is vague. What does it mean to ‘educate’ someone? Could eroticisation be part of education? Deresiewicz, in his article on ‘Love on Campus’, suggests that there is something necessarily erotic about good teaching:

Eros in the true sense is at the heart of the pedagogical relationship, but the professor isn’t the one who falls in love… Love is a flame, and the good teacher raises in students a burning desire for his or her approval and attention, his or her voice and presence, that is erotic in its urgency and intensity. The professor ignites these feelings just by standing in front of a classroom talking about Shakespeare or anthropology or physics, but the fruits of the mind are that sweet, and intellect has the power to call forth new forces in the soul. 
(Deresiewicz 2007)


Deresiewicz goes on to clarify that students shouldn’t mistake this erotic passion for sexuality, and professors shouldn’t take advantage of any potential confusion, but others have made the case without those qualifications. Srinivasan discusses the case of Jane Gallop, a professor accused of sexual harassment by her graduate students in the 1990s. Gallop did not deny the accusations but went on to argue that the sexual dimension of her relationship with the students was a sign of pedagogical success not failure:

At its most intense—and, I would argue, its most productive—the pedagogical relation between teacher and student is, in fact, a “consensual amorous relation.” And if schools decide to prohibit not only sex but “amorous relations” between teacher and student, the “consensual amorous relation” that will be banned from our campuses might just be teaching itself. 
(Gallop, quoted in Srinivasan 2020, 1120)


Even Srinivasan, who is strongly opposed to sexual relationships between teachers and students, concedes that there might be something to this:

Certainly those of us who ended up as professors almost invariably did so because some teacher aroused in us intense feelings of infatuation, desire and want. 
(Srinivasan 2020, 1120)


WTF? I have to say that reading this kind of thing is like reading an alien language. Maybe my experiences are radically different from those of my colleagues, but I have never once had such a passionate feeling for or infatuation with a teacher or professor, nor do I believe that my becoming an academic was the result of some passion being aroused within me by a particular individual. Indeed, I cannot remember a single teacher who has had any major influence on me. Perhaps I am the outlier.

All that said, my interpretation of these claims about the erotic element of teaching is that they are examples of the fallacy that comes from using analogies to understand the normative character of different relationships. People are analogising too readily between sexual relationships and teacher-student relationships to reach the conclusion that there is something erotic or quasi-sexual about good teaching. I agree that good teaching should stimulate curiosity and passion for a subject or mode of inquiry, but I don’t see this kind of passion as being similar to an erotic or sexual passion. They are quite different.

In any event, the potential injustice, harm and distraction associated with teacher-student sexual relationships seems to provide reason enough to avoid them. They will almost always undermine the ethical character of the relationship, not accentuate it.

I should say that there is one obvious exception to this argument: the case where the sexual relationship pre-dates the teacher-student relationship. It’s possible, particularly at university level, that someone could end up teaching a current or former partner who enrolled in a class or degree programme. I’ve heard of this happening in the past. I think this does create difficulties in practice, and should probably be avoided if at all possible (e.g. by reassigning the student to another lecturer/professor). That said, because the relationship did not arise out of the teacher-student relationship it doesn’t carry quite the same risks when it comes to consent or harm (I suspect!).

What about relationships that post-date the teacher-student relationship? The French president Emmanuel Macron is, famously, married to his former high school teacher. They got married 13 years after they originally met but I believe they had an on-again-off-again relationship from about the time that he was 18. I personally find this strange, but I guess having a relationship with a former student is not as ethically dubious as having one with a current student. That said, my own sense of it is that the amount of time that has elapsed since the end of the teacher-student relationship makes a difference. Getting into a relationship immediately after someone has graduated or left a class seems suspicious to me, but getting into one with someone a decade after your previous interactions seems much less problematic. Personally, I would be concerned about any lingering asymmetries of power or hero worship that might leak into the relationship, but these might not be a factor in some cases.

3. Is there a case for friendships with students?

What about friendships between teachers and students? On the face of it, these would seem to be less ethically problematic than sexual relationships. Friendships don’t raise the same concerns about consent nor do they hold the same potential for harm. Furthermore, I find that many of the people I work with are willing to entertain the idea of being friends with their students. This is particularly true at the graduate/PhD student level. Some people have even suggested to me that it is natural for PhD students to become friends with their supervisors over time. Indeed, it may be one of the hallmarks of a well-functioning supervisor-supervisee relationship.

I have my concerns about this. But a lot of this depends on how we characterise ‘friendships’. There are many competing philosophical definitions of friendship. The most famous and influential of these comes from the work of Aristotle. He distinguished between three kinds of friendship: pleasure friendships (which are about getting enjoyment and entertainment from one another); utility friendships (which are about achieving some goal or purpose with another person’s assistance); and virtue friendships (which are about sharing a commitment to the good with another person, engaging in mutually beneficial and supportive acts, and appreciating the other as a person in their own right, not just a source of pleasure and utility). As you might imagine from these descriptions, Aristotle saw the virtue friendship as the highest ideal of friendship. It was the form of friendship to which we should all aspire.

What significance does this have for teacher-student friendships? Well, it seems plausible to say that teachers can have, and perhaps even should have, utility friendships with their students, provided the utility in question is associated with the goal of education. The student can learn something and, in many cases, so can the teacher. And even if they don’t learn something, they get to hone their skills as an educator. It’s a win-win. Furthermore, as part of that utility friendship, teachers and students probably should be friendly with one another. That is, they should be civil, pleasant, tolerant and so forth. If there is too much resistance and antagonism between them, it will hamper the educational mission.

But can the friendships ever be more than that? Can they ever aspire to something like the Aristotelian ideal? In a thought-provoking article, Amy Shuffleton argues that although such friendships are fraught with risk, there can be merit to them. Shuffleton’s argument is all the more provocative insofar as she focuses not just on friendships between adult students and adult professors at university but, also, on friendships between child students and adults.

Shuffleton accepts that there are two major risks associated with teacher-student friendships. The first is the problem of impartiality: if a teacher is friends with a student it raises concerns about their fairness and impartiality in both assessing and facilitating the education of other students. We encountered this problem in connection with the ethics of sexual relationships. It rears its head here again, albeit without the sexual dimension. Shuffleton argues that this problem actually has two elements to it: the fact of partiality and the perception of partiality. As a matter of fact, many teachers who happen to be friends with their students are not necessarily biased in their favour. Nor, she argues, do students expect such bias. If the friendship is an honest one — and not a Machiavellian one — the student should wish to be treated and assessed fairly. But that doesn’t eliminate the perception of partiality: for all their protestations to the contrary, other people might assume that the teacher is biased in favour of their friends. But Shuffleton points out that many other factors affect the perception of partiality. People might think a teacher is biased towards male students or white students or students who share their faith or religious beliefs. Teachers have to work to manage those perceptions and sometimes friendships with students might work to counteract such biases.

The second problem is that the friendship might interfere with or distract from the educational mission. Again, this is similar to the concern raised in relation to sexual relationships but where the distraction takes a different form. Shuffleton offers some interesting responses to this. First, she suggests that teacher-student friendships might support and complement the educational mission in at least some cases, e.g. making students more receptive to learning or preparing them for what it means to be an adult in a democratic society. Second, and more interestingly, she argues that some students might benefit from having adult friends, perhaps because they are excluded by their own peer groups. Children, in particular, can be cruel and prey on any differences or eccentricities. Having an adult that tolerates and appreciates difference could be beneficial to a student. This may involve a form of teacher-student friendship. Shuffleton cites an example from her own life in support of this: a friendship she had with a younger male student while teaching English in Krakow. They did not socialise together, but would talk after class and they bonded over a mutual love of art and photography. This boy’s peers did not seem to share his interests in these things. She thinks there was some value to their friendship.

Shuffleton’s overall point is that we face plural moral demands and obligations. There is a danger that, as teachers, we become too rigid and attached to a certain conception of our role and the moral demands associated with it. In short, Shuffleton’s argument is that we shouldn’t let the moral demands of being a teacher distract from the moral demands of being human.

There is much to commend in Shuffleton’s sensitive and thoughtful account of teacher-student friendships. It gives me some pause and encourages me to reconsider my own distant approach to students. Still, I can’t help but worry about the perception of bias and favouritism that might arise from such friendships. I also think that the suggested benefits of such friendships — toleration, respect and appreciation for difference — can be achieved without slipping into friendship. Indeed, the example Shuffleton gives of the boy she befriended while teaching in Krakow doesn’t really strike me as a true friendship. She was friendly with him without being a true friend. At least, that’s how I see it.

4. Conclusion

So what kind of relationship should a teacher cultivate with their students? I started this article by outlining my own practice in this regard: a relationship of (somewhat extreme) professional distance. Is there any reason to think this is the wrong approach?

Not really. What I have suggested is that it makes sense to think that the ethical character of teacher-student relationships should be determined by the purpose of that relationship: to educate the student (in the broad sense). The problem is that this purpose is vague. There are many potential definitions and conceptualisations of what it means to educate someone. But even if this purpose is vague, it seems clear that sexual or intimate relationships between teachers and students are fraught with risk, and tend to undermine the goal of education. Furthermore, even friendship, particularly in its more meaningful forms, creates perceptions of bias and distracts from the educational mission. One can be friendly with students — open, tolerant, respectful — without being their friend.

That said, I would qualify this approach in two respects. First, given that the purpose of education is unclear, and that teachers may not even be able to help students achieve that purpose if it were clear, there is reason to think that I should focus more attention on the ongoing dynamics of my interactions with students and less on whether those interactions achieve some vaguely specified goal. This is similar to the argument I made about the purpose of parent-child relationships some time ago. Second, taking on board Shuffleton’s point, we shouldn’t let the demands of teaching detract from the demands of basic human decency.

* One colleague once told me that I should have children because children are like students that you can follow throughout their whole lives. It was such a bizarre analogy that it has stuck in my head ever since.