Friday, July 31, 2020

Is Morality All About Cooperation?

Morality can often seem pretty diverse. There are moral rules governing our physical and sexual interactions with other human beings; there are moral rules relating to how we treat and respect property; there are moral rules concerning the behaviour of officials in government office; and, according to some religions, there are even moral rules for how we prepare and eat food. Is there anything that unites all these moral rules? Is there a single explanatory root for morality as a whole?

According to the theory of Morality As Cooperation (MAC for short), there is. Originally developed by Oliver Scott Curry, the MAC claims that all human moral rules have their origin in attempts to solve problems of cooperation. Since there are many such problems, and many potential solutions to those problems, there are consequently many diverse forms of morality. Nevertheless, despite this diversity, if you unpick the basic logic of all moral rules, you can link them back to an attempt to solve a problem of cooperation.

This is obviously a bold theory. It is highly reductive in the sense that it holds that all of human morality can be reduced to a single underlying phenomenon: cooperation. People will rightly ask if the diverse forms of human morality really are reducible in this way. Does MAC effectively capture the lived reality of human moral systems? Does it simply explain away the diversity and plurality?

These are legitimate questions. Nevertheless, if true, MAC has some exciting implications. It tells us something about the basic structure of all moral systems. It also tells us something about the possible future forms of morality. If a purported moral rule does not ultimately link back to an attempt to resolve a cooperative problem, MAC predicts that it will not be accepted or respected as a moral rule. If a social or technical development threatens or undermines an existing solution to a cooperative problem, it is likely to force us to generate new forms of morality. I find this latter implication particularly exciting since it links the MAC to my own current interests in understanding the moral revolutions of the future.

In the remainder of this article, I want to explain how the MAC works and then consider how the MAC might shed light on the future of morality. I will do this in three stages. First, I will give a basic explanation of the MAC. Second, I will consider a recent amendment to the MAC, proposed by Scott Curry and his colleagues, suggesting that morality can be understood as a combinatorial system with a finite (but vast) number of possible forms. Finally, I will consider the implications of all this for the future of morality, focusing on some specific technological threats to our cooperative systems and how we might generate new moral systems to resolve those threats. Unfortunately, I won’t be overly precise in this last section of the article. I will be painting with a broad and speculative brush.

1. Morality as Cooperation: The Basic Theory

MAC takes as its starting point the view that human morality is about cooperation. In itself, this is not a particularly ground-breaking insight. Most moral philosophers have thought that morality has something to do with how we interact with other people — with “what we owe each other” in one popular formulation. Scott Curry, in his original paper on the MAC, does a good job reviewing some of the major works in moral philosophy and moral psychology, showing how each of them tends to link morality to cooperation.

Some people might query this and say that certain aspects of human morality don’t seem to be immediately or obviously about cooperation, but one of the claims of MAC is that these seemingly distinctive areas of morality can ultimately be linked back to cooperation. For what it is worth, I am willing to buy the idea that morality is about cooperation as a starting hypothesis. I have some concerns, which I will air below, but even if these concerns are correct I think it is fair to say that morality is, in large part, about cooperation.

As I say, this is not particularly ground-breaking. Where the MAC becomes more interesting is in claiming that there is a finite set of basic cooperative problems faced by human societies, that these problems have been mapped out by evolutionary game theory, and that each of these problems generates a set of solutions. This set of solutions defines the space of possible human moral systems. In other words, at its most abstract level, the MAC can be characterised like this:

Morality as Cooperation: All of human morality — i.e. any rule, virtue, norm (etc) that humans call “moral” — is an attempt to solve a cooperative problem.

A cooperative problem is any non-zero-sum interaction between humans. Non-zero-sum interactions are situations in which groups of humans can work together to generate a “win-win” outcome — an outcome in which all people (or certainly a majority of people) can benefit or gain — but in which there is usually some impediment or barrier that must be overcome in order to ensure cooperation.

The MAC can be made more precise by identifying the basic cooperative problems faced by humans and their potential solutions. In his work, Scott Curry claims that there are seven basic cooperative problems, each of which is recurrent in evolutionary history (not just human history) and each of which is linked to a specific manifestation of human morality. I will describe the seven problems in what follows. The first three problems are distinct. The fourth problem breaks down into four distinct sub-types of problem, giving us seven problems in total. They are:

1. Kinship Interaction: This is perhaps the most fundamental evolutionary problem of cooperation. Genes have an interest (in a behaviouristic sense) in ensuring that their replicas survive into the future. This means that, within a sexually reproducing species, parents have an interest in ensuring the survival of their children and siblings have an interest in ensuring the survival of their other siblings, and so on, with the interest being proportional to the degree of relatedness. This is a cooperative problem because, in order to ensure the survival of replica genes, people need to be able to identify their kin and must act in a way that helps the survival of their kin. From the perspective of the MAC, this means we would expect moral norms to develop around the protection of kin. Sure enough, in every society, this is what we find. There are strong moral duties associated with parenthood and loyalty to one’s kin.
2. Mutualisms: A mutualism is any scenario in which a group of people, acting together, can achieve some immediate mutual gain. The classic example in human evolutionary history is big game hunting. Individual humans can hunt on their own but they can only hunt for relatively small animals. Working together, they can chase and kill larger animals. This is a mutual benefit because the food value of big game is greater than the food value of smaller animals. The cooperative problem arises because you need to ensure that people are aware of the mutual benefit and are able and willing to coordinate their efforts to achieve the mutual benefit. There are a variety of tools and tricks that enable them to do so, e.g. focal points, signalling and communication systems, institutional punishment and so on. From the perspective of the MAC, this means we would expect moral norms to develop around the tools and tricks that enable groups to coordinate on mutualisms. And indeed we do. There are strong moral norms in virtually all human societies that encourage group loyalty, adopting local conventions, forming friendships and alliances, and so on. In fact, solving coordination problems might be the most widely discussed evolutionary origin for morality.
3. Exchange Interaction: An exchange interaction is like a mutualism but with one significant difference. The mutual benefit that is derived from the cooperative action is delayed and hence uncertain. You have to wait for someone else to do their part or to return the favour. Most commercial interactions are of this form. One person supplies a good or benefit first and then waits for the other side to do their bit. Informal exchanges are also common. Neighbours sometimes help one another out in times of need, expecting that the favour will be returned in the future. This is a cooperative problem because it’s not easy to guarantee that the other side will do their bit. They might free ride on or exploit the good will of others. There are a variety of tools for ensuring that they will do their bit including, most notably, various forms of group punishment (including gossiping, ostracising, shunning, shaming and physical assault). From the perspective of the MAC, this means we would once again expect reciprocity and promise-keeping to be duties or virtues in most societies. And they clearly are. Indeed, many cultures share a norm of reciprocity that is sometimes called “The Golden Rule” of morality: do unto others as you would have done unto you.
4. Conflict Resolution: On the face of it, conflict scenarios don’t seem like cooperative problems; they seem like the exact opposite. Conflicts usually arise when people are competing for some scarce or contested resource (food, power, sex, territory). These competitions usually look like win-lose scenarios: one side’s gain is the other side’s loss. But Scott Curry argues that most conflict scenarios include within them a non-zero sum element. Violent resolution of conflicts is costly to all sides. There is usually a way of resolving the conflict without resorting to violence that is less costly and can seem like a win-win (both sides get something of what they want or, at least, don’t end up dead). Can the parties to the conflict cooperate on the less costly resolution? Scott Curry identifies four ways of doing this:
4a. Domination: One way to resolve conflicts is for some individuals to be recognised as dominant over other individuals. These people are seen, within their societies, to be powerful, brave, physically (and, perhaps more recently, mentally) superior to others. They are often entitled to take slightly more of the shared resources and they expect deference and loyalty from others. This might not seem, to modern minds, like an ‘ethical’ way of resolving conflicts but it is certainly a practical solution to the problem of conflict. Any society in which people know their place is one in which conflict can be minimised. From the perspective of the MAC, this means we would expect norms and virtues of dominance to be common. For example, we would expect people who are brave, courageous, physically dominant (etc) to be frequently celebrated as morally virtuous. We do see this across most societies. Ancient Greek societies, for example, placed a lot of emphasis on the virtue of physical prowess and bravery. Likewise, there are many societies in which there are norms of honour and social status that support dominance hierarchies.
4b. Submission: This is just the flipside of domination. Domination cannot work as a conflict resolution strategy if everyone tries to be dominant. Some people have to submit and defer to those that are dominant. Recognising this, the MAC would predict that there will be norms and virtues of deference and submission across many societies. This is indeed true: knowing your place and deferring to your social superiors are seen to be moral duties (and virtues) in many societies. Again, this may not seem like an ‘ethical’ solution to a cooperative problem to modern minds. This is because many of us live in liberal societies which are usually premised on an assumption of moral equality. How did we end up with this assumption? It’s hard to say exactly why, but Scott Curry suggests that moral systems built around domination-submission are only sustainable when there are clear power/ability asymmetries in societies. If technology, education and other social reforms remove those asymmetries, then the morality of domination-submission may fade away.
4c. Division: Whenever there is a conflict over a resource that can be divided up into different portions, an obvious conflict resolution strategy is to divide the portions among the competitors. This saves them having to compete for the full resource. From the perspective of the MAC, this means that we would expect norms to develop around the fair division of divisible resources. This could include norms around the division of food and land, for example. Again, it is obvious enough that we do see such norms across most societies. There is, however, a problem here. In game theory, these scenarios are modelled as bargaining problems and there are, in principle, a large number of potential solutions to them. Suppose two people are competing over $100. In principle, any division of that sum of money that exhausts the full $100 (e.g. 20-80; 30-70; 40-60 and so on) is a Nash Equilibrium solution. So we might expect to see high variability in norms of fair division across societies. We do see this to some extent; nevertheless, it is remarkable how many societies tend to gravitate towards roughly equal shares (in the absence of some other norms concerning, say, domination-submission or possession).
4d. Possession: Finally, another way of resolving conflicts about disputed resources is simply to defer to prior possession or ownership. This may not be fair or egalitarian, but it is often a quick and easy way to avoid protracted conflict. From the perspective of the MAC, this means we would expect norms of property and prior possession to emerge across societies. Again, we do see evidence for this, with many societies adopting something like a “finders keepers” rule of thumb when it comes to certain resources.
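The bargaining point made under 4c can be made concrete with a short simulation. The sketch below is my own illustration in Python, using the textbook “Nash demand game” rather than any formal model from Scott Curry’s papers: two players simultaneously demand a share of $100; compatible demands are honoured, while incompatible demands mean bargaining breaks down and both get nothing. Every division that exactly exhausts the $100 comes out as a Nash equilibrium.

```python
from itertools import product

TOTAL = 100  # the contested, divisible resource (here, $100)

def payoff(d1, d2):
    """Nash demand game: compatible demands are honoured;
    incompatible demands mean breakdown and both players get nothing."""
    return (d1, d2) if d1 + d2 <= TOTAL else (0, 0)

def is_nash(d1, d2):
    """Neither player can gain by unilaterally changing their demand."""
    p1, p2 = payoff(d1, d2)
    best1 = max(payoff(a, d2)[0] for a in range(TOTAL + 1))
    best2 = max(payoff(d1, b)[1] for b in range(TOTAL + 1))
    return p1 == best1 and p2 == best2

# Enumerate every pure-strategy profile and keep the equilibria.
equilibria = [(a, b) for a, b in product(range(TOTAL + 1), repeat=2)
              if is_nash(a, b)]
```

Enumerating all profiles finds the 101 exact splits from (0, 100) up to (100, 0), plus one degenerate “breakdown” equilibrium at (100, 100) where both sides over-demand and neither can gain by changing alone. The moral of the exercise is the one made above: the game itself does not privilege the 50-50 split; some further norm has to select it.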

In summary, the idea behind the MAC is that human moral systems derive from attempts to resolve cooperative problems. There are seven basic cooperative problems and hence seven basic forms of human morality. These are often blended and combined in actual human societies (more on this in a moment); nevertheless, you can still see the pure forms of these moral systems in many different societies. The diagram below summarises the model and gives some examples of the ethical norms that derive from the different cooperative problems.

Before I move on, let me say two further things about this basic model of the MAC.

First, let me say something about the evidence in its favour. It may sound plausible enough in theory but is there any good evidence for thinking that all of human morality is, in fact, reducible to an attempt to solve a cooperative problem? This is something that Scott Curry and his colleagues have explored in recent papers. In one particularly interesting study, they conducted a linguistic analysis of the ethnographic record of 60 societies. They selected these societies randomly from an established database of ethnographic records. Using specified keywords and phrases, they then searched for any mention of the seven moral systems outlined above and tried to see whether the behaviours associated with them (e.g. being loyal to your kin; keeping your promises; deferring to social authorities and so on) were positively valenced in those societies. In other words, did people think those behaviours were morally good? The MAC predicted that they would be and, with one exception, this was what they found. In fact, out of 962 recorded observations concerning the moral value of different behaviours, 961 were found to support the MAC. The one exception was among the Chuuk people of Micronesia, where stealing was morally valued, if it was part of a display of dominance. This is a case where one type of cooperative solution (dominance) trumps another (prior possession). So it may not be a true exception. I recommend reading the full study to get a sense of the evidence in support of the MAC.

Second, let me mention some concerns one might have about the MAC. Although I am attracted to its reductive and unifying nature, I am also wary of the attempt to link all moral rules and behaviours back to cooperation. After all, some moral rules that are common in religious traditions, and are often understood to be moral in nature — e.g. purity rules associated with dress, food consumption, and personal hygiene — don’t seem to be obviously linked to cooperation. To be fair, you could argue that they are linked in some distant way. Perhaps adherence to these quirky purity rules is, ultimately, about forging and maintaining a coherent group identity. If you refuse to eat pork, for example, you might be signalling membership of a Jewish or Muslim community and hence solidifying the bonds of that community. But there is a danger that this just distorts reality to fit the theory. I mention this example, incidentally, because purity rules are part of a famous rival to the MAC, Haidt’s “Moral Foundations Theory”. Scott Curry is quite critical of this theory, arguing that the MAC is superior to it in various ways. He might be right about this, but it is beyond the scope of this article to resolve the dispute between these theories.

2. Morality as a Combinatorial System

Despite the problems mentioned above, the MAC is an elegant theory. Among its neat features are its simplicity — all moral rules are explained by a single underlying phenomenon — and its subtle complexity — there are multiple possible solutions to cooperative problems and hence multiple possible moralities. This subtle complexity has been developed in another article by Scott Curry and his colleagues. In this article they argue that morality is a combinatorial system and that the seven basic moralities can combine in different ways to create a vast number of new moral systems.

What does it mean to say that morality is a combinatorial system? An analogy might be helpful. Think about atomic chemistry. It starts with atoms, which are made up of three basic sub-atomic particles: electrons, protons and neutrons (yes, I know there are other sub-atomic particles!). Different combinations of these particles give us different chemical elements. Hydrogen is the simplest, consisting of one proton and one electron (its most common isotope has no neutrons at all). Other chemical elements add in more of these sub-atomic particles. These elements can themselves combine to form more complex molecules. For example, two hydrogen atoms combine with one oxygen atom to form the molecule we call water (H2O). This is a relatively simple molecule. Much more complex molecules exist as well. The crucial point, however, is that from a small set of simple components (three sub-atomic particles), combined together in different ways, we can create all the complexity we see in the world around us.

The claim is that the MAC has similar combinatorial complexity. You have seven basic moral systems and these can be combined together to form more complex moral molecules. For example, a kinship-based morality can combine with a mutualistic morality to create a group-based moral system that is premised on fictive kinship, e.g. the belief that all members of a tribe are brothers and sisters. This fictive kinship-based morality can be sustained through symbols and rituals, even if the actual degree of biological relatedness between the group members is quite limited. Given that all human societies face multiple cooperative problems, and given that human moral systems can be quite complex, it seems plausible to suppose that most of the actual moral systems we see in the world are these more complex moral molecules.

Is there any evidence to support this idea? That’s what Scott Curry and his colleagues set out to determine in their article on moral molecules. They did this by combining pairs of moral systems drawn from the MAC, hypothesising as to what the likely combined moral system would entail, and searching to see whether such combined moral systems are found in human societies. Focusing on twenty-one moral molecules initially, they found some evidence to suggest that all twenty-one existed in actual human societies. I won’t go through every example. One of them was the fictive kinship example mentioned above, which certainly can be found in human societies. Another was an honour-based morality, which Scott Curry and his colleagues claim emerges from the combination of a dominance-based morality and an exchange-based morality (you display your dominance through retaliation against others). The full list of moral molecules can be found in the original article.

How many moral molecules might there be? One of the advantages of the MAC is that it seems possible to apply the mathematics of combinatorics to answer this question. If there are, indeed, seven basic moral systems, then all we need to know is how many combinations of those seven basic moralities are possible. It’s like asking how many combinations of students can be formed from a group of seven. You might be familiar with this calculation. There are 7 groups of one student; 21 groups of two students; 35 groups of three students; 35 groups of four students… and so on up to 1 group of seven students. The mathematical operation here is: (7 choose 1) + (7 choose 2)… + (7 choose 7). The total number of combinations is 127. So by applying the mathematics of combinatorics to the MAC we reach the conclusion that there are 127 possible moral systems.
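As a quick check on that arithmetic, here is a minimal Python snippet (my own illustration, not from the MAC literature) computing the number of non-empty combinations of the seven basic systems:

```python
from math import comb

# Non-empty subsets of the seven basic moral systems:
# (7 choose 1) + (7 choose 2) + ... + (7 choose 7)
total = sum(comb(7, k) for k in range(1, 8))
print(total)  # 127
```

Equivalently, each of the seven systems is either present or absent in a given moral system, giving 2^7 subsets; subtracting the single empty subset leaves 2^7 - 1 = 127.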

Or are there? As Scott Curry and his colleagues argue, the reality is likely to be more complex than this. For starters, there are positive and negative variations of the seven basic moral systems, i.e. it is logically possible for cultures to disvalue norms like ‘be loyal to your family’ or ‘turn the other cheek’. It may not happen very often in reality but it is still a logical possibility. Furthermore, once you start combining basic moral systems together, it is plausible to imagine societies in which one moral system is rejected or deprioritised relative to another. If each of the seven basic systems can be positively valued, negatively valued, or simply absent from a culture’s morality, that gives 3^7 - 1 = 2,186 possible moral systems.
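The 2,186 figure can be checked the same way, treating each of the seven basic systems as having three possible states in a given culture (again my own illustration of the counting, assuming each system varies independently):

```python
from itertools import product

# Each basic moral system can be positively valued ('+'),
# negatively valued ('-'), or simply absent ('0').
configurations = list(product('+-0', repeat=7))

# Exclude the one configuration in which every system is absent.
non_empty = [c for c in configurations if any(x != '0' for x in c)]
print(len(non_empty))  # 2186
```

That is 3^7 - 1 = 2,186: still finite, but already far larger than the 127 purely positive combinations.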

In fact, it’s probably even more complex than this. To this point, the calculations have assumed that there is only one type of moral norm or value associated with the seven basic moral systems. In reality, there may be many values and norms associated with each one. These norms can be combined with norms from other moral systems to create additional moral molecules. When you start adding in all those possible molecules you end up with a truly vast space of possible moral systems. Some of these might be very weird or alien to us, but they are at least logically possible.

This might seem like a pessimistic conclusion. The MAC begins in the hope of reducing morality to some simple underlying components. Although it may succeed in that aim, when we start to think about how those simple components combine into more complex moral systems, we end up with a mind-bogglingly vast space of possibility. Still, I think there are some reasons to be optimistic. Unlike other approaches to morality, the MAC places basic constraints on the space of possible moral systems. This is encouraging when we try to think about the future of morality. How might human moral systems change? What moral system will our grandchildren embrace? If the MAC is right, it will have something to do with solving cooperative problems.

3. The MAC and the Future

Let me close with some speculations about our possible moral futures. In particular, let me consider how technology might change our moral systems. According to the MAC, the way to think about this is to think about the impact of technology on the cooperative problems we face. Does it make these problems easier or harder to solve? Does it create new cooperative problems? How might this, in turn, affect our attachment to certain moral norms?

It seems to me that there are at least three major things that technology can do to cooperation:

(a) It can enable humans to form larger cooperative networks: for example, transport and communications technologies allow us to interact with and coordinate our efforts with more, geographically dispersed, people, thereby securing newer and larger forms of mutual benefit. These larger networks can place greater strain on our traditional cooperative moral norms and values. For example, the usual tricks for maintaining cooperation, such as relying on fictive kinship or gossip or social ostracism, might not work in a more globalised and anonymous world.
(b) It can help to implement new solutions to cooperative problems: some technologies enable faster and cheaper ways of maintaining cooperation, group loyalty, dominance and so on. Weapons technology, surveillance technology, and behaviour manipulation technology, for example, can help to maintain cohesion and coordination in a similar way to tribal punishment, gossip and ostracism (indeed, social media technology enables a globalised form of the latter). There is something of an arms-race to this though. We are tempted to use these technologies to solve cooperative problems arising from the strains placed on our traditional tools as a result of the larger cooperative networks we have formed.
(c) It can help to create new types of cooperative partner: this one is a bit more outlandish and controversial. The MAC assumes that cooperation involves humans cooperating with one another. But increasingly our technology has its own agency and autonomy (contested though this may be). This means that, at least in some cases, technology becomes a new cooperative partner, one that may not share many human traits or values or emotions. This might make it more or less reliable than a human moral cooperator. If machine cooperators are more reliable and easily controllable than human moral partners, this might make it easier to solve cooperative problems without recourse to moral tools and tricks (i.e. it could enable what Roger Brownsword calls the ‘technological management’ of our normative concerns). If they are less reliable and less easily controlled, then this might create a great deal of moral stresses and strains. A real challenge emerges as to how technological agents integrate into our cooperative moral systems. Are they treated as equal moral partners? Dominants? Submissives? These are questions that are actively debated and need resolution.

In conclusion, technology changes how we interact with ourselves and the world around us and thereby puts stress on our traditional cooperative morality. Some moral norms are no longer fit for purpose. Some need to be expanded to address the new technological reality. We might start to value technological solutions to cooperative problems over traditional human-centric ones. Instead of valuing the loyalty and trustworthiness of humans, we might start to value the efficiency and reliability of machines. These are themes and ideas already present in the philosophy of technology, but not ones that are explicitly linked back to the cooperative roots of morality.

There is a lot more to be said. But this is at least a start.

Monday, July 27, 2020

78 - Humans and Robots: Ethics, Agency and Anthropomorphism


Are robots like humans? Are they agents? Can we have relationships with them? These are just some of the questions I explore with today's guest, Sven Nyholm. Sven is an assistant professor of philosophy at Utrecht University in the Netherlands. His research focuses on ethics, particularly the ethics of technology. He is a friend of the show, having appeared twice before. In this episode, we are talking about his recent, great book Humans and Robots: Ethics, Agency and Anthropomorphism.

You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes:

Topics covered in this episode include:
  • Why did Sven play football with a robot? Who won?
  • What is a robot?
  • What is an agent?
  • Why does it matter if robots are agents?
  • Why does Sven worry about a normative mismatch between humans and robots? What should we do about this normative mismatch?
  • Why are people worried about responsibility gaps arising as a result of the widespread deployment of robots?
  • How should we think about human-robot collaborations?
  • Why should human drivers be more like self-driving cars?
  • Can we be friends with a robot?
  • Why does Sven reject my theory of ethical behaviourism?
  • Should we be pessimistic about the future of roboethics?

Friday, July 24, 2020

The Mechanics of Moral Change

I’ve recently become fascinated by moral revolutions. As I have explained before, by “moral revolution” I mean a change in social beliefs and practices about rights, wrongs, goods and bads. I don’t mean a change in the overarching moral truth (if such a thing exists). Moral revolutions strike me as an important topic of study because history tells us that our moral beliefs and practices change, at least to some extent, and it is possible that they will do so again in the future. Can we plan for and anticipate future moral revolutions? That's what I am really interested in.

To get a handle on this question, we need to think about the dynamics of moral change. What is changing and how does it change? Recently, I’ve been reading up on the history and psychology of morality and this article is an attempt to distill, from that reading, some models for understanding the dynamics of moral change. Everything I say here is preliminary and tentative but it might be of interest to some readers.

1. The Mechanics of Morality: a Basic Picture

Let’s start at the most abstract level. What is morality? Philosophers will typically tell you that morality consists of two things: (i) a set of claims about what is and is not valuable (i.e. what is good/bad/neutral) and (ii) a set of claims about what is and is not permissible (i.e. what is right, wrong, forbidden, allowed etc).

Values are things we ought to promote and honour through our behaviour. They include things like pleasure, happiness, love, equality, freedom, well-being and so on. The list of things that are deemed valuable can vary from society to society and across different historical eras. For example, Ancient Greek societies, particularly in the Homeric era, placed significant emphasis on the value of battlefield bravery. Modern liberal societies tend to value the individual pursuit of happiness more than bravery on the battlefield. That said, don’t misinterpret this example. There are many shared values across time and space. Oftentimes the changes between societies are subtle, involving different priority rankings over shared values rather than truly different sets of values.

Rights and wrongs are the specific behavioural rules that we ought to follow. They are usually connected to values. Indeed, in some sense, values are the more fundamental moral variable. A society needs to figure out what it values first before it comes up with specific behavioural rules (though it may be possible that following specific rules causes you to change your values). These behavioural rules can also vary from society to society and across different historical eras. To give a controversial example, it seems that sexual relationships between older men and (teenage) boys were permissible, and even celebrated, in Ancient Greece. In modern liberal societies they are deemed impermissible.

So beliefs about what is good/bad and right/wrong are the fundamental moral variables. It follows that moral revolutions must consist, at a minimum, in changes in what people think is good/bad (additions, subtractions and reprioritisations of values) and right/wrong (new permissions, obligations, prohibitions and so on).

2. Our Moral Machinery

How could these things change? To start to answer this question, I suggest we develop a simple model of the human moral machine. By using the term “human moral machine” I mean to refer to the machine that generates our current moral beliefs and practices. How does that machine currently work? It’s only when we can answer this question that we will get a better sense of how things might change in the future. To be clear, I don’t think of this as a machine in the colloquial sense. It’s not like an iPhone or a laptop computer. It is, rather, a complex social-technical-biological mechanism, made up of objects, processes and functions. I hope no one will mind this terminological preference.

At its most fundamental level, the human moral machine is the human brain. The brain, after all, is the thing that generates our moral beliefs and practices. How does this happen? All brains are, in a sense, evaluative systems. They record sensory inputs and then determine the evaluative content of those inputs. Think about the brain of a creature like a slug. It probes the creature’s local environment, identifying potential food sources (good), mates (good), toxic substances (bad) and predators (bad). The slug itself may not understand any of this — and it may not share the conceptual labels that we apply to its sensory inputs — but its brain is, nevertheless, constantly evaluating its surroundings. It then uses these evaluations to generate actions and behaviours. It often does this in a predictable way. In short, the brain of the slug generates rules for behaviour in response to evaluative content.

Human brains are no different. They are also constantly evaluating their surroundings, categorising sensory inputs according to their evaluative content, and generating rules for action in response. Where humans differ from slugs is in the complexity of our evaluations and the diversity of the behavioural rules we follow. Some of our evaluations and rules are programmed into us as basic evolutionary responses; some we learn from our cultures and peers; some we learn through our own life experiences. It is through this process of evaluation and rule generation that we create moral beliefs and practices. This isn’t to say that moral beliefs and practices are simply reducible to brain-generated evaluations and rules. For one thing, not all such evaluations and rules attract the label “moral”. Moral values and rules are rather a subset of these things that take on a particular importance in human social life. They are evaluations and rules that are shared across a society and used as standards against which to criticise and punish conduct.

To say that the basic moral machine is the human brain is not to say that much. What we really want to know is whether the human brain tends to engage in certain kinds of predictable moral evaluation and rule generation. If it does, then there is some hope for developing a general model of moral change. If it doesn’t — if evaluation and rule-generation is entirely random or too complex to reverse engineer — then the prospects are pretty dim.

Should we be optimistic or pessimistic on this front? Although there are people who think there is a good deal of randomness and complexity to how our brains learn and adapt to the world, there are plenty of others who disagree and think there are predictable patterns to be discerned. This seems to be true even in the moral realm. Although the diversity of human moral systems is impressive, there is also some remarkable affinity across different cultures. Humans tend to share some very similar values across cultures and this can lead to very similar cross-cultural moral rules.

So I shall be optimistic for the time being and suggest that there are some simple, predictable forces at work in the human moral machine. In particular, I am going to suggest that evolutionary forces have given humans a basic moral conscience — i.e. a basic capacity for generating and adhering to moral norms — and that this moral conscience was an adaptive response to particular challenges faced by human societies in the past. In addition to this, I am going to suggest that this basic moral conscience is, in turn, honed and cultivated in each of our own, individual lives, in response to the cultures we grow up in and the particular experiences we have. The combination of these two things — evolved moral conscience plus individual moral development — is what gives us our current set of moral beliefs and practices and places constraints on our openness to moral change.

In the future, changes to our technologies, cultures and environments are likely to agitate this moral machinery and force it to generate new moral evaluations and rules. This model for understanding human moral change is illustrated below.

For the remainder of this post I will not say much about the future of morality. Instead, I will focus on how our moral consciences might have evolved and how they develop over the course of our own lives.

3. Our Evolved Conscience

I suspect there is no fully satisfactory definition of the term “moral conscience” but the one I prefer defines the conscience as an internalised rule or set of rules that humans believe they ought to follow. In other words, it is our internal sense of right and wrong.

In his book Moral Origins — which I will be referring to several times in what follows — Christopher Boehm argues that our conscience is an “internalised imperative” telling us that we ought to follow a particular rule or else. His claim is that this internalised imperative originally took the form of a conditional rule based on a desire to avoid social punishment:

Original Conscience: I ought to do X [because X is a socially cooperative behaviour and if I fail to do X I will be punished]

What happened over time was that the bit in the square brackets got dropped from how we mentally represent the imperative.

Modern Conscience: I ought to do X because it is the right thing to do.

This modern formulation gives moral rules a special mental flavour. To use the Kantian terminology, moral rules seem to take the form of categorical imperatives — rules that we have to follow — not simply rules that we should follow in order to achieve desirable results. Nevertheless, according to Boehm, the bit in the square brackets of the original formulation is crucial to understanding the evolutionary origins of moral conscience.

Most studies of the evolutionary origins of morality take the human instinct for prosociality and altruism as their starting point. They note that humans are much more altruistic than their closest relatives and try to figure out why. This makes sense. Although there is more to morality than altruism, it is fair to say that valuing the lives and well-being of other humans, and following altruistic norms, is one of the hallmarks of human morality. Boehm’s analysis of the origins of human moral conscience tries to capture this. The bit in the square brackets links moral conscience to our desire to fit in with our societies and cooperate with others.

So what gave rise to this cooperative, altruistic tendency? Presumably, the full answer to this is very complex; the simple answer focuses on two things in particular.

The first is that humans, due to their large brains, faced an evolutionary pressure to form close social bonds. How so? In her book Conscience, the philosopher Patricia Churchland explains it in the following way. She argues that it emerged from an evolutionary tradeoff between endothermy (internal generation of heat), flexible learning and infant dependency. Roughly:

  • Humans evolved to fill the cognitive niche, i.e. our evolutionary success was determined by our ability to use our brains, individually and collectively, to solve complex survival problems in changing environments. This meant that we evolved brains that do not follow lots of pre-programmed behavioural rules (like, for example, turtles) but, rather, brains that learn new behavioural rules in response to experiences.
  • In order to have this capacity for flexible learning, we needed to have big, relatively contentless brains. This meant that we had to be born relatively helpless. We couldn’t have all the know-how we needed to survive programmed into us from birth. We had to use experience to figure things out (obviously this isn’t the full picture, but it seems more true of humans than of other animals).
  • In addition to being relatively helpless at birth, our big brains were also costly in terms of energy expenditure. We needed a lot of fuel to keep them growing and developing.
  • All of this made humans very dependent on others from birth. In the first instance, this dependency manifested itself in mother-infant relationships, but then social and cultural forces selected for greater community care and investment in infants. Families and tribes all helped out to produce the food, shelter and clothing (and education and technology) needed to ensure the success of our offspring.
  • The net result was a positive evolutionary feedback loop. We were born highly dependent on others, which encouraged us to form close social bonds, and which encouraged others to invest a lot in our success and well-being. A complex set of moral norms concerning cooperation and group sharing emerged as a result.

This was the evolutionary seed for a moral conscience centering on altruism and prosociality.

I like Churchland’s theory because it highlights evolutionary pressures that are often neglected in the story of human morality. In particular, I like how she places biochemical constraints arising from the energy expenditure of the brain at the centre of her story about the origins of our moral conscience. This makes her story somewhat similar to that of Ian Morris, who makes different technologies of energy capture central to his story about the changes in human morality over the past 40,000 years. 

That said, Churchland’s story cannot be the full picture. As anyone will tell you, cooperation can yield great benefits, but it also has its costs. A group of humans working together, with the aid of simple technologies like spears or axes, can hunt for energy-rich food. They can get more of this food working together than they can individually. But cooperative efforts like this can be exploited by free-riders, who take more than they give to the group effort.

Two types of free riders played an important role in human history:

Deceptive Free Riders: People who pretended to cooperate but actually didn’t and yet still received a benefit from the group.
Bullying Free Riders: People who intimidated or violently suppressed others in order to take more than their fair share of the group spoils (e.g. the alpha male dominant in a group).

A lot of attention has been paid to the problem of deceptive free riders over the years, but Christopher Boehm suggests that the bullying free rider was probably a bigger problem in human evolutionary history. 

He derives evidence for this claim from two main sources. First, studies of modern hunter gatherer tribes suggest that members of these groups all seem to have a strong awareness of and sensitivity to bullying behaviour within their groups. They gossip about it and try to stamp it out as soon as they can. Second, a comparison with our ape brethren highlights that they are beset by problems with bullying alpha males who take more than their fair share. This is particularly true of chimpanzee groups. (It is less true, obviously, of bonobo groups where female alliances work to stamp out bullying behaviour. Richard Wrangham explains the differences between bonobos and chimps as being the result of different food and environmental scarcities in their evolutionary environments.)

As Boehm sees it, then, the only way that humans could develop a strong altruistic moral conscience was if they could solve the bully problem. How did they do this? The answer, according to Boehm, is through institutionalised group punishment, specifically group capital punishment of bullies. By themselves, bullies could dominate others. They were usually stronger and more aggressive and could use their physical capacity to get their way. But bullies could not dominate coalitions of others working together, particularly once those coalitions had access to the same basic technologies that enabled big-game hunting. Suddenly the playing field was levelled. If a coalition could credibly threaten to kill a bully, and if they occasionally carried out that threat, the bullies could be stamped out.

Boehm’s thesis, then, is that the capacity for institutionalised capital punishment established a strong social selective pressure in primitive human societies. Bullies could no longer get their way. They had to develop a capacity for self-control, i.e. to avoid expressing their bullying instincts in order to avoid the wrath of the group. They had to start caring about their moral reputations within a group. If they acquired a reputation for cheating or not following the group rules, they risked being ridiculed, ostracised and, ultimately, killed.

It is this capacity for self-control that developed into the moral conscience — the inner imperative telling us not to step out of line. As Boehm puts it:

We moved from being a “dominance obsessed” species that paid a lot of attention to the power of high-ranking others, to one that talked incessantly about the moral reputations of other group members, began to consciously define its more obvious social problems in terms of right and wrong, and as a routine matter began to deal collectively with the deviants in its bands. 
(Boehm, Moral Origins, p 177)

What’s the evidence for thinking that institutionalised punishment was key to developing our moral conscience? Boehm cites several strands of evidence but his most original comes from a cross-cultural comparison of human hunter gatherer groups. He created a database of all studied human hunter gatherer groups and noted the incidence and importance of capital punishment in those societies. In short, although modern hunter gatherer groups don’t execute people very often, they do care a lot about moral reputations within groups and most have practiced or continue to practice capital punishment in some form or other.

Richard Wrangham, who is also a supporter of the institutionalised punishment thesis, cites other kinds of evidence for this view. In his book The Goodness Paradox he argues that human morality emerged from a process of self-domestication (akin to the process we see in domesticated animals) and that we see evidence for this not just in the behaviour of humans but also in their physiology compared to their chimpanzee cousins (less sexual dimorphism, blunter teeth, less physical strength etc). It’s an interesting argument and he develops it in a very engaging way.

The bottom line for now, however, is that our moral conscience seems to have at least two evolutionary origin points. The first is our big brains and need for flexible learning: this made us dependent on others for long periods of our lives. The second is institutionalised punishment: this created a strong social selective pressure to care about reputation within a group and to favour conformity with group rules.

Understanding these origin points is important because it tells us something about the forces that are likely to alter our moral beliefs and practices in the future. Most humans have a tendency for groupishness: we care about our reputations within our groups and we often try to conform with group expectations. That said, we are not sheep. Our brains often look for loopholes in group rules, trying to exploit things to our advantage. So we are sensitive to the opinions of others and wary of the threat of punishment, but we are willing to break the rules if the cost-benefit ratio is in our favour. This tells us that if we want to change moral beliefs and practices, an obvious way to do this is by manipulating group reputational norms and punishment practices.

4. Our Developed Conscience

So much for the general evolutionary forces shaping our moral conscience. There are obviously some individual differences too. We learn different behavioural rules in different social groups and through different life experiences. We are also, each of us, somewhat different with respect to our personalities and hence our inclinations to follow moral rules.

It would be impossible to review all the forces responsible for these individual differences in this article, but I will mention two important ones in what follows: (i) our basic norm-learning algorithm and (ii) personality types. I base my description of them largely on Patricia Churchland’s discussion in Conscience.

First, let’s talk about how we learn moral rules. Pioneering studies done by the neuroscientists Read Montague and Terry Sejnowski suggest that the human brain follows a basic learning algorithm known as the “reward-prediction-error” algorithm (now popularised as "reinforcement learning" in artificial intelligence research). It works like this (roughly):

  • The brain is constantly active and neurons in the brain have a base rate firing pattern. This base rate firing pattern is essentially telling the brain that nothing unexpected is happening in the world around it.
  • When there is a spike in the firing pattern this is because something unexpectedly good happens (i.e. the brain experiences a “reward”)
  • When there is a drop in the firing pattern this is because something unexpectedly bad happens (i.e. the brain experiences a “punishment”)

This natural variation in firing is exploited by different learning processes. Consider classical conditioning. This is where the brain learns to associate another signal with the presentation of a reward. In the standard example, a dog learns to associate the ringing of a bell with the presentation of food. In classical conditioning, the brain is switching the spike in neural firing from the presentation of the reward to the stimulus that predicts the reward (the ringing of the bell). In other words, the brain links the stimulus with the reward in such a way that it spikes its firing rate in anticipation of the reward. If it makes a mistake, i.e. the spike in firing does not predict the reward, then it learns to dissociate the stimulus from the presentation of the reward. In short, whenever there is a violation of what the brain expects (whenever there is an "error"), there is a change in the brain's firing rate, and this is used to learn new associations.
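To make the update rule concrete, here is a minimal sketch of reward-prediction-error learning in Python. This is a toy Rescorla-Wagner-style model of my own construction, not the actual computational model used by Montague and Sejnowski; the learning rate and trial counts are illustrative assumptions.

```python
def rpe_update(value, reward, learning_rate=0.2):
    """Nudge a predicted value toward the actual outcome,
    in proportion to the prediction error (the 'surprise')."""
    error = reward - value              # prediction error signal
    return value + learning_rate * error

# Classical conditioning: a bell (stimulus) comes to predict food (reward = 1).
v_bell = 0.0
for _ in range(50):                     # repeated bell-then-food pairings
    v_bell = rpe_update(v_bell, reward=1.0)
print(round(v_bell, 2))                 # close to 1.0: the bell now predicts food

# Extinction: the bell is no longer followed by food (reward = 0).
for _ in range(50):
    v_bell = rpe_update(v_bell, reward=0.0)
print(round(v_bell, 2))                 # back near 0.0: the association dissolves
```

The key point is that learning only happens when expectations are violated: once the predicted value matches the reward, the error is zero and nothing changes.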

It turns out that this basic learning algorithm can also help to explain how humans learn moral rules. Our understanding of shared social norms guides our expectations of the social world. We expect people to follow the social norms and when they do not this is surprising. It seems plausible to suppose that we learn new social norms by keeping track of these violations of our expectations.

This has been studied experimentally. Xiang, Lohrenz and Montague performed a lab study to see if groups of people playing the Ultimatum Game learned new norms of gameplay by following the reward-prediction-error process. It turns out they did.

The Ultimatum Game is a simple game in which one player (A) is given a sum of money to divide between himself and another player (B). The rule of the game is that player A can propose whatever division of the money he prefers and player B can either accept this division or reject it (in which case both players get nothing). Typically, humans tend to favour a roughly egalitarian split of the money. Indeed, if the first player proposes an unequal split of the money, the second player tends to punish this by rejecting the offer. That said, there is some cross-cultural variation and, under the right conditions, humans can learn to favour a less egalitarian split.

Xiang, Lohrenz and Montague ran the experiment like this:

  • They had two different types of experimental subjects: donors, who would propose different divisions of $20, and responders, who would accept or reject these divisions.
  • They then ran multiple rounds of the Ultimatum Game (60 in total). They split responders into two different groups in the process. Group one would run through a sequence of games that started with donors offering very low (inegalitarian) sums and ended with high (egalitarian) ones. Group two would run through the opposite sequence, starting with high offers and ending with low ones.
  • In other words, responders in group one were trained to expect unequal divisions initially and then for this to change, while those in group two were trained to expect equal divisions and then for this to change.

The researchers found that, under these circumstances, the responders’ brains seemed to follow a learning process similar to that of reward-prediction-error, something they called “norm prediction error”. In this learning process, the violation of a norm is perceived, by the brain, as an error. This can be manipulated in order to train people to adapt to new norms.

One of the particularly interesting features of this experiment was how the different groups of responders perceived the morality of the different divisions. At round 31 of the game, both sets of responders received the exact same offer: nine dollars. Those in group one (the low-to-high offer group) thought that this was great because it was more generous than they were initially trained to expect (bearing in mind their background cultural norms, which were to expect a fair division). Those in group two thought it was not so great since it was less generous than they had been trained to expect.
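The norm-prediction-error idea can be sketched with the same kind of update rule. The code below is a toy simulation I constructed for illustration, not the researchers' actual analysis: the specific training offers, the starting norm and the learning rate are all assumptions. It nonetheless reproduces the qualitative round-31 result.

```python
def update_norm(expected, offer, learning_rate=0.2):
    """Shift the expected (normal) offer toward what was actually observed."""
    return expected + learning_rate * (offer - expected)

# Both groups start with the background cultural norm: a fair split of $20.
norm_low_to_high = 10.0   # group one: trained on low offers first
norm_high_to_low = 10.0   # group two: trained on high offers first

# Illustrative training offers for the first 30 rounds.
for _ in range(30):
    norm_low_to_high = update_norm(norm_low_to_high, offer=2.0)
    norm_high_to_low = update_norm(norm_high_to_low, offer=10.0)

# Round 31: both groups receive the same $9 offer.
npe_low_to_high = 9.0 - norm_low_to_high   # positive error: better than expected
npe_high_to_low = 9.0 - norm_high_to_low   # negative error: worse than expected
```

The identical $9 offer generates a positive "surprise" for the group trained on low offers and a negative one for the group trained on high offers, which is just what the responders' differing moral reactions would predict.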

The important point about this experiment is that it tells us something about how norms shape our expectations and hence affect the changeability of our moral beliefs and practices. We all become habituated to a certain normative baseline in the course of our own lives. Nevertheless, with the right sequence of environmental stimuli it’s possible that, within certain limits, our norms can shift quite rapidly (Churchland argues that fashion norms are a good example of this).

The other point that is worth mentioning now is how individual personality type can affect our moral conscience. Churchland uses the Big Five personality model (openness, conscientiousness, extroversion-introversion, agreeableness and neuroticism), which is commonly used in psychology, to explain this. She notes that where we fall on the spectrum with respect to these five traits affects how we interact with and respond to moral norms. For example, those who are more extroverted, agreeable and open can be easier to shift from their moral baseline. Those who are more conscientious and neurotic can be harder to shift.

She also offers an interesting hypothesis. She argues that there are two extreme moral personality types:

Psychopaths: These are people who appear to lack a moral conscience. They often know what social morality demands of them but they lack any emotional attachment to the social moral rules. They do not experience them as painful violations of the moral order. These people have an essentially amoral experience of the world (though they can act in what we would call “immoral” ways).
Scrupulants: These are people who have a rigid and inflexible approach to moral rules (possibly rooted in a desire to minimise chaos and uncertainty). They often follow moral rules to their extremes, sometimes neglecting family, friends and themselves in the process. They are almost too moral in their experience of the world. They are overly attached to moral rules.

Identifying these extremes is useful, not only because we sometimes have to deal with psychopaths and scrupulants, but also because we all tend to fall somewhere between these two extremes. Some of us are more attached to existing moral norms than others. Knowing where we all lie on the spectrum is crucial if we are going to understand the dynamics of moral change. (It may also be the case that it is those who lie at the extremes that lead moral revolutions. This is something I suggested in an earlier essay on why we should both hate and love moralists).

5. Conclusion

In summary, moral change is defined by changes in what we value and what we perceive to be right and wrong. The mechanism responsible for this change is, ultimately, the human brain since it is the organ that creates and sustains moral beliefs. But the moral beliefs created and sustained by the human brain are a product of evolution and personal experience.

Evolutionary forces appear to have selected for prosocial, groupish tendencies among humans: most of us want to follow social moral norms and, perhaps more crucially, be perceived to be good moral citizens. That said, most of us are also moral opportunists, open to bending and breaking the rules under the right conditions.

Personal experience shapes the exact moral norms we follow. We learn normative baselines from our communities, and we find deviations from these baselines surprising. We can learn new moral norms, but only under the right circumstances. Furthermore, our susceptibility to moral change is determined, in part, by our personalities. Some people are more rigid and emotionally attached to moral rules; some people are more flexible and open to change.

These are all things to keep in mind when we consider the dynamics of moral revolutions.

Monday, July 20, 2020

77 - Should AI be Explainable?


If an AI system makes a decision, should its reasons for making that decision be explainable to you? In this episode, I chat to Scott Robbins about this issue. Scott is currently completing his PhD in the ethics of artificial intelligence at the Technical University of Delft. He has a B.Sc. in Computer Science from California State University, Chico and an M.Sc. in Ethics of Technology from the University of Twente. He is a founding member of the Foundation for Responsible Robotics and a member of the 4TU Centre for Ethics and Technology. Scott is skeptical of AI as a grand solution to societal problems and argues that AI should be boring.

You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here). 

Show Notes

Topics covered include:
  • Why do people worry about the opacity of AI?
  • What's the difference between explainability and transparency?
  • What's the moral value or function of explainable AI?
  • Must we distinguish between the ethical value of an explanation and its epistemic value?
  • Why is it so technically difficult to make AI explainable?
  • Will we ever have a technical solution to the explanation problem?
  • Why does Scott think there is a Catch-22 involved in insisting on explainable AI?
  • When should we insist on explanations and when are they unnecessary?
  • Should we insist on using boring AI?

Tuesday, July 14, 2020

The Duty to Rescue (Sample Class)

[What follows is the text for a sample “introduction to law”/“critical thinking and law” class that I sometimes run. It is about the duty of rescue and some of the competing intuitions people have about whether such a duty should be recognised in law. The class is basic and is intended for new students or students thinking about studying law. I typically run this class by getting students to vote on their answers to each of the hypothetical questions, discussing their votes with their peers, and then facilitating a class discussion about these votes. This often ends up with me posing multiple variations on the hypotheticals presented below. The class can be expanded or contracted by increasing/decreasing the number of hypotheticals or case studies and by increasing/decreasing the number of student activities within the class. The minimalist version would just cover the initial hypotheticals and the mock jury/judge exercise]

One of the distinctive features of our legal system — like all legal systems inherited from the United Kingdom — is that it is based on the common law. In a common law system, legal rules are extracted from cases. People come to court with stories. They tell these stories to judges (and sometimes juries). The judges determine what the ruling should be, sometimes creating new rules but more often by basing their judgment on rules derived from older cases. In legal parlance, we call this “following the precedent” (i.e. following the rule set down in older cases). A judge’s ability to apply such old rules depends on whether the new cases are sufficiently similar to the older cases (i.e. are they analogous?).

There is a basic form to all such precedential reasoning. Although it is rarely explicitly stated, what is typically happening here is that judges are following this reasoning process:

  • (Premise) An older case — Case A — stipulates a rule that “if x happens, then legal consequence y should follow”.
  • (Premise) The present case — Case B — is similar to Case A in all important respects.
  • (Conclusion) Therefore, the rule in Case A should apply to case B.

But where do the old rules come from in the first place and what justifies them? Usually, there is some rationale underlying the old rules. There is some reason for thinking that they are a good thing. This is either because they are consistent with basic moral principles or they help to maintain social order or economic prosperity. Oftentimes, judges don’t reflect too much on these underlying rationales. But sometimes they do. Sometimes the cases they are confronted with are not all that similar to the old cases. It would be a stretch to say that they are analogous. In those cases, judges have to decide whether the old rules should be extended to cover the new cases. This is often not a straightforward issue and requires some examination of the rationales underlying the old rules.
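The precedential schema can be caricatured in a few lines of code. This is purely illustrative, with made-up case facts and no pretence of capturing real legal reasoning: the rule from an old case carries over only when the new case contains all the facts the old rule deems important.

```python
def apply_precedent(precedent, new_case):
    """Apply the old rule only if the new case shares all the relevant facts."""
    if precedent["facts"].issubset(new_case["facts"]):
        return precedent["consequence"]
    return "no precedent applies; the court must examine the rule's rationale"

# Hypothetical Case A: a rule extracted from an older decision.
case_a = {
    "facts": {"defendant caused danger", "rescue was easy"},
    "consequence": "duty to rescue",
}

# Case B shares all the important facts of Case A, so the rule applies.
case_b = {"facts": {"defendant caused danger", "rescue was easy", "in a park"}}
print(apply_precedent(case_a, case_b))   # -> duty to rescue

# Case C lacks a key fact, so the analogy breaks down.
case_c = {"facts": {"rescue was easy"}}
print(apply_precedent(case_a, case_c))
```

The interesting cases for a judge are precisely the Case C situations: the facts only partially match, so the rationale behind the old rule has to do the work.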

In the remainder of this class, we are going to consider how this problem arises in practice by considering some hypothetical and real cases involving the so-called “duty to rescue”.

1. Duty to Rescue - Hypothetical Cases
By a “duty to rescue”, I mean a duty to come to the assistance of someone who is in trouble. Ultimately, we will be considering this duty from the perspective of tort law. This means we will be considering whether or not you have a right to sue someone who fails to come to your rescue. First, however, let’s consider what our basic moral intuitions tell us about the duty to rescue.

Consider, first, the following case:

Case 1: You and your friend are out walking in the park one day. You chance upon a lake. For devilment, you decide to push your friend into the lake. Unfortunately, your friend can’t swim. He starts to flail about in the water and calls for help. You are a good swimmer and could easily come to his rescue. Do you have a duty to rescue your friend in this case?

[Instruction to students: Think about it for a minute, vote on your answer, and, if possible, discuss the reasons for your vote with a peer. This is not a knowledge test. What does your “gut” tell you should happen in this case?]

If you are like most people, you probably think you do have a duty to rescue your friend in this case. Why? Because you are the one who endangered his life and you could easily save him. Letting him die would be dreadful. Presumably, friends drowning in lakes like this are just one example of a more general class of cases in which your actions endanger another person’s life and you could easily eliminate this endangerment. What’s the rule that should apply to all these cases? If you were asked to set this rule down in a legal code how would you do it? Perhaps your answer would look something like this:

Rule 1: If person A, through their actions, causes a potential injury to person B, and if person A could easily rescue person B from that potential injury, then person A has a duty to rescue person B.

That’s a bit cumbersome but get used to it. Legal rules are often formulated in cumbersome ways.

Now consider a second case:

Case 2: You are out walking in the park one day. You chance upon a lake. There is a child drowning in the lake, flailing its arms and asking for help. You are a good swimmer and could easily rescue the child. Do you have a duty to rescue the child?

[Instruction to students: Think about it for a minute, vote on your answer, and, if possible, discuss the reasons for your vote with a peer. Again, it is not a knowledge test. What does your “gut” tell you?]

Views can vary on this case but, in my experience, most people think you probably do have a duty to rescue in this case. True, you didn’t cause the child to get into trouble, but it costs you relatively little to get them out of trouble. You would have to be some kind of moral monster to just walk on by without a care in the world. And, again, presumably rescuing children from lakes is just one instance of a general class of cases where rescuing someone is relatively easy. What’s the rule that should apply to all these cases? It might look something like this:

Rule 2: If person B is in trouble (i.e. there is a serious risk to them of injury or death), and if person A is in a position to rescue them with minimal cost to themselves, then A has a duty to rescue B.

Do you like this rule? If you are like me, you might feel a bit uneasy. This rule covers scenarios like Case 2, but maybe it covers a lot more too. Consider the following case:

Case 3: You are a doctor. You receive a phone call from a colleague in a neighbouring town. A patient of theirs is suffering from a serious illness. You are one of only a handful of doctors in the region who can perform the surgery that this patient needs to survive. The surgery is relatively simple (from your perspective). It would take 10 minutes and you have performed it thousands of times before. It would take a couple of hours to travel back and forth to the neighbouring town. Your colleague has promised to reimburse you for all your expenses. Do you have a duty to perform the surgery?

[Instruction to students: Think about it for a minute, vote on your answer, and, if possible, discuss the reasons for your vote with a peer]

Hmmm… not so sure about this? Some of you might resolutely stick with the view that you do have a duty to perform the surgery. I suspect, however, that many of you will not share that view. Surely doctors don’t have a duty to perform surgeries on anyone that needs them? That seems to demand too much. You might be asking yourself some questions about the story too. Would it matter if the town was further away? Or if the surgery was a bit more difficult to perform (e.g. took more time or had a less certain result)? If those things matter, why do they matter? Is it not unprincipled or inconsistent to claim that they do? Maybe we should stick with Rule 2 even if it does demand too much?

Do you see what has happened here? We started with a case in which there seemed to be a clear-cut duty to rescue. We formulated a rule based on this case, then proceeded to a case that was similar, but slightly different. This case also involved a duty to rescue, but required a modification to the original rule. We then looked at a third case to which this rule should, by rights, apply, but in which we don’t feel comfortable with its application. We now think there should be some limits to the duty to rescue.

We encounter this phenomenon over and over again in common law.

2. Tort Law and the Duty to Rescue
Let’s now consider some law. I mentioned at the outset that I am going to look at the duty to rescue from the perspective of tort law. What does that mean? Tort law is concerned with the right to sue people for compensation. If I visit your house, slip on your wet floor and injure myself, that’s (potentially) a private wrong. Under the rules of tort law, I (might) have the right to sue you for compensation. If I am successful, the compensation I receive should correct the wrong done to me (e.g. pay for my medical bills, give me some financial benefit to ameliorate the physical harm).

Tort law is different from criminal law. Tort law involves legal actions brought by one private individual against another for breach of some duty of care. Criminal law involves legal actions brought by the state (or public) against someone for breach of the criminal law.* Tort law ends with compensation being paid to the victim of the breach of the duty of care. Criminal law ends with the guilty party being punished for their wrongdoing.

Tort law is also different from contract law, though the two are more closely related than either is to criminal law. Contract law involves legal actions being brought by private parties for breach of contract. In order to sue in contract, you must have a contractual relationship with the other party. In tort law, you don’t have to have a contractual relationship with the other side. You just have to have a relationship with them such that they owe you a legally recognised duty of care.

What kind of relationship is that? This turns out to be the critical question. The most famous case in tort law is Donoghue v Stevenson, a Scottish case from 1932 that was decided, on appeal, by the House of Lords. The facts of the case are that Mrs Donoghue’s friend bought her a bottle of ginger beer at a cafe. She drank most of it and then discovered the decomposed remains of a snail at the bottom of the bottle. She fell ill and sued the manufacturer of the ginger beer, Mr Stevenson. Because her friend had bought the drink, Mrs Donoghue had no contract with anyone; the central issue in the case was therefore whether Mr Stevenson owed her a duty of care in tort.

Previous case law recognised a duty of care in some cases but not in cases like that of Mrs Donoghue. In the House of Lords, Lord Atkin considered this issue at length. He accepted that there had to be limits to how extensive the duty of care in tort law could be, otherwise we could all sue each other all the time for failing to make one another’s lives better. On the other hand, he felt that the duty should extend to cases like Mrs Donoghue’s. So, appealing directly to the Biblical injunction to love your neighbour (and the lawyer’s question that prompts the parable of the Good Samaritan), he came up with the following ’Neighbour Principle’:

…rules of law arise which limit the range of complainants and the extent of their remedy. The rule that you are to love your neighbour becomes in law, you must not injure your neighbour; and the lawyer's question, Who is my neighbour? receives a restricted reply. You must take reasonable care to avoid acts or omissions which you can reasonably foresee would be likely to injure your neighbour. Who, then, in law, is my neighbour? The answer seems to be – persons who are so closely and directly affected by my act that I ought reasonably to have them in contemplation as being so affected when I am directing my mind to the acts or omissions which are called in question.

The case of Donoghue v Stevenson is, consequently, authority (precedent) for the idea that we have a duty to take reasonable care to avoid acts or omissions that we can reasonably foresee would be likely to injure our “neighbours”, i.e. those closely and directly affected by our conduct.

That said, subsequent case law suggests that the neighbour principle may be too extensive and there have been attempts to add additional restrictions to the concept. Furthermore, the facts of the case do not really line up with our duty of rescue cases. In Donoghue, the beer manufacturer is creating a product that they are selling to the world. They are actively taking steps to bring that product to the market. It stands to reason that they have a duty to check that their product won’t cause injuries to the people that might consume it. What about rescuing someone from some ill fate of their own doing?

There are many tort law cases that seem to explicitly reject the idea that there is a legally recognised duty to rescue someone in distress. Some of these cases are very similar to the hypothetical cases we discussed earlier. Consider the following US examples:

Buch v Amory Manufacturing Co. (1898): The plaintiff was an 8 year old boy who trespassed into the defendant’s mill. While there, his hand was crushed in a machine. The plaintiff claimed that the defendant had a duty to protect him from potential harms while on his property (which could have been discharged, in this case, by simply removing him from the property). The court disagreed. There was no duty to protect the 8 year old trespasser.

Hurley v Eddingfield (1901): The defendant was the plaintiff’s doctor. The plaintiff rang the doctor in serious distress and asked for help. For no apparent reason, the doctor refused to come to his aid. The plaintiff subsequently died and his estate sued the doctor for failing to come to his rescue. The court sided with the doctor. They held that licensed physicians are not under a legal obligation to accept patients in distress. If there is such an obligation, it is a moral one not a legal one.

Osterlind v Hill (1928): The plaintiff hired a canoe from the defendant whilst intoxicated. The canoe capsized and the plaintiff clung to the edge of it calling out for help. The defendant did not come to his rescue. The plaintiff subsequently drowned and his estate sued the defendant. There were several grounds for their claim, including that the defendant should not have hired the canoe to the plaintiff in his condition and that the defendant should have come to his rescue when he was calling out for help. The court rejected them all. The canoe was hired out legally and the defendant was under no obligation to rescue the plaintiff in these circumstances.

These are old cases, and they are from another jurisdiction, but they are nevertheless often cited to support the idea that tort law does not recognise a general duty of rescue. Indeed, whenever confronted with a scenario like this, courts tend to be very reluctant to expand the concept of a duty of care to include a duty to rescue. Why might this be?

3. Against the Duty of Rescue
[If this has not already come up in the discussion of Cases 1, 2 and 3, ask students to come up with arguments against recognising a duty of rescue. This can be done in general, plenary discussion or in breakout groups, depending on time and previous engagement from the group]

There are a few reasons to be sceptical about a legally recognised duty of rescue. One of the most common arguments against it is the slippery slope argument. This isn’t so much a specific argument against the duty of rescue as it is a general style of argument against certain policies or rules. It crops up over and over in tort law. Whenever you read judgments in which courts worry about “opening the floodgates” of litigation, you know they are making a slippery slope argument.

How does this argument work? Go back to the earlier hypothetical cases and compare Cases 2 and 3. While most people agree that recognising a duty of rescue in Case 2 is desirable, the worry is that if you recognise it there, then you also have to recognise it in Case 3, which seems much less desirable. In other words, if we apply Rule 2 in Case 2 then we must, by logical necessity, slide down the slope and apply Rule 2 to Case 3. Since we don’t want to do that, it follows that we shouldn’t apply Rule 2 to Case 2, no matter how desirable it might seem.
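To see the logical point more starkly, here is a toy sketch (in Python, purely illustrative; the predicate names are my own invention, not the law's) that encodes Rule 2 as a simple conditional. On a plausible reading of the facts, the rule fires in both Case 2 and Case 3, which is exactly the over-inclusiveness the slippery slope argument trades on:

```python
# Toy encoding of Rule 2 (illustrative only; inputs are my own labels).
# Rule 2: if B is in serious trouble and A can rescue at minimal cost,
# then A has a duty to rescue B.

def rule_2_applies(serious_trouble: bool, minimal_cost_to_rescuer: bool) -> bool:
    """Return True if Rule 2 imposes a duty to rescue."""
    return serious_trouble and minimal_cost_to_rescuer

# Case 2: a child is drowning; you are a strong swimmer at the shore.
case_2 = rule_2_applies(serious_trouble=True, minimal_cost_to_rescuer=True)

# Case 3: the patient's life is at risk; the surgery is easy for you
# and your expenses are reimbursed, so (arguably) the cost is minimal.
case_3 = rule_2_applies(serious_trouble=True, minimal_cost_to_rescuer=True)

print(case_2, case_3)  # the rule fires in both cases
```

The sketch makes plain that nothing in Rule 2 itself distinguishes the two cases; any distinction would have to come from an extra condition we have not yet articulated.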

It could be that courts, when confronted with cases like Hurley v Eddingfield, recognise that, although the doctor did something wrong and, in an ideal world, should be reprimanded for it, recognising a right to sue in that case would force them to recognise it in a bunch of other cases where it would be less desirable. So, even if it seems harsh, they stop the slide down the slippery slope and hold tough on the idea that there is no duty of rescue.

This argument isn’t entirely satisfying. Although slippery slope arguments are common in tort law, they are, by themselves, incomplete. We cannot object to sliding down the slippery slope unless we can say something about why the thing that lies at the bottom of the slope is so bad. What is it that worries courts so much? It could be a purely selfish worry. They might worry that if they expanded the scope of tort law they would have more work to do and they wouldn’t be able to cope. But let’s assume their motives are more genuine and they have some reasons for thinking that sliding down the slope would be a bad thing. Here are three reasons that might be motivating them:

The Overdemandingness Reason: They might worry that if they recognised a general duty of rescue (even just in cases where rescue is easy and relatively costless) it would be too demanding. People would be expected to go to great lengths to help others. Doctors would be legally obliged to run to the aid of anyone they could assist and so on. People would buckle under the pressure and life would become very unpleasant.

The Supererogation Reason: There are some things that we are obliged to do, as a matter of duty. There are other things that we are not obliged to do but that would be good if we did them. These things are above and beyond the call of duty. In philosophy, these are referred to as “supererogatory” acts (from the Latin for “paying in excess”). Some people argue that we need to leave room for this type of act so that people can demonstrate their moral virtue. Not everything that is morally good ought to be legally obliged. Not recognising a duty of rescue in tort law is one way of leaving room for the supererogatory.

The Freedom Reason: If we recognised a general duty of rescue, it would impinge on people’s freedom. They would find their actions and choices constrained by new legal obligations. In general, we want people to be free to live their lives as they see fit, with minimal intrusions into their freedom. Recognising a duty of rescue would be a step too far.

Are any of these persuasive? I’m not sure, but it is worth noting, in relation to the supererogation point, that many jurisdictions, including Ireland, have introduced so-called “Good Samaritan” laws that ensure that people who do come to the rescue of others cannot be sued for their good faith efforts to provide assistance. This kind of law removes a disincentive to be a Good Samaritan and thus could be seen as an effort to preserve room for supererogatory acts in social life.

4. Mock Trial Exercise: Yania v Bigan
Let’s close with another class exercise. As noted above, one distinction that is sometimes made is between cases in which a person takes steps that actively endanger another person’s life and cases where no such steps have been taken but the person is, nevertheless, in danger. This is the difference between Case 1 and Case 2 in our original set of hypotheticals. The general consensus seems to be that there is a duty of rescue in the first kind of case but not in the second. The reason for this is that you can create a principled distinction between the two cases that prevents any slide down the slippery slope to a general duty of rescue. The first type of case has to involve some active intervention that endangers another person.

But what does it mean to actively endanger another person? Consider the following (real) case:

Yania v Bigan (1959): The plaintiff and defendant were running rival coal strip-mining operations. Coal strip-mining involves digging trenches to remove coal deposits. The defendant had placed a pump in the bottom of one of his trenches in order to remove water. At the time of the incident in this case, there were several feet of water in the trench. The defendant asked the plaintiff to assist him in removing the water pump. Apparently, he started to taunt the plaintiff and urged him to jump into the trench to remove the pump. The plaintiff succumbed to the taunts, jumped in, and subsequently drowned. The defendant did nothing to help him out. The plaintiff’s estate then sued the defendant for a failure to rescue him.

Imagine that you are the judge or jury in this case. What do you think should happen? Should the court recognise a duty of rescue in this scenario or not?

[Allow students to consider and discuss this for several minutes. Get them to feed back their verdicts to the class as a whole. Discuss for as long as seems appropriate]

So what actually happened in this case? Well, the Supreme Court of Pennsylvania held that there was no duty to rescue. They held that Yania was a competent adult who should have been able to resist Bigan’s insults and taunts. Consequently, his decision to jump into the trench was his own free choice and he had to bear the consequences of it. The verdict might have been different if Yania had been a child or had suffered from some mental disability.

* This isn’t strictly true. There can be private criminal actions. In general, the attempt to find bright-line distinctions between tort law and criminal law is never perfect. The best we can do is say that some actions count as crimes and some actions count as torts, and lawyers think they know the difference.