Friday, September 18, 2020

81 - Consumer Credit, Big Tech and AI Crime


In today's episode, I talk to Nikita Aggarwal about the legal and regulatory aspects of AI and algorithmic governance. We focus, in particular, on three topics: (i) algorithmic credit scoring; (ii) the problem of 'too big to fail' tech platforms and (iii) AI crime. Nikita is a DPhil (PhD) candidate at the Faculty of Law at Oxford, as well as a Research Associate at the Oxford Internet Institute's Digital Ethics Lab. Her research examines the legal and ethical challenges posed by emerging, data-driven technologies, with a particular focus on machine learning in consumer lending. Prior to entering academia, she was an attorney in the legal department of the International Monetary Fund, where she advised on financial sector law reform in the euro area.

You can listen to the episode below or download here. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).


Show Notes

Topics discussed include:

  • The digitisation, datafication and disintermediation of consumer credit markets
  • Algorithmic credit scoring
  • The problems of risk and bias in credit scoring
  • How law and regulation can address these problems
  • Tech platforms that are too big to fail
  • What should we do if Facebook fails?
  • The forms of AI crime
  • How to address the problem of AI crime

Relevant Links


Thursday, August 13, 2020

80 - Bias, Algorithms and Criminal Justice


Lots of algorithmic tools are now used to support decision-making in the criminal justice system. Many of them are criticised for being biased. What should be done about this? In this episode, I talk to Chelsea Barabas about this very question. Chelsea is a PhD candidate at MIT, where she examines the spread of algorithmic decision making tools in the US criminal legal system. She works with interdisciplinary researchers, government officials and community organizers to unpack and transform mainstream narratives around criminal justice reform and data-driven decision making. She is currently a Technology Fellow at the Carr Center for Human Rights Policy at the Harvard Kennedy School of Government. Formerly, she was a research scientist for the AI Ethics and Governance Initiative at the MIT Media Lab.

You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).



Show notes

Topics covered in this show include:

  • The history of algorithmic decision-making in criminal justice
  • Modern AI tools in criminal justice
  • The problem of biased decision-making
  • Examples of bias in practice
  • The FAT (Fairness, Accountability and Transparency) approach to bias
  • Can we de-bias algorithms using formal, technical rules?
  • Can we de-bias algorithms through proper review and oversight?
  • Should we be more critical of the data used to build these systems?
  • Problems with pre-trial risk assessment measures
  • The abolitionist perspective on criminal justice reform

Relevant Links


Wednesday, August 5, 2020

79 - Is There A Techno-Responsibility Gap?



What happens if an autonomous machine does something wrong? Who, if anyone, should be held responsible for the machine's actions? That's the topic I discuss in this episode with Daniel Tigard. Daniel Tigard is a Senior Research Associate in the Institute for History & Ethics of Medicine, at the Technical University of Munich. His current work addresses issues of moral responsibility in emerging technology. He is the author of several papers on moral distress and responsibility in medical ethics as well as, more recently, papers on moral responsibility and autonomous systems. 

You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

          

Show Notes


Topics discussed include:

 
  • What is responsibility? Why is it so complex?
  • The three faces of responsibility: attribution, accountability and answerability
  • Why are people so worried about responsibility gaps for autonomous systems?
  • What are some of the alleged solutions to the "gap" problem?
  • Who are the techno-pessimists and who are the techno-optimists?
  • Why does Daniel think that there is no techno-responsibility gap?
  • Is our application of responsibility concepts to machines overly metaphorical?
 

Relevant Links



Friday, July 31, 2020

Is Morality All About Cooperation?



Morality can often seem pretty diverse. There are moral rules governing our physical and sexual interactions with other human beings; there are moral rules relating to how we treat and respect property; there are moral rules concerning the behaviour of officials in government office; and, according to some religions, there are even moral rules for how we prepare and eat food. Is there anything that unites all these moral rules? Is there a single explanatory root for morality as a whole?

According to the theory of Morality As Cooperation (MAC for short), there is. Originally developed by Oliver Scott Curry, the MAC claims that all human moral rules have their origin in attempts to solve problems of cooperation. Since there are many such problems, and many potential solutions to those problems, there are consequently many diverse forms of morality. Nevertheless, despite this diversity, if you unpick the basic logic of all moral rules, you can link them back to an attempt to solve a problem of cooperation.

This is obviously a bold theory. It is highly reductive in the sense that it holds that all of human morality can be reduced to a single underlying phenomenon: cooperation. People will rightly ask if the diverse forms of human morality really are reducible in this way. Does MAC effectively capture the lived reality of human moral systems? Does it simply explain away the diversity and plurality?

These are legitimate questions. Nevertheless, if true, MAC has some exciting implications. It tells us something about the basic structure of all moral systems. It also tells us something about the possible future forms of morality. If a purported moral rule does not ultimately link back to an attempt to resolve a cooperative problem, MAC predicts that it will not be accepted or respected as a moral rule. If a social or technical development threatens or undermines an existing solution to a cooperative problem, it is likely to force us to generate new forms of morality. I find this latter implication particularly exciting since it links the MAC to my own current interests in understanding the moral revolutions of the future.

In the remainder of this article, I want to explain how the MAC works and then consider how the MAC might shed light on the future of morality. I will do this in three stages. First, I will give a basic explanation of the MAC. Second, I will consider a recent amendment to the MAC, proposed by Scott Curry and his colleagues, suggesting that morality can be understood as a combinatorial system with a finite (but vast) number of possible forms. Finally, I will consider the implications of all this for the future of morality, focusing on some specific technological threats to our cooperative systems and how we might generate new moral systems to resolve those threats. Unfortunately, I won’t be overly precise in this last section of the article. I will be painting with a broad and speculative brush.


1. Morality as Cooperation: The Basic Theory

MAC takes as its starting point the view that human morality is about cooperation. In itself, this is not a particularly ground-breaking insight. Most moral philosophers have thought that morality has something to do with how we interact with other people — with “what we owe each other” in one popular formulation. Scott Curry, in his original paper on the MAC, does a good job reviewing some of the major works in moral philosophy and moral psychology, showing how each of them tends to link morality to cooperation.

Some people might query this and say that certain aspects of human morality don’t seem to be immediately or obviously about cooperation, but one of the claims of MAC is that these seemingly distinctive areas of morality can ultimately be linked back to cooperation. For what it is worth, I am willing to buy the idea that morality is about cooperation as a starting hypothesis. I have some concerns, which I will air below, but even if these concerns are correct I think it is fair to say that morality is, in large part, about cooperation.

As I say, this is not particularly ground-breaking. Where the MAC becomes more interesting is in claiming that there is a finite set of basic cooperative problems faced by human societies, that these problems have been mapped out by evolutionary game theory, and that each of these problems generates a set of solutions. This set of solutions defines the space of possible human moral systems. In other words, at its most abstract level, the MAC can be characterised like this:


Morality as Cooperation: All of human morality — i.e. any rule, virtue, norm (etc) that humans call “moral” — is an attempt to solve a cooperative problem.
 

A cooperative problem is any non-zero-sum interaction between humans. Non-zero-sum interactions are situations in which groups of humans can work together to generate a “win-win” outcome — an outcome in which all people (or at least a majority) can benefit — but in which there is usually some impediment or barrier that must be overcome in order to ensure cooperation.

The MAC can be made more precise by identifying the basic cooperative problems faced by humans and their potential solutions. In his work, Scott Curry claims that there are seven basic cooperative problems, each of which is recurrent in evolutionary history (not just human history) and each of which is linked to a specific manifestation of human morality. I will describe the seven problems in what follows. The first three problems are distinct. The fourth problem breaks down into four distinct sub-types of problem, giving us seven problems in total. They are:


1. Kinship Interaction: This is perhaps the most fundamental evolutionary problem of cooperation. Genes have an interest (in a behaviouristic sense) in ensuring that their replicas survive into the future. This means that, within a sexually reproducing species, parents have an interest in ensuring the survival of their children and siblings have an interest in ensuring the survival of their other siblings, and so on, with the interest being proportional to the degree of relatedness. This is a cooperative problem because, in order to ensure the survival of replica genes, people need to be able to identify their kin and must act in a way that helps the survival of their kin. From the perspective of the MAC, this means we would expect moral norms to develop around the protection of kin. Sure enough, in every society, this is what we find. There are strong moral duties associated with parenthood and loyalty to one’s kin.
 
2. Mutualisms: A mutualism is any scenario in which a group of people, acting together, can achieve some immediate mutual gain. The classic example in human evolutionary history is big game hunting. Individual humans can hunt on their own, but only for relatively small animals. Working together, they can chase and kill larger animals. This is a mutual benefit because the food value of big game is greater than the food value of smaller animals. The cooperative problem arises because you need to ensure that people are aware of the mutual benefit and are able and willing to coordinate their efforts to achieve it. There are a variety of tools and tricks that enable them to do so, e.g. focal points, signalling and communication systems, institutional punishment and so on. From the perspective of the MAC, this means we would expect moral norms to develop around the tools and tricks that enable groups to coordinate on mutualisms. And indeed we do. There are strong moral norms in virtually all human societies that encourage group loyalty, adopting local conventions, forming friendships and alliances, and so on. In fact, solving coordination problems might be the most widely discussed evolutionary origin for morality.
 
3. Exchange Interaction: An exchange interaction is like a mutualism but with one significant difference. The mutual benefit that is derived from the cooperative action is delayed and hence uncertain. You have to wait for someone else to do their part or to return the favour. Most commercial interactions are of this form. One person supplies a good or benefit first and then waits for the other side to do their bit. Informal exchanges are also common. Neighbours sometimes help one another out in times of need, expecting that the favour will be returned in the future. This is a cooperative problem because it’s not easy to guarantee that the other side will do their bit. They might free ride on or exploit the good will of others. There are a variety of tools for ensuring that they will do their bit including, most notably, various forms of group punishment (including gossiping, ostracising, shunning, shaming and physical assault). From the perspective of the MAC, this means we would once again expect reciprocity and promise-keeping to be duties or virtues in most societies. And they clearly are. Indeed, many cultures share a norm of reciprocity that is sometimes called “The Golden Rule” of morality: do unto others as you would have done unto you.
 
4. Conflict Resolution: On the face of it, conflict scenarios don’t seem like cooperative problems; they seem like the exact opposite. Conflicts usually arise when people are competing for some scarce or contested resource (food, power, sex, territory). These competitions usually look like win-lose scenarios: one side’s gain is the other side’s loss. But Scott Curry argues that most conflict scenarios include within them a non-zero sum element. Violent resolution of conflicts is costly to all sides. There is usually a way of resolving the conflict without resorting to violence that is less costly and can seem like a win-win (both sides get something of what they want or, at least, don’t end up dead). Can the parties to the conflict cooperate on the less costly resolution? Scott Curry identifies four ways of doing this:
 
4a. Domination: One way to resolve conflicts is for some individuals to be recognised as dominant over others. These people are seen, within their societies, to be powerful, brave, and physically (and, perhaps more recently, mentally) superior to others. They are often entitled to take slightly more of the shared resources and they expect deference and loyalty from others. This might not seem, to modern minds, like an ‘ethical’ way of resolving conflicts, but it is certainly a practical solution to the problem of conflict. Any society in which people know their place is one in which conflict can be minimised. From the perspective of the MAC, this means we would expect norms and virtues of dominance to be common. For example, we would expect people who are brave, courageous, and physically dominant (etc.) to be frequently celebrated as morally virtuous. We do see this across most societies. Ancient Greek societies, for example, placed a lot of emphasis on the virtues of physical prowess and bravery. Likewise, there are many societies in which norms of honour and social status support dominance hierarchies.
 
4b. Submission: This is just the flipside of domination. Domination cannot work as a conflict resolution strategy if everyone tries to be dominant. Some people have to submit and defer to those who are dominant. Recognising this, the MAC predicts that there will be norms and virtues of deference and submission across many societies. This is indeed true: knowing your place and deferring to your social superiors are seen as moral duties (and virtues) in many societies. Again, this may not seem like an ‘ethical’ solution to a cooperative problem to modern minds. This is because many of us live in liberal societies, which are usually premised on an assumption of moral equality. How did we end up with this assumption? It’s hard to say exactly, but Scott Curry suggests that moral systems built around domination-submission are only sustainable when there are clear power/ability asymmetries in societies. If technology, education and other social reforms remove those asymmetries, then the morality of domination-submission may fade away.
 
4c. Division: Whenever there is a conflict over a resource that can be divided into portions, an obvious conflict resolution strategy is to divide the portions among the competitors. This saves them having to compete for the full resource. From the perspective of the MAC, this means that we would expect norms to develop around the fair division of divisible resources. This could include norms around the division of food and land, for example. Again, it is obvious enough that we do see such norms across most societies. There is, however, a problem here. In game theory, these scenarios are modelled as bargaining problems and there are, in principle, a large number of potential solutions to them. Suppose two people are competing over $100. In principle, any division of that sum that exhausts the full $100 (e.g. 20-80; 30-70; 40-60 and so on) is a Nash equilibrium solution. So we might expect to see high variability in norms of fair division across societies. We do see this to some extent; nevertheless, it is remarkable how many societies gravitate towards roughly equal shares (in the absence of other norms concerning, say, domination-submission or possession).
 
4d. Possession: Finally, another way of resolving conflicts about disputed resources is simply to defer to prior possession or ownership. This may not be fair or egalitarian, but it is often a quick and easy way to avoid protracted conflict. From the perspective of the MAC, this means we would expect norms of property and prior possession to emerge across societies. Again, we do see evidence for this, with many societies adopting something like a “finders keepers” rule of thumb when it comes to certain resources.
 

In summary, the idea behind the MAC is that human moral systems derive from attempts to resolve cooperative problems. There are seven basic cooperative problems and hence seven basic forms of human morality. These are often blended and combined in actual human societies (more on this in a moment); nevertheless, you can still see the pure forms of these moral systems in many different societies. The diagram below summarises the model and gives some examples of the ethical norms that derive from the different cooperative problems.


Before I move on, let me say two further things about this basic model of the MAC.

First, let me say something about the evidence in its favour. It may sound plausible enough in theory but is there any good evidence for thinking that all of human morality is, in fact, reducible to an attempt to solve a cooperative problem? This is something that Scott Curry and his colleagues have explored in recent papers. In one particularly interesting study, they conducted a linguistic analysis of the ethnographic record of 60 societies. They selected these societies randomly from an established database of ethnographic records. Using specified keywords and phrases, they then searched for any mention of the seven moral systems outlined above and tried to see whether the behaviours associated with them (e.g. being loyal to your kin; keeping your promises; deferring to social authorities and so on) were positively valenced in those societies. In other words, did people think those behaviours were morally good? The MAC predicted that they would be and, with one exception, this was what they found. In fact, out of 962 recorded observations concerning the moral value of different behaviours, 961 were found to support the MAC. The one exception was among the Chuuk people of Micronesia, where stealing was morally valued, if it was part of a display of dominance. This is a case where one type of cooperative solution (dominance) trumps another (prior possession). So it may not be a true exception. I recommend reading the full study to get a sense of the evidence in support of the MAC.

Second, let me mention some concerns one might have about the MAC. Although I am attracted to its reductive and unifying nature, I am also wary of the attempt to link all moral rules and behaviours back to cooperation. After all, some moral rules that are common in religious traditions, and often understood to be moral in nature — e.g. purity rules associated with dress, food consumption, and personal hygiene — don’t seem to be obviously linked to cooperation. To be fair, you could argue that they are linked in some distant way. Perhaps adherence to these quirky purity rules is, ultimately, about forging and maintaining a coherent group identity. If you refuse to eat pork, for example, you might be signalling membership of a Jewish or Muslim community and hence solidifying the bonds of that community. But there is a danger that this just distorts reality to fit the theory. I mention this example, incidentally, because purity rules are part of a famous rival to the MAC, Haidt’s “Moral Foundations Theory”. Scott Curry is quite critical of this theory, arguing that the MAC is superior to it in various ways. He might be right about this, but it is beyond the scope of this article to resolve the dispute between these theories.


2. Morality as a Combinatorial System

Despite the problems mentioned above, the MAC is an elegant theory. Among its neat features are its simplicity — all moral rules are explained by a single underlying phenomenon — and its subtle complexity — there are multiple possible solutions to cooperative problems and hence multiple possible moralities. This subtle complexity has been developed in another article by Scott Curry and his colleagues. In this article they argue that morality is a combinatorial system and that the seven basic moralities can combine together in different forms to create a vast number of new moral systems.

What does it mean to say that morality is a combinatorial system? An analogy might be helpful. Think about atomic chemistry. It starts with atoms, which are made up of three basic sub-atomic particles: electrons, protons and neutrons (yes, I know there are other sub-atomic particles!). Different combinations of these particles give us different chemical elements. Hydrogen is the simplest, consisting of one proton and one electron (in its most common form it has no neutrons). Other chemical elements add more of these sub-atomic particles. These elements can themselves combine to form more complex molecules. For example, two hydrogen atoms combine with one oxygen atom to form the molecule we call water (H2O). This is a relatively simple molecule. Much more complex molecules exist as well. The crucial point, however, is that from a small set of simple components (three sub-atomic particles), combined in different ways, we can create all the complexity we see in the world around us.

The claim is that the MAC has similar combinatorial complexity. You have seven basic moral systems and these can be combined to form more complex moral molecules. For example, a kinship-based morality can combine with a mutualistic morality to create a group-based moral system premised on fictive kinship, e.g. the belief that all members of a tribe are brothers and sisters. This fictive-kinship-based morality can be sustained through symbols and rituals, even if the actual degree of biological relatedness between the group members is quite limited. Given that all human societies face multiple cooperative problems, and given that human moral systems can be quite complex, it seems plausible to suppose that most of the actual moral systems we see in the world are these more complex moral molecules.

Is there any evidence to support this idea? That’s what Scott Curry and his colleagues set out to determine in their article on moral molecules. They did this by combining pairs of moral systems drawn from the MAC, hypothesising as to what the combined moral system would entail, and searching to see whether such combined moral systems are found in human societies. Focusing initially on twenty-one moral molecules, they found evidence to suggest that all twenty-one exist in actual human societies. I won’t go through every example. One of them was the fictive kinship example mentioned above, which certainly can be found in human societies. Another was an honour-based morality, which Scott Curry and his colleagues claim emerges from the combination of a dominance-based morality and an exchange-based morality (you display your dominance through retaliation against others). The full list of moral molecules can be found in the original article.

How many moral molecules might there be? One of the advantages of the MAC is that it seems possible to apply the mathematics of combinatorics to answer this question. If there are, indeed, seven basic moral systems, then all we need to know is how many combinations of those seven basic moralities are possible. It’s like asking how many combinations of students can be formed from a group of seven. You might be familiar with this calculation. There are 7 groups of one student; 21 groups of two students; 35 groups of three students; 35 groups of four students… and so on up to 1 group of seven students. The mathematical operation here is: (7 choose 1) + (7 choose 2)… + (7 choose 7). The total number of combinations is 127. So by applying the mathematics of combinatorics to the MAC we reach the conclusion that there are 127 possible moral systems.
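The binomial sum above is easy to verify. Here is a quick sanity check in Python (nothing here comes from Curry's work beyond the count of seven basic systems):

```python
from math import comb

# Sum of (7 choose k) for k = 1..7: every non-empty
# combination of the seven basic moral systems.
total = sum(comb(7, k) for k in range(1, 8))
print(total)  # 127
```

Equivalently, each of the seven systems is either in a combination or not, giving 2^7 = 128 subsets, minus the empty one: 2^7 - 1 = 127.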

Or are there? As Scott Curry and his colleagues argue, the reality is likely to be more complex than this. For starters, there are positive and negative variations of the seven basic moral systems, i.e. it is logically possible for cultures to disvalue norms like ‘be loyal to your family’ or ‘turn the other cheek’. It may not happen very often in reality, but it is still a logical possibility. Furthermore, once you start combining basic moral systems together, it is more plausible to imagine societies in which one moral system is rejected or deprioritised relative to another. This means there are 127 possible negative moral systems as well, and you need to think about how those might combine with their positive variations. Each of the seven basic systems can now be endorsed, rejected, or simply absent, which gives at least 3^7 - 1 = 2,186 possible moral systems.
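The larger figure follows the same combinatorial logic: if each basic system has three possible states rather than two, the count of non-empty combinations recovers the number cited above (a sketch of the arithmetic, not Curry's own calculation):

```python
# Each of the seven basic moral systems can be endorsed (positive),
# rejected (negative), or simply absent: three states per system.
# Excluding the empty combination leaves 3^7 - 1 systems.
states_per_system = 3
basic_systems = 7
total = states_per_system ** basic_systems - 1
print(total)  # 2186
```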

In fact, it’s probably even more complex than this. To this point, the calculations have assumed that there is only one type of moral norm or value associated with the seven basic moral systems. In reality, there may be many values and norms associated with each one. These norms can be combined with norms from other moral systems to create additional moral molecules. When you start adding in all those possible molecules you end up with a truly vast space of possible moral systems. Some of these might be very weird or alien to us, but they are at least logically possible.

This might seem like a pessimistic conclusion. The MAC begins in the hope of reducing morality to some simple underlying components. Although it may succeed in that aim, when we start to think about how those simple components combine into more complex moral systems, we end up with a mind-bogglingly vast space of possibility. Still, I think there are some reasons to be optimistic. Unlike other approaches to morality, the MAC places basic constraints on the space of possible moral systems. This is encouraging when we try to think about the future of morality. How might human moral systems change? What moral system will our grandchildren embrace? If the MAC is right, it will have something to do with solving cooperative problems.


3. The MAC and the Future

Let me close with some speculations about our possible moral futures. In particular, let me consider how technology might change our moral systems. According to the MAC, the way to think about this is to think about the impact of technology on the cooperative problems we face. Do they make these problems easier to solve or harder to solve? Do they create new cooperative problems? How might this, in turn, affect our attachment to certain moral norms?

It seems to me that there are at least three major things that technology can do to cooperation:


(a) It can enable humans to form larger cooperative networks: for example, transport and communications technologies allow us to interact with, and coordinate our efforts with, more geographically dispersed people, thereby securing newer and larger forms of mutual benefit. These larger networks can place greater strain on our traditional cooperative moral norms and values. For example, the usual tricks for maintaining cooperation, such as relying on fictive kinship, gossip or social ostracism, might not work in a more globalised and anonymous world.
 
(b) It can help to implement new solutions to cooperative problems: some technologies enable faster and cheaper ways of maintaining cooperation, group loyalty, dominance and so on. Weapons technology, surveillance technology, and behaviour-manipulation technology, for example, can help to maintain cohesion and coordination in a similar way to tribal punishment, gossip and ostracism (indeed, social media technology enables a globalised form of the latter). There is something of an arms race here, though: we are tempted to use these technologies to solve cooperative problems that arise from the strains placed on our traditional tools by the larger cooperative networks we have formed.
 
(c) It can help to create new types of cooperative partner: this one is a bit more outlandish and controversial. The MAC assumes that cooperation involves humans cooperating with one another. But increasingly our technology has its own agency and autonomy (contested though this may be). This means that, at least in some cases, technology becomes a new cooperative partner, one that may not share many human traits or values or emotions. This might make it more or less reliable than a human moral cooperator. If machine cooperators are more reliable and easily controllable than human moral partners, this might make it easier to solve cooperative problems without recourse to moral tools and tricks (i.e. it could enable what Roger Brownsword calls the ‘technological management’ of our normative concerns). If they are less reliable and less easily controlled, then this might create a great deal of moral stress and strain. A real challenge emerges as to how technological agents should be integrated into our cooperative moral systems. Are they to be treated as equal moral partners? Dominants? Submissives? These questions are actively debated and need resolution.
 

In conclusion, technology changes how we interact with one another and with the world around us, and thereby puts stress on our traditional cooperative morality. Some moral norms are no longer fit for purpose. Some need to be expanded to address the new technological reality. We might start to value technological solutions to cooperative problems over traditional human-centric ones. Instead of valuing the loyalty and trustworthiness of humans, we start to value the efficiency and reliability of machines. These themes and ideas are already present in the philosophy of technology, but they are not explicitly linked back to the cooperative roots of morality.

There is a lot more to be said. But this is at least a start.


Monday, July 27, 2020

78 - Humans and Robots: Ethics, Agency and Anthropomorphism



Are robots like humans? Are they agents? Can we have relationships with them? These are just some of the questions I explore with today's guest, Sven Nyholm. Sven is an assistant professor of philosophy at Utrecht University in the Netherlands. His research focuses on ethics, particularly the ethics of technology. He is a friend of the show, having appeared twice before. In this episode, we are talking about his recent, great book, Humans and Robots: Ethics, Agency and Anthropomorphism.

You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here). 


Show Notes:

Topics covered in this episode include:
  • Why did Sven play football with a robot? Who won?
  • What is a robot?
  • What is an agent?
  • Why does it matter if robots are agents?
  • Why does Sven worry about a normative mismatch between humans and robots? What should we do about this normative mismatch?
  • Why are people worried about responsibility gaps arising as a result of the widespread deployment of robots?
  • How should we think about human-robot collaborations?
  • Why should human drivers be more like self-driving cars?
  • Can we be friends with a robot?
  • Why does Sven reject my theory of ethical behaviourism?
  • Should we be pessimistic about the future of roboethics?

Relevant Links


 

Friday, July 24, 2020

The Mechanics of Moral Change



I’ve recently become fascinated by moral revolutions. As I have explained before, by “moral revolution” I mean a change in social beliefs and practices about rights, wrongs, goods and bads. I don’t mean a change in the overarching moral truth (if such a thing exists). Moral revolutions strike me as an important topic of study because history tells us that our moral beliefs and practices change, at least to some extent, and it is possible that they will do so again in the future. Can we plan for and anticipate future moral revolutions? That's what I am really interested in.

To get a handle on this question, we need to think about the dynamics of moral change. What is changing and how does it change? Recently, I’ve been reading up on the history and psychology of morality and this article is an attempt to distill, from that reading, some models for understanding the dynamics of moral change. Everything I say here is preliminary and tentative but it might be of interest to some readers.


1. The Mechanics of Morality: a Basic Picture

Let’s start at the most abstract level. What is morality? Philosophers will typically tell you that morality consists of two things: (i) a set of claims about what is and is not valuable (i.e. what is good/bad/neutral) and (ii) a set of claims about what is and is not permissible (i.e. what is right, wrong, forbidden, allowed etc).

Values are things we ought to promote and honour through our behaviour. They include things like pleasure, happiness, love, equality, freedom, well-being and so on. The list of things that are deemed valuable can vary from society to society and across different historical eras. For example, Ancient Greek societies, particularly in the Homeric era, placed significant emphasis on the value of battlefield bravery. Modern liberal societies tend to value the individual pursuit of happiness more than bravery on the battlefield. That said, don’t misinterpret this example. There are many shared values across time and space. Oftentimes the changes between societies are subtle, involving different priority rankings over shared values rather than truly different sets of values.

Rights and wrongs are the specific behavioural rules that we ought to follow. They are usually connected to values. Indeed, in some sense, values are the more fundamental moral variable. A society needs to figure out what it values first before it comes up with specific behavioural rules (though it may be possible that following specific rules causes you to change your values). These behavioural rules can also vary from society to society and across different historical eras. To give a controversial example, it seems that sexual relationships between older men and (teenage) boys were permissible, and even celebrated, in Ancient Greece. In modern liberal societies they are deemed impermissible.

So beliefs about what is good/bad and right/wrong are the fundamental moral variables. It follows that moral revolutions must consist, at a minimum, in changes in what people think is good/bad (additions, subtractions and reprioritisations of values) and right/wrong (new permissions, obligations, prohibitions and so on).


2. Our Moral Machinery

How could these things change? To start to answer this question, I suggest we develop a simple model of the human moral machine. By using the term “human moral machine” I mean to refer to the machine that generates our current moral beliefs and practices. How does that machine currently work? It’s only when we can answer this question that we will get a better sense of how things might change in the future. To be clear, I don’t think of this as a machine in the colloquial sense. It’s not like an iPhone or a laptop computer. It is, rather, a complex social-technical-biological mechanism, made up of objects, processes and functions. I hope no one will mind this terminological preference.

At its most fundamental level, the human moral machine is the human brain. The brain, after all, is the thing that generates our moral beliefs and practices. How does this happen? All brains are, in a sense, evaluative systems. They record sensory inputs and then determine the evaluative content of those inputs. Think about the brain of a creature like a slug. It probes the creature’s local environment identifying potential food sources (good), mates (good), toxic substances (bad) and predators (bad). The slug itself may not understand any of this — and it may not share the conceptual labels that we apply to its sensory inputs — but its brain is, nevertheless, constantly evaluating its surroundings. It then uses these evaluations to generate actions and behaviours. It often does this in a predictable way. In short, the brain of the slug generates rules for behaviour in response to evaluative content.

Human brains are no different. They are also constantly evaluating their surroundings, categorising sensory inputs according to their evaluative content, and generating rules for action in response. Where humans differ from slugs is in the complexity of our evaluations and the diversity of the behavioural rules we follow. Some of our evaluations and rules are programmed into us as basic evolutionary responses; some we learn from our cultures and peers; some we learn through our own life experiences. It is through this process of evaluation and rule generation that we create moral beliefs and practices. This isn’t to say that moral beliefs and practices are simply reducible to brain-generated evaluations and rules. For one thing, not all such evaluations and rules attract the label “moral”. Moral values and rules are rather a subset of these things that take on a particular importance in human social life. They are evaluations and rules that are shared across a society and used as standards against which to criticise and punish conduct.

To say that the basic moral machine is the human brain is not to say that much. What we really want to know is whether the human brain tends to engage in certain kinds of predictable moral evaluation and rule generation. If it does, then there is some hope for developing a general model of moral change. If it doesn’t — if evaluation and rule-generation are entirely random or too complex to reverse engineer — then the prospects are pretty dim.

Should we be optimistic or pessimistic on this front? Although there are people who think there is a good deal of randomness and complexity to how our brains learn and adapt to the world, there are plenty of others who disagree and think there are predictable patterns to be discerned. This seems to be true even in the moral realm. Although the diversity of human moral systems is impressive, there is also some remarkable affinity across different cultures. Humans tend to share some very similar values across cultures and this can lead to very similar cross-cultural moral rules.

So I shall be optimistic for the time being and suggest that there are some simple, predictable forces at work in the human moral machine. In particular, I am going to suggest that evolutionary forces have given humans a basic moral conscience — i.e. a basic capacity for generating and adhering to moral norms — and that this moral conscience was an adaptive response to particular challenges faced by human societies in the past. In addition to this, I am going to suggest that this basic moral conscience is, in turn, honed and cultivated in each of our own, individual lives, in response to the cultures we grow up in and the particular experiences we have. The combination of these two things — evolved moral conscience plus individual moral development — is what gives us our current set of moral beliefs and practices and places constraints on our openness to moral change.

In the future, changes to our technologies, cultures and environments are likely to agitate this moral machinery and force it to generate new moral evaluations and rules. This model for understanding human moral change is illustrated below.


For the remainder of this post I will not say much about the future of morality. Instead, I will focus on how our moral consciences might have evolved and how they develop over the course of our own lives.


3. Our Evolved Conscience

I suspect there is no fully satisfactory definition of the term “moral conscience” but the one I prefer defines the conscience as an internalised rule or set of rules that humans believe they ought to follow. In other words, it is our internal sense of right and wrong.

In his book Moral Origins — which I will be referring to several times in what follows — Christopher Boehm argues that our conscience is an “internalised imperative” telling us that we ought to follow a particular rule or else. His claim is that this internalised imperative originally took the form of a conditional rule based on a desire to avoid social punishment:


Original Conscience: I ought to do X [because X is a socially cooperative behaviour and if I fail to do X I will be punished]
 

What happened over time was that the bit in the square brackets got dropped from how we mentally represent the imperative.


Modern Conscience: I ought to do X because it is the right thing to do.
 

This modern formulation gives moral rules a special mental flavour. To use the Kantian terminology, moral rules seem to take the form of categorical imperatives — rules that we have to follow — not simply rules that we should follow in order to achieve desirable results. Nevertheless, according to Boehm, the bit in the square brackets of the original formulation is crucial to understanding the evolutionary origins of moral conscience.

Most studies of the evolutionary origins of morality take the human instinct for prosociality and altruism as their starting point. They note that humans are much more altruistic than their closest relatives and try to figure out why. This makes sense. Although there is more to morality than altruism, it is fair to say that valuing the lives and well-being of other humans, and following altruistic norms, is one of the hallmarks of human morality. Boehm’s analysis of the origins of human moral conscience tries to capture this. The bit in the square brackets links moral conscience to our desire to fit in with our societies and cooperate with others.

So what gave rise to this cooperative, altruistic tendency? Presumably, the full answer to this is very complex; the simple answer focuses on two things in particular.

The first is that humans, due to their large brains, faced an evolutionary pressure to form close social bonds. How so? In her book Conscience, the philosopher Patricia Churchland explains it in the following way. She argues that it emerged from an evolutionary tradeoff between endothermy (internal generation of heat), flexible learning and infant dependency. Roughly:


  • Humans evolved to fill the cognitive niche, i.e. our evolutionary success was determined by our ability to use our brains, individually and collectively, to solve complex survival problems in changing environments. This meant that we evolved brains that do not follow lots of pre-programmed behavioural rules (like, for example, turtles) but, rather, brains that learn new behavioural rules in response to experiences.
  • In order to have this capacity for flexible learning, we needed to have big, relatively contentless brains. This meant that we had to be born relatively helpless. We couldn’t have all the know-how we needed to survive programmed into us from birth. We had to use experience to figure things out (obviously this isn’t the full picture, but it seems more true of humans than of other animals).
  • In addition to being relatively helpless at birth, our big brains were also costly in terms of energy expenditure. We needed a lot of fuel to keep them growing and developing.
  • All of this made humans very dependent on others from birth. In the first instance, this dependency manifested itself in mother-infant relationships, but then social and cultural forces selected for greater community care and investment in infants. Families and tribes all helped out to produce the food, shelter and clothing (and education and technology) needed to ensure the success of our offspring.
  • The net result was a positive evolutionary feedback loop. We were born highly dependent on others, which encouraged us to form close social bonds, and which encouraged others to invest a lot in our success and well-being. A complex set of moral norms concerning cooperation and group sharing emerged as a result.


This was the evolutionary seed for a moral conscience centering on altruism and prosociality.

I like Churchland’s theory because it highlights evolutionary pressures that are often neglected in the story of human morality. In particular, I like how she places biochemical constraints arising from the energy expenditure of the brain at the centre of her story about the origins of our moral conscience. This makes her story somewhat similar to that of Ian Morris, who makes different technologies of energy capture central to his story about the changes in human morality over the past 40,000 years. 

That said, Churchland’s story cannot be the full picture. As anyone will tell you, cooperation can yield great benefits, but it also has its costs. A group of humans working together, with the aid of simple technologies like spears or axes, can hunt for energy-rich food. They can get more of this food working together than they can individually. But cooperative efforts like this can be exploited by free-riders, who take more than they give to the group effort.

Two types of free riders played an important role in human history:


Deceptive Free Riders: People who pretended to cooperate but actually didn’t and yet still received a benefit from the group.
 
Bullying Free Riders: People who intimidated or violently suppressed others in order to take more than their fair share of the group spoils (e.g. the alpha male dominant in a group).
 

A lot of attention has been paid to the problem of deceptive free riders over the years, but Christopher Boehm suggests that the bullying free rider was probably a bigger problem in human evolutionary history. 

He derives evidence for this claim from two main sources. First, studies of modern hunter gatherer tribes suggest that members of these groups all seem to have a strong awareness of and sensitivity to bullying behaviour within their groups. They gossip about it and try to stamp it out as soon as they can. Second, a comparison with our ape brethren highlights that they are beset by problems with bullying alpha males who take more than their fair share. This is particularly true of chimpanzee groups. (It is less true, obviously, of bonobo groups where female alliances work to stamp out bullying behaviour. Richard Wrangham explains the differences between bonobos and chimps as being the result of different food and environmental scarcities in their evolutionary environments.)

As Boehm sees it, then, the only way that humans could develop a strong altruistic moral conscience was if they could solve the bully problem. How did they do this? The answer, according to Boehm, is through institutionalised group punishment, specifically group capital punishment of bullies. By themselves, bullies could dominate others. They were usually stronger and more aggressive and could use their physical capacity to get their way. But bullies could not dominate coalitions of others working together, particularly once those coalitions had access to the same basic technologies that enabled big-game hunting. Suddenly the playing field was levelled. If a coalition could credibly threaten to kill a bully, and if they occasionally carried out that threat, the bullies could be stamped out.

Boehm’s thesis, then, is that the capacity for institutionalised capital punishment established a strong social selective pressure in primitive human societies. Bullies could no longer get their way. They had to develop a capacity for self-control, i.e. to avoid expressing their bullying instincts in order to avoid the wrath of the group. They had to start caring about their moral reputations within a group. If they acquired a reputation for cheating or not following the group rules, they risked being ridiculed, ostracised and, ultimately, killed.

It is this capacity for self-control that developed into the moral conscience — the inner imperative telling us not to step out of line. As Boehm puts it:


We moved from being a “dominance obsessed” species that paid a lot of attention to the power of high-ranking others, to one that talked incessantly about the moral reputations of other group members, began to consciously define its more obvious social problems in terms of right and wrong, and as a routine matter began to deal collectively with the deviants in its bands. 
(Boehm, Moral Origins, p 177)
 

What’s the evidence for thinking that institutionalised punishment was key to developing our moral conscience? Boehm cites several strands of evidence but his most original evidence comes from a cross-cultural comparison of human hunter gatherer groups. He created a database of all studied human hunter gatherer groups and noted the incidence and importance of capital punishment in those societies. In short, although modern hunter gatherer groups don’t execute people very often, they do care a lot about moral reputations within groups and most have practiced or continue to practice capital punishment in some form or other.

Richard Wrangham, who is also a supporter of the institutionalised punishment thesis, cites other kinds of evidence for this view. In his book The Goodness Paradox he argues that human morality emerged from a process of self-domestication (akin to the process we see in domesticated animals) and that we see evidence for this not just in the behaviour of humans but also in their physiology compared to their chimpanzee cousins (less sexual dimorphism, blunter teeth, less physical strength etc.). It’s an interesting argument and he develops it in a very engaging way.

The bottom line for now, however, is that our moral conscience seems to have at least two evolutionary origin points. The first is our big brains and need for flexible learning: this made us dependent on others for long periods of our lives. The second is institutionalised punishment: this created a strong social selective pressure to care about reputation within a group and to favour conformity with group rules.

Understanding these origin points is important because it tells us something about the forces that are likely to alter our moral beliefs and practices in the future. Most humans have a tendency toward groupishness: we care about our reputations within our groups and we often try to conform to group expectations. That said, we are not sheep. Our brains often look for loopholes in group rules, trying to exploit things to our advantage. So we are sensitive to the opinions of others and wary of the threat of punishment, but we are willing to break the rules if the cost-benefit ratio is in our favour. This tells us that if we want to change moral beliefs and practices, an obvious way to do this is by manipulating group reputational norms and punishment practices.


4. Our Developed Conscience

So much for the general evolutionary forces shaping our moral conscience. There are obviously some individual differences too. We learn different behavioural rules in different social groups and through different life experiences. We are also, each of us, somewhat different with respect to our personalities and hence our inclinations to follow moral rules.

It would be impossible to review all the forces responsible for these individual differences in this article, but I will mention two important ones in what follows: (i) our basic norm-learning algorithm and (ii) personality types. I base my description of them largely on Patricia Churchland’s discussion in Conscience.

First, let’s talk about how we learn moral rules. Pioneering studies done by the neuroscientists Read Montague and Terry Sejnowski suggest that the human brain follows a basic learning algorithm known as the “reward-prediction-error” algorithm (the same error-driven logic that underpins reinforcement learning in artificial intelligence research). It works like this (roughly):


  • The brain is constantly active and neurons in the brain have a base rate firing pattern. This base rate firing pattern is essentially telling the brain that nothing unexpected is happening in the world around it.
  • When there is a spike in the firing pattern this is because something unexpectedly good happens (i.e. the brain experiences a “reward”)
  • When there is a drop in the firing pattern this is because something unexpectedly bad happens (i.e. the brain experiences a “punishment”)

This natural variation in firing is exploited by different learning processes. Consider classical conditioning. This is where the brain learns to associate another signal with the presentation of a reward. In the standard example, a dog learns to associate the ringing of a bell with the presentation of food. In classical conditioning, the brain is switching the spike in neural firing from the presentation of the reward to the stimulus that predicts the reward (the ringing of the bell). In other words, the brain links the stimulus with the reward in such a way that it spikes its firing rate in anticipation of the reward. If it makes a mistake, i.e. the spike in firing does not predict the reward, then it learns to dissociate the stimulus from the reward. In short, whenever there is a violation of what the brain expects (whenever there is an "error"), there is a change in the brain's firing rate, and this is used to learn new associations.
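The error-driven logic described above can be sketched in a few lines of code. The following is a minimal, illustrative simulation of prediction-error learning (in the style of Rescorla-Wagner updating); the learning rate and reward values are arbitrary choices for illustration, not taken from Montague and Sejnowski's studies.

```python
# A minimal sketch of reward-prediction-error learning.
# The stimulus (a bell) is repeatedly paired with a reward (food),
# and the prediction error drives the learned association.

def train(n_trials=50, learning_rate=0.2, reward=1.0):
    value = 0.0  # current prediction of reward following the stimulus
    for _ in range(n_trials):
        # The "surprise": actual reward minus expected reward.
        prediction_error = reward - value
        # Update the prediction in proportion to the surprise.
        value += learning_rate * prediction_error
    return value

v = train()
# After enough pairings, the prediction converges toward the reward,
# and the prediction error (the surprise) shrinks toward zero.
```

If the reward were then withheld (reward = 0), the same update rule would produce negative prediction errors and gradually unlearn the association, which mirrors the "dissociation" described above.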

It turns out that this basic learning algorithm can also help to explain how humans learn moral rules. Our understanding of shared social norms guides our expectations of the social world. We expect people to follow social norms, and when they do not, this is surprising. It seems plausible to suppose that we learn new social norms by keeping track of these surprises: the deviations from the norms we expect people to follow.

This has been studied experimentally. Xiang, Lohrenz and Montague performed a lab study to see if groups of people playing the Ultimatum Game learned new norms of gameplay by following the reward-prediction-error process. It turns out they did.

The Ultimatum Game is a simple game in which one player (A) is given a sum of money to divide between himself and another player (B). The rule of the game is that player A can propose whatever division of the money he prefers and player B can either accept this division or reject it (in which case both players get nothing). Typically, humans tend to favour a roughly egalitarian split of the money. Indeed, if the first player proposes an unequal split of the money, the second player tends to punish this by rejecting the offer. That said, there is some cross-cultural variation and, under the right conditions, humans can learn to favour a less egalitarian split.
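The structure of the game is simple enough to sketch directly. Below is a minimal illustration of a single round; modelling the responder with a fixed fairness threshold is just one common simplification, not the only way responders actually behave.

```python
def ultimatum(total, offer, threshold):
    """One round of the Ultimatum Game.

    Player A proposes `offer` out of `total` to player B.
    B accepts iff the offer meets B's fairness threshold;
    otherwise both players get nothing.
    Returns (A's payoff, B's payoff).
    """
    if offer >= threshold:
        return (total - offer, offer)
    return (0, 0)

# An egalitarian responder (threshold of 10 out of 20) punishes
# a stingy proposal at a cost to themselves:
low = ultimatum(20, 5, threshold=10)    # -> (0, 0)
# ...but accepts an even split:
fair = ultimatum(20, 10, threshold=10)  # -> (10, 10)
```

Note the key feature the game captures: rejection is costly for the responder too, so rejecting an unequal split is a form of norm enforcement rather than payoff maximisation.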

Xiang, Lohrenz and Montague ran the experiment like this:


  • They had two different types of experimental subjects: donors, who would propose different divisions of $20, and responders, who would accept or reject these divisions.
  • They then ran multiple rounds of the Ultimatum Game (60 in total). They split responders into two different groups in the process. Group one would run through a sequence of games that started with donors offering very low (inegalitarian) sums but ended with high (egalitarian) ones. Group two would run through the opposite sequence, starting with high offers and ending with low ones.
  • In other words, responders in group one were trained to expect unequal divisions initially and then for this to change, while those in group two were trained to expect equal divisions and then for this to change.

The researchers found that, under these circumstances, the responders’ brains seemed to follow a learning process similar to that of reward-prediction-error, something they called “norm prediction error”. In this learning process, the violation of a norm is perceived, by the brain, as an error. This can be manipulated in order to train people to adapt to new norms.

One of the particularly interesting features of this experiment was how the different groups of responders perceived the morality of the different divisions. At round 31 of the game, both sets of responders received the exact same offer: nine dollars. Those in group one (the low-to-high offer group) thought that this was great because it was more generous than they were initially trained to expect (bearing in mind their background cultural norms, which were to expect a fair division). Those in group two thought it was not so great since it was less generous than they had been trained to expect.
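The group difference at round 31 can be illustrated with the same prediction-error logic. In the hypothetical sketch below, a responder tracks a running expectation of offers (its learned "norm") and evaluates a new offer by its deviation from that expectation; the update rule, learning rate and offer sequences are illustrative stand-ins, not the researchers' actual model or data.

```python
def learned_norm(offers, learning_rate=0.3, initial_norm=10.0):
    """Track an expected offer via simple error-driven updating.

    initial_norm reflects a background cultural expectation
    (a fair split of $20). Each observed offer nudges the
    expectation by a fraction of the norm prediction error.
    """
    norm = initial_norm
    for offer in offers:
        norm += learning_rate * (offer - norm)
    return norm

low_to_high = [2, 3, 4, 5, 6]       # trained on stingy offers
high_to_low = [16, 15, 14, 13, 12]  # trained on generous offers

offer = 9  # the identical mid-experiment offer to both groups
error_group_one = offer - learned_norm(low_to_high)  # positive: pleasant surprise
error_group_two = offer - learned_norm(high_to_low)  # negative: disappointment
```

The same nine-dollar offer produces a positive prediction error for the group trained on low offers and a negative one for the group trained on high offers, which is the asymmetry the experiment reports.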

The important point about this experiment is that it tells us something about how norms shape our expectations and hence affect the changeability of our moral beliefs and practices. We all become habituated to a certain normative baseline in the course of our own lives. Nevertheless, with the right sequence of environmental stimuli it’s possible that, within certain limits, our norms can shift quite rapidly (Churchland argues that fashion norms are a good example of this).

The other point worth mentioning is how individual personality type can affect our moral conscience. Churchland explains this using the Big Five personality model (openness, conscientiousness, extroversion-introversion, agreeableness and neuroticism), which is commonly used in psychology. She notes that where we fall on the spectrum with respect to these five traits affects how we interact with and respond to moral norms. For example, those who are more extroverted, agreeable and open can be easier to shift from their moral baseline; those who are more conscientious and neurotic can be harder to shift.

She also offers an interesting hypothesis. She argues that there are two extreme moral personality types:


Psychopaths: These are people who appear to lack a moral conscience. They often know what social morality demands of them but they lack any emotional attachment to the social moral rules. They do not experience breaches of those rules as painful violations of the moral order. These people have an essentially amoral experience of the world (though they can act in what we would call “immoral” ways).
 
Scrupulants: These are people that have a rigid and inflexible approach to moral rules (possibly rooted in a desire to minimise chaos and uncertainty). They often follow moral rules to their extremes, sometimes neglecting family, friends and themselves in the process. They are almost too moral in their experience of the world. They are overly attached to moral rules.
 

Identifying these extremes is useful, not only because we sometimes have to deal with psychopaths and scrupulants, but also because we all tend to fall somewhere between these two extremes. Some of us are more attached to existing moral norms than others. Knowing where we all lie on the spectrum is crucial if we are going to understand the dynamics of moral change. (It may also be the case that it is those who lie at the extremes that lead moral revolutions. This is something I suggested in an earlier essay on why we should both hate and love moralists).


5. Conclusion

In summary, moral change is defined by changes in what we value and what we perceive to be right and wrong. The mechanism responsible for this change is, ultimately, the human brain since it is the organ that creates and sustains moral beliefs. But the moral beliefs created and sustained by the human brain are a product of evolution and personal experience.

Evolutionary forces appear to have selected for prosocial, groupish tendencies among humans: most of us want to follow social moral norms and, perhaps more crucially, be perceived to be good moral citizens. That said, most of us are also moral opportunists, open to bending and breaking the rules under the right conditions.

Personal experience shapes the exact moral norms we follow. We learn normative baselines from our communities, and we find deviations from these baselines surprising. We can learn new moral norms, but only under the right circumstances. Furthermore, our susceptibility to moral change is determined, in part, by our personalities. Some people are more rigid and emotionally attached to moral rules; some people are more flexible and open to change.

These are all things to keep in mind when we consider the dynamics of moral revolutions.

Monday, July 20, 2020

77 - Should AI be Explainable?



If an AI system makes a decision, should its reasons for making that decision be explainable to you? In this episode, I chat to Scott Robbins about this issue. Scott is currently completing his PhD in the ethics of artificial intelligence at the Technical University of Delft. He has a B.Sc. in Computer Science from California State University, Chico and an M.Sc. in Ethics of Technology from the University of Twente. He is a founding member of the Foundation for Responsible Robotics and a member of the 4TU Centre for Ethics and Technology. Scott is skeptical of AI as a grand solution to societal problems and argues that AI should be boring.

You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here). 



Show Notes

Topics covered include:
  • Why do people worry about the opacity of AI?
  • What's the difference between explainability and transparency?
  • What's the moral value or function of explainable AI?
  • Must we distinguish between the ethical value of an explanation and its epistemic value?
  • Why is it so technically difficult to make AI explainable?
  • Will we ever have a technical solution to the explanation problem?
  • Why does Scott think there is a Catch-22 involved in insisting on explainable AI?
  • When should we insist on explanations and when are they unnecessary?
  • Should we insist on using boring AI?
 

Relevant Links