Saturday, May 25, 2019

Discrimination and Fairness in the Design of Robots


[Note: This is (roughly) the text of a talk I delivered at the bias-sensitization workshop at the IEEE International Conference on Robotics and Automation in Montreal, Canada on the 24th May 2019. The workshop was organised by Martim Brandão and Masoumeh (Iran) Mansouri. My thanks to both for inviting me to participate - more details here]

I never quite know how to pitch talks of this kind. My tendency is to work with the assumption that everyone is pretty clever, but they may not know anything about what I am talking about. I do this from painful personal experience: I've sat through many talks at conferences like this where I got frustrated because the speaker assumed I knew more than I did. I'm sorry if this comes across as patronising to some of you, but I'm hoping it will make the talk more useful to more of you.

So, anyway, I am going to talk about discrimination and robotics. More specifically, I am going to talk about the philosophical and legal aspects of discrimination and how they might have some bearing on the design of robots.

Before I get started I want to explain how I approach this problem. I am neither a roboticist nor a computer scientist; I am a philosopher and ethicist. I believe that there are three perspectives from which one can approach the problem of discrimination and fairness in the design and operation of robots. These are illustrated in the diagram below.



The diagram, as you can see, illustrates three kinds of relationships that humans can have with robots. The first, which we can call the 'design relationship', concerns the relationship that the original designers have with the robot they create. Discrimination becomes a worry here because it might leak into that design process and have some effect on how the robot looks and operates. The second relationship, which we can call the 'decision relationship', concerns the decisions the robot makes with respect to its human users. Discrimination becomes a worry here because the robot might express discriminatory attitudes toward those users or unfairly treat users from different groups. The third relationship, which we can call the 'reaction relationship', concerns the reactions that human users have to the behaviour of the robot. Discrimination becomes a worry here if the humans discriminate against the robot or if they learn and normalise such attitudes from their interactions with the robot and carry them over to humans.

A comprehensive analysis of the problem of discrimination in the design of robots would have to factor in all three of these relationships. I do not have time for a comprehensive analysis in today's talk so, instead, I'm going to focus on the second relationship only. That said, unless the robot is itself a fully autonomous agent, focusing on the second relationship inevitably entails focusing on the first relationship too, since the robot's decision algorithms will be created by a team of designers. There is, however, a difference between them. Whereas the first relationship concerns the look, appearance, and general behaviour of the robot, the second is concerned specifically with its decision-making practices and how they might affect human users.

With that caveat out of the way, I want to do three things in the remainder of this talk:


  • (a) I want to give a quick overview of how philosophers think about the concepts of fairness and discrimination.
  • (b) I want to look at the debate about algorithmic discrimination/fairness and consider some of the key lessons that have been learned in that debate.
  • (c) I want to make two specific arguments about how we should think about the problem of discrimination in social robotics.


1. A Brief Primer on Fairness and Discrimination
Let's start with the philosophical overview and consider the nature of fairness. Fairness is a property of how social goods (e.g. money, food, jobs, opportunities) get distributed among the members of a population. A common intuition is that a fair distribution is an equal one. But what does that really mean?

To think about this more clearly, it will help if we have a simple model scenario in mind. Consider the image below. It represents a highly stylised social system. At the bottom of the image we have a population of individuals. These individuals are divided into three social groups (you can think of these as 'races' or identities, if you like). In the middle of the image we have what I am somewhat awkwardly calling 'outcome makers'. These are properties attaching to the members of the population that make them more likely to achieve certain socially desirable outcomes. These properties can take many forms. Some might be innate characteristics of the individuals (e.g. race, sex) and some might be more contingent or acquired properties (e.g. income, a good education, good health and nutrition). All that matters is that they make it more likely that the individuals will achieve the relevant outcomes. As you can see from the image, different individuals have different outcome makers and they are not evenly distributed across the population. Finally, at the top of the image, we have the outcomes themselves, i.e. the 'buckets' where the individuals in the population end up. For illustrative purposes, I've imagined that the outcomes are jobs, but they could be anything at all (e.g. income, number of friends, access to credit and housing, number of intimate relations, whatever it is you care about). As you can see from the image, different proportions of the three main social groups have ended up in different outcome buckets. In fact, there is something oddly skewed about the outcomes, since all the 'blue' members of the population end up in one bucket.



With this simple model in mind, we can explain more clearly some of the different ways in which philosophers think about fairness and equality.

Equality of Outcome: We can start with the concept of "equality of outcome", which is widely touted as a desirable goal for social policies. Following our model, this could mean one of two things. It could mean, in the extreme case, that all members of the population, irrespective of their social group, share the same outcome (in this case, they all have the same job but it could also mean they all have the same number of friends or income or whatever). This understanding is extreme and counterintuitive, at least in the case of jobs -- why would you want to live in a society in which everyone had the exact same job? -- so an alternative interpretation, which is more plausible, is that equality of outcome arises when all social groups are equally or proportionally represented in the different social outcomes. This corresponds to what some people call a principle of fair representation.
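Since this is a robotics audience, it may help to see the 'fair representation' idea in code. Here is a minimal sketch, entirely my own toy illustration (the group labels, job buckets, and numbers are made up to echo the stylised model above), of what checking for proportional representation across outcome buckets could look like:

```python
# A toy illustration (my own; group labels, buckets, and numbers are made up
# to echo the stylised model above) of the 'fair representation' reading of
# equality of outcome: each group should appear in each outcome bucket in
# roughly the same proportion as it appears in the population as a whole.

from collections import Counter

population = ["red"] * 50 + ["green"] * 30 + ["blue"] * 20

# Hypothetical assignment of individuals (identified by group) to job buckets.
outcomes = {
    "engineer": ["red"] * 20 + ["green"] * 5,
    "teacher": ["red"] * 30 + ["green"] * 25,
    "cleaner": ["blue"] * 20,  # the oddly skewed bucket from the example
}

pop_share = {g: n / len(population) for g, n in Counter(population).items()}

for job, members in outcomes.items():
    counts = Counter(members)
    bucket_share = {g: round(counts[g] / len(members), 2) for g in pop_share}
    print(job, "bucket share:", bucket_share,
          "| population share:", {g: round(s, 2) for g, s in pop_share.items()})
```

The oddly skewed 'blue' bucket from the model shows up immediately as a large gap between its bucket share and its population share.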

Equality of Opportunity: Even though equality of outcome is a popular idea, it is also widely criticised. People worry about a society that forces people into different outcomes in the interests of fairness. So, instead of achieving equality of outcome, they think we should focus on equality of opportunity. This is a function of how the 'outcome makers' get distributed among the population. In our model, equality of opportunity would arise when each member of the population, irrespective of social grouping, is given a mix of outcome makers that enables them to achieve any of the different possible outcomes. This doesn't mean that they all have the exact same mix of outcome makers; it just means that whatever mix they have is such that they each have the same opportunity of achieving the different possible outcomes (the playing field has been levelled between them).

Theories of equality of opportunity are often complicated by the fact that different philosophers take different attitudes toward different outcome makers. A common assumption is that you cannot and should not equalise all outcome makers. For example, you cannot make all people have the same level of physical strength or general intelligence. Nor should you force people to acquire abilities that they don't really want (e.g. forcing everyone to take high-level quantum physics). You have to respect people's autonomy and responsibility for choosing their own path in life. This means that when thinking about equality of opportunity, you should equalise with respect to certain kinds of outcome maker, but not all.

[A brief aside: you may notice from this discussion that I don't think much of the distinction between equality of outcome and equality of opportunity. My view is that opportunities are really just outcomes of a particular kind: they are outcomes that are steps on the road to other outcomes. But it would take a bit longer to justify this position, and the distinction between equality of outcome and equality of opportunity is a popular one, so I am working with it.]

This brings us to our second key topic -- discrimination. To understand how philosophers think about discrimination, we just need to add some details to our model. First, we need to think about how the members of the population access the different possible outcomes. So far I've assumed that this is just a function of the outcome makers they possess, but that's not very realistic. In any real society, there will probably be some set of actors or institutions that decide who gets to access the different outcomes. We can call these actors or institutions 'the gatekeepers'. They act as screeners and sorters, taking members of the population and assigning them to different outcomes. To make it more concrete, and to continue with our example, we can imagine people interviewing candidates for different jobs and deciding who should be assigned to which job. Discrimination is a phenomenon that arises from this gatekeeping function. More precisely, it arises when gatekeepers rely on criteria that we deem to be unjust or unfair in screening and sorting people into different outcomes.

To understand this problem more clearly, we need to add a second complication to the model. This complication concerns the properties of the members of the population who get sorted into the different outcomes. Each member of the population will be a bundle of different characteristics and properties. Some of these characteristics will be 'protected' (e.g. race, age, religion, gender) and others will not be (e.g. income, educational level, IQ). The core idea in discrimination theory and practice is that gatekeepers should not use protected characteristics to sort people into different outcomes. They should only rely on unprotected characteristics.



Actually, it's a bit more complicated than that and we need to introduce several conceptual distinctions in order to think clearly about discrimination. They are:

Direct Discrimination: This arises when gatekeepers explicitly use protected characteristics to guide their decision-making, e.g. an interviewer explicitly refuses to hire women for a job.

Indirect Discrimination: This arises when, even though gatekeepers do not explicitly use protected characteristics to guide their decision-making, they rely on other characteristics (proxies) that have the effect of sorting people according to their protected characteristics, e.g. an interviewer refuses to hire anyone with more than one career break (which could be a problem if women are more likely than men to have taken career breaks).

Individual Discrimination: This arises when individual gatekeepers act in discriminatory ways (be they direct or indirect).

Structural Discrimination: This arises when social institutions, as opposed to individual gatekeepers, work in such a way that members of some social groups are systematically discriminated against when compared to others. Structural discrimination could arise with or without individual discrimination.

Positive Discrimination: This arises when gatekeepers are incentivised to use protected characteristics in decision-making in order to achieve a fairer representation of different social groups across the possible outcomes. This is usually done to correct for historic unfairness in social sorting (e.g. affirmative action hiring policies).

Impartiality: This is when gatekeepers show no favourability or bias toward certain social groups or individuals in their decision-making. This is often the long-term aim of anti-discrimination policies.

I appreciate that this is a lot of conceptual distinctions, but they are all important when it comes to understanding the debate about fairness and discrimination.

You might ask: "How do we prove that discrimination has occurred?" That is a good question, and it is often difficult. Sometimes we have clear and unambiguous evidence of discriminatory intent, but more often we simply see that different social groups have been sorted disproportionately into different outcomes and infer from this that some discrimination might have occurred; a more thorough investigation might then confirm the suspicion (a rough sketch of the kind of statistical check involved is given below). Another question you might ask is: "How do we decide what counts as a protected characteristic?" This is also a good question, and there is no single answer. Different moral considerations apply in different cases. Sometimes we designate something a protected characteristic because we believe it has no actual bearing on whether someone would be a good fit for a particular outcome, but people mistakenly think that it does, and we want to stop this from influencing their decision-making; other times it is because we don't want to punish people for characteristics that are outside of their control; sometimes it's a combination of factors. There is an interesting phenomenon nowadays of something we might call 'protected characteristic creep': the tendency to think that more and more characteristics deserve to be protected against discriminatory decision-making, which often has the net effect of making it more difficult to avoid discrimination.
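For the computationally minded, here is a rough sketch of the kind of statistical check that typically prompts a closer investigation. It is my own illustration: the figures are invented, and the 80% ('four-fifths') threshold is a common heuristic from US employment-discrimination practice rather than anything discussed in the talk itself:

```python
# A rough sketch (my own illustration, with invented figures) of the check
# that prompts a discrimination inquiry: compare the rates at which two
# groups are sorted into a desirable outcome. The 80% ('four-fifths')
# threshold is a common heuristic in US employment-discrimination practice.

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

# Hypothetical hiring figures for two social groups.
rate_a = selection_rate(selected=45, applicants=100)
rate_b = selection_rate(selected=20, applicants=100)

ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"selection-rate ratio = {ratio:.2f}")

if ratio < 0.8:
    print("Disparity is large enough to warrant a closer look for "
          "direct or indirect discrimination.")
```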


2. Lessons from the Algorithmic Fairness Debate
With that overview of the philosophy of fairness and discrimination out of the way, let's consider the implications for robotics. And let's start by considering the lessons we can learn from the algorithmic fairness debate. As some of you will know, algorithmic decision processes have been used for some time in the public and private sector, for example, in credit scoring, tax auditing, and recidivism risk scoring. This usage has been growing in recent years due to advances in machine learning and big data. This has generated an extensive debate about algorithmic fairness and discrimination. Looking to that debate is an obvious starting point for anyone who cares about fairness and discrimination in robotics. After all, the decision-making algorithms used by robots are likely to be based on the same underlying technology.

Some of the lessons learned from this debate are important but relatively unsurprising. For example, it is now very clear that decision algorithms can work in biased and discriminatory ways. This may be because they were designed to rely on discriminatory criteria (directly or indirectly) when making decisions, or it may be because they were trained on biased or skewed datasets. Trying to recognise and correct for this problem is an important practical concern. But, as I say, it is relatively unsurprising. I want to focus on two lessons from the algorithmic fairness debate that I think are more surprising and still practically important.

The first lesson is that, except in very rare circumstances, there is no way to design an algorithmic decision process that is perfectly fair and non-discriminatory.

This is a lesson that was first learned by investigating risk-scoring algorithms in the criminal justice system. Some of you will be familiar with this story already, so please forgive me for sharing it again. The story is this. For some years, an algorithm known as 'COMPAS' has been used in the US to rate how likely it is that someone who has been prosecuted for a criminal offence will commit another offence in the future. This rating can then be used to guide decisions regarding the release (on parole) of this person. The COMPAS algorithm is somewhat complex in how it works, but for present purposes, we can say it works like this: a risk score is assigned to a criminal defendant and that score is then used to sort defendants into two predictive buckets: a 'high risk' reoffender bucket or a 'low risk' reoffender bucket.



A number of years back a group of data journalists based at ProPublica conducted an investigation into how this algorithm worked. They discovered something disturbing. They found that the COMPAS algorithm was more likely to give black defendants a false positive high risk score and more likely to give white defendants a false negative low risk score. The exact figures are given in the table below. Put another way, the COMPAS algorithm tended to rate black defendants as being higher risk than they actually were and white defendants as being lower risk than they actually were. This was all despite the fact that the algorithm did not explicitly use race as a criterion in its risk scores. This seems like a textbook case of indirect discrimination in action: we infer from the lack of fair representation in outcome classes that the algorithm must be relying on proxies that indirectly discriminate against members of the black population.



Needless to say, the makers of the COMPAS algorithm were not happy about this finding. They defended their algorithm, arguing that it was in fact fair and non-discriminatory because it was well calibrated. In other words, they argued that it was equally accurate in scoring defendants, irrespective of their race: if it said a black defendant was high risk, it was right about 60% of the time, and if it said a white defendant was high risk, it was also right about 60% of the time. It turns out this is true; you can see it for yourself in the figures from the table. The reason it doesn't immediately look that way is that there are many more black defendants than white defendants in the dataset, and a higher proportion of them reoffend -- an unfortunate feature of the US criminal justice system that is not caused by the algorithm but is, rather, something the algorithm has to work with.
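To see how calibration and error rates can come apart, here is a small worked example in code. The confusion-matrix numbers are invented for illustration (they are not the real COMPAS or ProPublica figures), but they reproduce the qualitative pattern once the base rates differ:

```python
# A toy illustration with made-up numbers (NOT the real COMPAS/ProPublica
# figures) of how a risk score can be equally 'accurate' for two groups --
# same precision, or positive predictive value -- while producing very
# different false positive and false negative rates once base rates differ.

def rates(tp, fp, fn, tn):
    """Return (precision, false positive rate, false negative rate)."""
    precision = tp / (tp + fp)   # how often a 'high risk' label is correct
    fpr = fp / (fp + tn)         # non-reoffenders wrongly labelled high risk
    fnr = fn / (fn + tp)         # reoffenders wrongly labelled low risk
    return precision, fpr, fnr

# Hypothetical confusion matrices: group A has a higher base rate of
# reoffending (600 of 1000 reoffend) than group B (300 of 1000).
groups = {
    "A": dict(tp=300, fp=200, fn=300, tn=200),
    "B": dict(tp=120, fp=80, fn=180, tn=620),
}

for name, cm in groups.items():
    precision, fpr, fnr = rates(**cm)
    print(f"group {name}: precision={precision:.2f}, FPR={fpr:.2f}, FNR={fnr:.2f}")

# group A: precision=0.60, FPR=0.50, FNR=0.50
# group B: precision=0.60, FPR=0.11, FNR=0.60
```

Both groups get a precision of 0.60, so the score looks 'equally accurate' in the sense the makers of COMPAS had in mind, yet group A's false positive rate is 0.50 against 0.11 for group B, and its false negative rate is lower -- exactly the shape of the disagreement between ProPublica and the makers of COMPAS.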

So what is going on here? Is the algorithm fair or not? Several groups of mathematicians analysed this case and showed that the main problem here is that the makers of COMPAS and the data journalists were working with different conceptions of fairness and that these conceptions were fundamentally incompatible. This is something that can be formally proved. The clearest articulation of this proof can be found in a paper by Jon Kleinberg, Sendhil Mullainathan and Manish Raghavan, which is like a version of Arrow's impossibility theorem for fairness.

The details are important and often glossed over. Kleinberg et al argued that there are three criteria you might want a fair decision procedure to satisfy: (i) you might want it to be well-calibrated (i.e. equally accurate in its scoring irrespective of social group); (ii) you might want it to treat both groups equally when it comes to false positives (more technically, you might want both groups to have the same average score within the negative class, i.e. among those who do not in fact reoffend); and (iii) you might want it to treat both groups equally when it comes to false negatives (more technically, you might want both groups to have the same average score within the positive class, i.e. among those who do reoffend). They then proved that, except in two unusual cases, it is impossible to satisfy all three criteria. The two unusual cases are when the algorithm is a 'perfect deterministic predictor' (i.e. it always gets things right) or, alternatively, when the base rates for the relevant populations are the same (e.g. black and white defendants reoffend at exactly the same rate). Since no algorithmic decision procedure is a perfect predictor, and since our world is full of base rate inequalities, this means that no plausible real-world use of a predictive algorithm is likely to be perfectly fair and non-discriminatory. What's more, this is true in general, not just for cases involving risk scores for prisoners, even though this was the initial test case.
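For those who want something closer to the formal statement, here is a rough rendering of the three conditions in my own notation (a sketch, not the authors' exact formulation):

```latex
% Rough rendering of the three conditions, in my own notation (a sketch, not
% the authors' exact formulation). s is the risk score, y = 1 means the person
% actually reoffends, and G_1, G_2 are the two social groups.
\begin{align*}
  &\text{(i) Calibration within groups:} &
    \Pr[\,y = 1 \mid s = v,\, G_t\,] &= v
    \quad \text{for all scores } v \text{ and } t \in \{1,2\} \\
  &\text{(ii) Balance for the negative class:} &
    \mathbb{E}[\,s \mid y = 0,\, G_1\,] &= \mathbb{E}[\,s \mid y = 0,\, G_2\,] \\
  &\text{(iii) Balance for the positive class:} &
    \mathbb{E}[\,s \mid y = 1,\, G_1\,] &= \mathbb{E}[\,s \mid y = 1,\, G_2\,]
\end{align*}
% Condition (ii) corresponds to the false positive worry (non-reoffenders
% should not receive systematically higher scores in one group) and (iii) to
% the false negative worry. Kleinberg et al. show (i)-(iii) can hold jointly
% only if prediction is perfect or the base rates Pr[y = 1 | G_t] are equal.
```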

This result has significant practical implications for designers of decision algorithms. It means they face some hard choices. They can have a system that is well-calibrated or one that achieves a fair representation, but not both. Plausibly, you might want different things in different decision contexts. For example, when deciding who would make a good doctor, you might want an algorithm that is well-calibrated, because you want some confidence that the people who end up becoming doctors are actually good at what they do (and you want to stop people from assuming otherwise for irrelevant reasons). Contrariwise, when deciding who would make a good politician and should be put forward for election, you might want a system that achieves a balanced representation of the different social groups. This is to say nothing of the further complexities that arise from the fact that fairness is just one normative goal of social policy: there are other goals that can compete with it and crowd it out, e.g. security and well-being.

That's the first lesson to be drawn from the algorithmic fairness debate. What about the second? This lesson is that, although there is a lot of concern about discrimination in decision algorithms, there is good reason to think that algorithmic decision procedures can be less discriminatory than traditional, human-led decision procedures. There are two reasons for this. The first is that we are prone to status quo bias when it comes to assessing the normative implications of any novel technology. We have a tendency to overemphasise the negative features of any new technology while neglecting the fact that the current status quo is even worse. In this regard, I don't think it is controversial to say that human-led decision-making systems are prone to bias and discrimination. This is a prevalent and systematic feature of them. This is in part because some people engage in direct discrimination, but also, more significantly, because many people engage in indirect discrimination of which they are completely unaware. We are prone to all manner of subconscious and automatic biases. We can work to counteract these biases, but only if we are aware of them and their effects on outcomes. We often aren't. This leads to the second reason for thinking that algorithmic decision procedures might be less discriminatory than human-led ones. When designing algorithmic decision procedures we have to be explicit about the tradeoffs and compromises we are making, the datasets we are using, and the outcomes we are trying to achieve. This gives us greater awareness and control of their discriminatory properties. This is the case even though it is also true that certain features of the algorithmic decision process will be relatively opaque to humans [again, Kleinberg and his colleagues have a longish paper setting out a more technical argument in favour of this view - I recommend reading it].


3. Why We Might Want Robots to be Discriminatory
But what does all this mean for robotics? I want to close this talk by making two very brief arguments.

The first argument is that the lessons from the algorithmic fairness debate might mean a lot for robotics. The issues raised in the algorithmic fairness debate are particularly pertinent when an algorithm is used for general social decision-making, in other words, when the algorithm is expected to make decisions that might affect all members of the general population. This is the expectation for algorithms used to make decisions about credit risk, tax auditing and recidivism risk. To the extent that robots are designed to perform similar gatekeeping functions, they should be subject to similar normative demands and will therefore face similar practical constraints (i.e. they will not be able to satisfy all possible fairness criteria at the same time). So, for example, imagine a security screening robot at an airport. That robot should be subject to the same demands of fairness and non-discrimination as a human screener (tempered by other policy aims such as security and well-being).

The second argument is that, although this is true, it might not be the normal case, particularly for social robots. We could, of course, use social robots for general social gatekeeping, but I suspect the demand for this is going to be relatively limited. A lot of this admittedly hinges on how you define a 'social robot', but I see social robots as embodied artificial agents, usually intended to participate in interpersonal social interactions. If you want a general social gatekeeper, then it's not clear why you would want (or need) to embody it in a social robot. This wouldn't be efficient or cost-effective. The embodied form is really only called for when you want a more meaningful, personal interaction between the human and the artefact. This might be the case for personal care robots or personal assistant robots. In those cases of meaningful, personalised interaction, the normative constraints of fairness and non-discrimination may not apply in the same way. This is not to suggest that we want robots to be racist or sexist -- far from it -- but we might not want them to be impartial either.

Think about it like this. Would you want your brother or best friend to be perfectly fair and non-discriminatory in how they interacted with you? No, you would want them to have some bias in your favour. You would expect them to abide by a duty of loyalty (or partiality) to your case. If they didn't, you would quickly question the value of your relationship with them and lose trust in them. If social robots are primarily intended to fulfil similar relationship functions (i.e. to provide companionship, care and so on) then we would probably expect the same from them.

This does not mean, incidentally, that I think robots should be used to fulfil similar relationship functions (i.e. be our friends and companions). I have my own views on this topic, but it is a longer debate, one that I don't have time for now. My only point here is that if they do perform such functions, it is plausible to argue that they should be bound by a duty of partiality or, to put it another way, a duty of positive discrimination toward their primary user.

So, in conclusion, although the algorithmic fairness debate does have some lessons for the robotic fairness debate, there may be important differences between them, particularly in the case of social robots.

Thank you for your attention.



