Tal Zarsky’s work has featured on this blog before. He is an expert in the legal aspects of big data and algorithmic decision-making. He recently published a paper entitled “The Trouble with Algorithmic Decision-Making” in which he tries to identify, categorise and respond to some of the leading objections to the use of algorithmic decision-making processes. This is a topic that interests me too, so I was eager to see what he had to say.
This post is my attempt to summarise and comment on some of the key themes from Zarsky’s paper. Its primary aim is to construct a diagram which will categorise the main objections found within Zarsky’s paper. Its secondary aim is to consider Zarsky’s responses to each of these objections. This will not be an exhaustive treatment of the core issues; it will be a high-level summary only. In this respect, it might be useful to people who are new to this debate.
1. What is interesting about algorithmic decision-making?
In one sense, algorithms are a mundane phenomenon: they are simply sets of instructions for taking an input and producing an output. There is probably some trivial sense in which all decision-making is algorithmic. After all, whenever you make a decision — say a decision about what food to buy — you are taking some set of inputs — e.g. information about your level of hunger, financial resources, food preferences and so on — and using them to produce an output — i.e. a decision about what you will actually buy. In most cases, the ruleset that you use to produce the output is implicit, but you could probably reconstruct it if you put enough thought into it. (Note: some people in the philosophy of mind might dispute the claim that all decision-making is algorithmic, but I won’t engage with that point of view in this post).
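To make that concrete, here is a deliberately trivial sketch (in Python) of what such a reconstructed ruleset might look like. The inputs, thresholds and function name are invented purely for illustration; the point is simply that an implicit decision rule can be written down as an explicit mapping from inputs to an output.

# Illustrative only: an everyday "what should I buy for dinner?" decision
# reconstructed as an explicit ruleset. Every input and threshold here is
# invented for the example.

def dinner_decision(hunger_level: int, budget: float, prefers_takeaway: bool) -> str:
    """Map a few inputs (hunger, money, preferences) to a purchasing decision."""
    if hunger_level >= 8 and budget >= 20 and prefers_takeaway:
        return "order takeaway"
    if budget < 5:
        return "cook whatever is already in the cupboard"
    return "buy groceries and cook"

print(dinner_decision(hunger_level=9, budget=30.0, prefers_takeaway=True))
# -> "order takeaway"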
Given this mundanity and triviality, one may wonder why anyone is interested in algorithmic decision-making at all. The answer, of course, lies in the technology used in the more explicit forms of algorithmic decision-making that now govern our lives. With the rise of surveillance and big data, there are increasing opportunities for computer-coded algorithms to take advantage of large datasets to produce (potentially) socially useful outputs. Recognition of this fact has led companies and governments to incorporate algorithmic decision-making into their pre-existing decision-making processes. There are so many examples of this nowadays that it is hard to pick just one.
The one Zarsky settles upon in his article is the use of credit-scoring algorithms by banks and other financial services providers. These algorithms use financial (and other) data to construct credit scores. These scores are supposed to tell the banks the likely credit risk of any particular customer. The most popular of these systems in the US is the FICO rating system, which relies on a proprietary (i.e. legally protected) algorithm and can be decisive in determining whether or not a person can access credit. Similar scoring systems are used in other countries, many of them also relying on the FICO system (at least in part).
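To give a flavour of what such a system involves, here is a toy, hypothetical credit-scoring sketch in Python. FICO's actual algorithm is proprietary and not public; the features, weights and scaling below are invented purely to illustrate the general shape of these systems, namely a handful of weighted features combined into a single risk score.

def toy_credit_score(payment_history, utilisation, account_age_years, recent_enquiries):
    """Combine a few hypothetical features into a score between 300 and 850.

    Each input is first normalised to a 0-1 'goodness' value, then weighted.
    The weights are made up for illustration and bear no relation to FICO's.
    """
    features = [
        (payment_history, 0.35),                          # fraction of on-time payments
        (1.0 - min(utilisation, 1.0), 0.30),              # lower credit utilisation is better
        (min(account_age_years / 25.0, 1.0), 0.20),       # longer credit history is better
        (max(0.0, 1.0 - recent_enquiries / 10.0), 0.15),  # fewer recent enquiries is better
    ]
    weighted = sum(value * weight for value, weight in features)
    return int(300 + weighted * 550)

print(toy_credit_score(payment_history=0.98, utilisation=0.2,
                       account_age_years=8, recent_enquiries=1))  # roughly 730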
One can make a good case for the use of such algorithms: they are quick, cost-effective ways to take advantage of large swathes of information, which humans have limited capacity to knit together in a useful way. Nevertheless, many people are disturbed by these systems and think they are deeply problematic. Zarsky suggests that the objections fall into two main categories (he admits that these are not exhaustive, but thinks they address the main areas of concern):
Efficiency-Based Objections: These objections target the claims often made on behalf of these systems by their creators, namely that they are more effective and accurate than human decision-makers would be.
Fairness-Based Objections: These objections argue that algorithmic decision-making processes are unfair in one or more respects. The unfairness here can be substantive (i.e. concerned with the differential impact of the process on different groups of people) or procedural (i.e. concerned with the way in which the process engages with the people who are ultimately affected).
Of course, these kinds of objections can be levelled against any decision-making system. This raises the question: what is so special about algorithmic decision-making? The answer to that might be “nothing”, but there are two properties of algorithmic decision-making that are alleged to make it unique:
Automation: Algorithmic decisions can sometimes be made with no or limited human input and oversight.
Opacity: Algorithmic decisions can lack the transparency we desire, either because the algorithms are protected by secrecy laws or because of their inherent complexity.
One of Zarsky’s goals is to see whether automation and opacity increase the potency of the efficiency and fairness-based objections, and whether transparency can help to address some of the concerns.
Acknowledging all this allows us to construct a diagram of the potential objections to algorithmic decision-making. As you can see below, there are two main branches (efficiency and fairness) which then sub-divide into a number of more specific objections. We’ll work our way through the various branches over the remainder of this post.
2. Efficiency-Based Objections
We start with efficiency-based objections. These are both the easiest to understand and the easiest to analyse. An efficiency-based objection holds that an algorithmic decision-making process is problematic due to inaccuracy. In the case of credit-scoring, the argument would be that the credit-scoring system does not provide an accurate representation of the likely credit risk of the particular customer. There is some evidence that this is true. The bond ratings issued by agencies like Fitch, Moody’s and Standard & Poor’s prior to the 2008 financial crisis were infamously inaccurate. There is also evidence that some credit-scoring systems draw faulty inferences from certain types of behaviour. I commented on one example — treating a customer’s seeking more information about their mortgage as an indicator of credit risk rather than prudence — in a previous post.
The particular examples do not matter so much here. What matters is the arguments people adduce in support of the efficiency-based objection. Zarsky suggests that there are two main arguments:
Defective Dataset: The actual dataset upon which the algorithms rely is defective in some respect, i.e. it contains inaccurate or misleading information.
Predictive Problems: The systems try to predict future human behaviour but there are often serious practical hurdles to accurate predictions. This can manifest as a tendency to draw misleading conclusions from the data.
Are these criticisms plausible? And how are they linked to the automated and opaque nature of the decision-making systems?
Zarsky suggests that these criticisms are relatively weak. There are three reasons for this. First, the problems with inaccurate data may be corrected over time or at an aggregate level. In other words, misleading information from one source could be cancelled out or swamped by accurate information from other sources, and the overall prediction could still be (probabilistically) accurate. That said, Zarsky acknowledges the need for ongoing research into this matter. Theoretical possibilities and anecdotal evidence will not be sufficient to either prove or disprove the accuracy of an algorithm.
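As a rough illustration of the aggregation point (not something drawn from Zarsky’s paper), consider a small simulation in which a customer’s true risk is reported by many noisy data sources, a few of which are badly misleading. The specific numbers are invented; the point is only that the pooled estimate can remain close to the truth even though many individual inputs are not.

import random

random.seed(42)

true_risk = 0.30                                                  # the customer's actual default risk
reports = [true_risk + random.gauss(0, 0.15) for _ in range(50)]  # 50 noisy data sources
reports[:3] = [0.90, 0.85, 0.05]                                  # a handful of badly misleading sources

pooled_estimate = sum(reports) / len(reports)
print(f"pooled estimate: {pooled_estimate:.2f} (true risk: {true_risk})")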
Second, even if these systems are inaccurate in certain respects, their inaccuracy needs to be compared with that of alternative decision-making systems. For example, it could be that systems which assess credit risk based entirely on the subjective judgment of an individual bank employee are even less accurate. In that case, the inaccuracies of the algorithm might be acceptable. There is a good methodological point here: whenever you assess a policy change you should do so comparatively, i.e. by comparing the policy with the status quo and some reasonable alternatives. When you do so, you might find that it is less objectionable than it first seems.
Third, transparency could be leveraged to improve the accuracy of such systems. For instance, people could be given the legal right to investigate and challenge the information used by the algorithm and, potentially, the source code of the algorithm itself. But Zarsky is not entirely convinced about the success of such transparency initiatives. One reason for this is that many people already have the right to scrutinise the information on their credit scores but don’t exercise those rights. Another is that making these systems more transparent may enable people to ‘game the system’. This is something I discussed in much greater detail in a previous post about Zarsky’s work.
3. Unfair Wealth Transfer Objections
Let’s move on now to fairness-related objections. These are more complex. They break down into three main subgroups. The first of these subgroups is concerned with the impact of algorithmic decision-making on the distribution of wealth (where ‘wealth’ is defined broadly to include social goods and opportunities of all kinds). The objection is based on the belief that algorithmic decision-making systems could result in wealth being unfairly distributed away from those who deserve it to those who really don’t. Zarsky notes three distinct ways in which this could happen:
From Consumers to Firms: Corporate enterprises could take unfair advantage of consumers, resulting in a wealth transfer from the consumers to the enterprises. For instance, a bank could use a credit score as the basis for manufacturing a sophisticated financial product that seems attractive to an at-risk customer but actually favours the bank in the long run. This could result in undeserved hardship to the customer.
Between Consumers: Certain consumers could take unfair advantage of these systems, resulting in a wealth transfer in their favour, to the detriment of others. So, for instance, in the case of credit-scoring and other financial algorithms, wealthy people, with teams of advisers, might be in a better position to game these systems to their advantage. This could result in further inequalities of income and wealth.
Away from Protected Groups: The algorithms could work in such a way that they have a disparate impact on groups with certain characteristics (e.g. gender, race, ethnicity, religion, sexual orientation). In most countries, these groups are explicitly protected from discrimination by law. The concern is that algorithmic decision-making could unfairly target them due to implicit or explicit biases affecting the coding process, or due to some other unknown factor.
How serious are these concerns and what role do automation and opacity have to play? Let’s take them one by one.
In relation to transfers from consumers to firms, there is no doubt that businesses may be incentivised to take advantage of less fortunate customers. The whole sub-prime mortgage crisis is a classic example. The temptation is there irrespective of automation, but there may be ways in which the complexity and opacity of algorithmic systems make it more alluring. Again, the sub-prime mortgage crisis provides some powerful lessons. The complex methods used for weighting and calculating the risk attached to mortgage bonds fuelled the speculation that led to the eventual crash. Transparency may reduce the risk, but it is probably insufficient by itself. Regulation and strict scrutiny of the systems used by private (and public) bodies may be needed.
In relation to transfers between consumers, this could also certainly happen. We are witnessing a significant recrudescence of wealth inequality. If people like Thomas Piketty and Anthony Atkinson are to be believed — and I believe they are — then we are now returning to levels of inequality not seen since the late 19th century. It seems plausible that wealthy elites will be well-positioned to take advantage of complex and opaque algorithmic decision-making systems, if for no other reason than that they can expend considerable resources trying to get to grips with them.
Transparency could help by levelling the playing field to some extent. But Zarsky is not convinced. Transparency could actually heighten the advantage of the wealthy elites. One reason why people think budgetary decision-making should be conducted in secret, with all decisions simply announced at one time, is that they worry about elite lobbying groups taking advantage of transparency to push their agendas. Furthermore, Zarsky thinks that the automated and inhuman nature of algorithmic decision-making could actually help to resolve these inequities. Current elites are propped up by a system of implicit and explicit biases among human decision-makers. Removing the human element could remove these biases and result in greater equality.
Finally, when it comes to the impact on protected groups, we need to bear in mind the three different ways in which this could happen: (i) because protected characteristics (like race) are explicitly used by the algorithms when making unfair allocations; (ii) because the implicit biases of the designers result in a system that goes against the interests of the protected group; and (iii) because, for some unknown reason, the algorithm has a disparate impact on the protected group when put into practice. If (i) is happening, it should simply be banned: the whole ethos of anti-discrimination law is that you cannot use such characteristics when making allocative decisions. If (ii) is happening, then greater transparency and scrutiny of the coding process is required. And if (iii) is happening, transparency is still necessary but needs to be combined with careful empirical studies of how the systems work. Furthermore, all of this must be balanced against the possibility of using algorithmic decision-making as a way to avoid human biases that disfavour the protected groups.
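Where (iii) is suspected, the ‘careful empirical studies’ mentioned above can begin with something as simple as comparing outcome rates across groups. Below is a minimal, hypothetical sketch of such an audit in Python; the sample data, group labels and the 80% threshold (a rule of thumb borrowed from US employment-discrimination practice) are illustrative assumptions, not anything proposed in Zarsky’s paper.

from collections import defaultdict

def approval_rates(decisions):
    """decisions: a list of (group_label, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def flag_disparate_impact(decisions, threshold=0.8):
    """Flag any group whose approval rate falls below `threshold` times the best group's rate."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical decisions from a scoring system: group A is approved 80% of
# the time, group B only 55% of the time.
sample = ([("A", True)] * 80 + [("A", False)] * 20 +
          [("B", True)] * 55 + [("B", False)] * 45)

print(approval_rates(sample))         # {'A': 0.8, 'B': 0.55}
print(flag_disparate_impact(sample))  # {'A': False, 'B': True}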
4. Arbitrariness and Autonomy-Based Objections
The second fairness-based objection has to do with arbitrariness. The concern is that an algorithmic decision could affect a person (negatively) for a seemingly arbitrary reason, i.e. a reason that is unconnected to any factor that should legitimately lead to them being singled out by the algorithm. Take two seemingly identical people, one of whom receives a positive credit score while the other receives a negative one. As best we can tell, there is nothing in their behaviour or personal data to explain why one should be favoured over the other, but this is what the algorithm does. In such a scenario, the decision would be arbitrary and hence unfair.
You might think this is really an efficiency-based objection, but there is a subtle difference. In the scenario being imagined, the algorithmic decision-making process as a whole could be quite efficient. In other words, in the aggregate, it might be that the process works well and is effective in distinguishing high-risk from low-risk customers. It is just that in this particular case it seems to have singled someone out for an arbitrary reason.
In such a scenario, it seems pretty clear that the automated and opaque nature of the decision-making process would be partly to blame. It is true that human decision-making systems could also single people out for arbitrary reasons, but in those cases it will usually be easier to figure out where the system broke down. In the case of an automated and opaque algorithmic process, it will be more difficult to conduct the investigation into what went wrong. Faith in the algorithm, despite its flaws, could be tempting. Transparency may help to alleviate this concern, but again its effectiveness may be limited since it may be impossible to deconstruct the algorithm and figure out why the error arose. All that said, the negative impact in one individual case would need to be balanced against the aggregate gains. It could be that the individual is negatively affected on one occasion, but benefits on nearly all others. As a result, the arbitrariness in the one case may be offset.
This brings us to the final fairness-based objection. This one focuses on autonomy-based harms. Here, we switch focus from the fairness of the outcome to the fairness of the procedure itself. The concern is that algorithmic decision-making processes might fail to respect the dignity and autonomy of the individuals affected by their outputs. There are several ways in which this could happen. The system could rely on data that is collected without informed consent, or it may fail to allow for meaningful human participation and scrutiny due to its intrinsic complexity.
Interestingly, Zarsky finds this type of objection to be the most intractable. Transparency could help to mitigate some of the autonomy-based harms, but not all. Procedural due process rights for algorithmic decision-making systems could also help. But, to some extent, “these concerns are inescapable when opting for an (often automated) algorithmic analysis with inherent complexities” (Zarsky 2015, 13). This is something I have spoken about at length in my various ‘threat of algocracy’ posts and talks.
Okay, that’s it. As I said, this was merely intended to provide a high level summary of some of the key debates and issues surrounding algorithmic decision-making systems. For more detailed analyses, as well as potential solutions, you should read the other posts in my series on Algocracy and the Problems of Big Data (LINK).