Wednesday, April 9, 2014

Equality, Fairness and the Threat of Algocracy: Should we embrace automated predictive data-mining?



I’ve looked at data-mining and predictive analytics before on this blog. As you know, there are many concerns about this type of technology and the increasing role it plays in our lives. Thus, for example, people are concerned about the oftentimes hidden way in which our data is collected prior to being “mined”. And they are concerned about how it is used by governments and corporations to guide their decision-making processes. Will we be unfairly targeted by the data-mining algorithms? Will they exercise too much control over socially important decision-making processes? I’ve reviewed some of these concerns before.

Today, I want to switch tack and, instead of focusing on the moral and political concerns with these technologies, I want to look at a moral and political argument in their favour. The argument comes from Tal Zarsky. It claims that the increasing use of automated predictive analytics should be welcomed because it can help to eliminate racial and ethnic biases that permeate our social decision-making processes. It also argues that resistance to this technology could be attributable to a fear amongst the majority that they will lose their comfortable and privileged position within society.

This strikes me as an interesting and provocative argument. I want to give it a fair hearing in this post. To do this, I’ll break my discussion down into three subsections. First, I’ll clarify the nature of the technology under debate. Second, I’ll outline Zarsky’s argument. Third, I’ll look at some potential problems with this argument.

The discussion is based on two articles from Zarsky, which you can find here and here.


1. What exactly are we talking about?

Zarsky’s argument is about the way in which data-mining algorithms can be used to make predictions about individual behaviour. The argument operates in a world dominated by jargon like “data-mining”, “big data”, “predictive analytics” and so forth. This jargon is often ill-defined and poorly understood. Fortunately, Zarsky takes the time to define some of the key concepts and to specify exactly what his argument is about.

The first key concept is that of “data-mining” which Zarsky defines in the following manner:

Data-Mining: The non-trivial process of identifying valid, novel, potentially useful and ultimately understandable patterns in data.

There is a sense in which we all engage in a degree of data-mining, so defined. The difference nowadays comes from the fact that we are living in the era of “big data”, in which vast datasets are available, and which cannot be mined without algorithmic assistance.

As Zarsky notes, there are several different kinds of data-mining. At a first pass, there is a distinction between descriptive and predictive data-mining. The former is used simply to highlight and explain the patterns in existing datasets. For example, data-mining algorithms could be used to identify significant patterns in experimental data, which can in turn be used to confirm or challenge scientific theories. Predictive data-mining is, by way of contrast, used to make predictions about future events on the basis of historical datasets. Classic examples might be the mining of phone records and internet activity to predict who is likely to carry out a terrorist attack, or the mining of historical purchasing decisions to predict future purchasing decisions. It is the predictive kind of data-mining that interests Zarsky (I, along with others, call this “predictive analytics”, since it is about analysing datasets to make predictions about the future).
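
To make the predictive case concrete, here is a minimal sketch in Python. It assumes scikit-learn is available, and the records, feature names and labels are invented purely for illustration; nothing in Zarsky’s argument turns on this particular toolkit or model.

```python
# A minimal sketch of predictive data-mining: fit a model to historical records,
# then use it to score a new, unseen case. All data here is invented toy data.
from sklearn.tree import DecisionTreeClassifier

# Historical dataset: each row is a past case, each column a recorded attribute
# (say, number of flagged transactions and years of clean filing history).
X_history = [
    [12, 1],
    [0, 9],
    [7, 2],
    [1, 11],
]
y_history = [1, 0, 1, 0]  # 1 = confirmed past case of tax evasion, 0 = none

model = DecisionTreeClassifier().fit(X_history, y_history)

# Prediction: a new case is scored on the basis of the patterns learned above.
new_case = [[9, 3]]
print(model.predict(new_case))        # predicted class for the new case
print(model.predict_proba(new_case))  # estimated class probabilities
```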

In addition to this, there is a distinction between two different kinds of data “searches”:

Subject-based searches: Search datasets for known/predetermined patterns (typically relating to specific people or events).
Pattern-based searches: Search datasets for unknown/not predetermined patterns.

Zarsky’s argument is concerned with pattern-based searches. These are interesting insofar as they grant a greater degree of “autonomy” to the algorithms sorting through the data. In the case of pattern-based searches, the algorithms find the patterns that human analysts and governmental agents might be interested in; they tell the humans what to look out for.
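
The distinction can be sketched roughly as follows. In a subject-based search the analyst specifies the pattern in advance and simply filters the data against it; in a pattern-based search the data is handed to an algorithm (here scikit-learn’s k-means clustering, chosen purely as a stand-in) which reports back whatever structure it finds. The records and field names below are invented.

```python
# A rough sketch of the subject-based / pattern-based distinction on toy records.
from sklearn.cluster import KMeans

records = [
    {"id": "a", "calls_abroad": 40, "late_night_transfers": 2},
    {"id": "b", "calls_abroad": 1,  "late_night_transfers": 35},
    {"id": "c", "calls_abroad": 38, "late_night_transfers": 1},
    {"id": "d", "calls_abroad": 2,  "late_night_transfers": 30},
]

# Subject-based search: the pattern of interest is specified by the analyst.
matches = [r["id"] for r in records if r["calls_abroad"] > 30]

# Pattern-based search: no pattern is specified in advance; the algorithm
# partitions the data and reports whatever groupings it discovers.
features = [[r["calls_abroad"], r["late_night_transfers"]] for r in records]
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

print(matches)   # records matching the predetermined pattern, e.g. ['a', 'c']
print(clusters)  # cluster membership discovered by the algorithm
```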

All of which brings us to the thorny issue of human involvement. Again, as Zarsky notes, humans can be more or less involved in the data-mining process. At present, they are still quite heavily involved, constructing datasets to be mined and defining (broadly) the parameters within which the algorithms work. Furthermore, it is typically the case that humans review the outputs of the algorithms and decide what to do with them. Indeed, in the European Union there is a legal dimension to this: Article 15 of Directive 95/46/EC grants individuals the right not to be subject to decisions that significantly affect them and that are based solely on automated data-processing.

There are, however, exceptions to this requirement and it is certainly technically feasible to create systems that reduce or eliminate human input. Part of the reason for this comes from the existence of two different styles of data-mining process:

Interpretable Processes: This refers to any data-mining process which is based on factors and rationales that can be reduced to human language explanations. In other words, processes which are interpretable and understandable by human beings.
Non-interpretable Processes: This refers to any data-mining process which is not based on factors or rationales that can be reduced to human language explanations. In other words, processes which are not interpretable and understandable by human beings.

The former set of processes allows for significant human involvement, both in terms of setting out the rationales and factors that will be used to guide the data-mining, and in terms of explaining those rationales and factors to a wider audience. The latter set reduces, and may ultimately eliminate, human involvement. This is because in these cases the software makes its decisions on the basis of thousands (maybe hundreds of thousands) of variables which are themselves learned through the data analysis process, i.e. they are not set down in advance by human programmers.
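
To illustrate the contrast (again on invented toy data, and again assuming scikit-learn): a shallow decision tree can be dumped as a set of if/then rules that a human can read and challenge, whereas a small neural network fitted to the same data yields only matrices of learned weights, with no rationale that can be put into ordinary language. Real non-interpretable systems involve vastly more variables, but the point is the same.

```python
# Interpretable vs non-interpretable processes, sketched on toy data.
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

X = [[12, 1], [0, 9], [7, 2], [1, 11]]
y = [1, 0, 1, 0]
feature_names = ["flagged_transactions", "years_clean"]

# Interpretable: the fitted model reduces to human-readable if/then rules.
tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=feature_names))

# Non-interpretable: the "decision" is spread across learned weight matrices.
net = MLPClassifier(hidden_layer_sizes=(8, 8), max_iter=2000, random_state=0).fit(X, y)
print([w.shape for w in net.coefs_])  # weights exist, but carry no readable rationale
```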

In his writings, Zarsky sometimes suggests that interpretable processes are preferable, at least from a transparency perspective. That said, in order for his fairness and equality argument to work, it’s not clear that interpretable processes are required. Indeed, as we are about to see, minimising the ability of humans to interfere with the process seems to be the motivation for that argument. I return to this issue later. For the time being, let’s just look at the argument itself.


2. The Equality and Fairness Argument
To get off the ground, Zarsky’s argument demands that we make an assumption. We must assume that predictive analytics can, as a matter of fact, be useful, i.e. that it can successfully identify likely terrorist suspects, tax evaders, violent criminals, or whatever. If it can’t do that, then there’s really no point in discussing it.

Furthermore, when assessing the merits of predictive analytics we must take care not to consider it in isolation from its alternatives. In other words, we can’t simply focus on the merits and demerits of predictive analytics by itself, without also considering the merits and demerits of policies that are likely to be used in its stead. This is an important point. Governments have legitimate aims in trying to reduce things like terrorism, tax evasion and violent crime. If they are not using predictive analytics to accomplish those aims, they’ll be using something else. The comparators must be factored into the argument. If it turns out that predictive analytics is comparatively better than its alternatives, then it may be more desirable than we think.

But that simply raises the question: what are the comparators? In his most detailed discussion, Zarsky identifies five alternatives. For present purposes, I’m going to simplify and just talk about one: any system in which humans decide who gets targeted. This could actually cover a wide variety of different policies; all that matters is that they share this one feature. This is to be contrasted with an automated system that runs entirely on the basis of predictive data-mining algorithms.

With all this in mind, we can proceed to the argument proper. The argument works from a simple motivating premise: it is morally and politically better if our social decision-making processes do not arbitrarily and unfairly target particular groups of people. Consider the profiling debate in relation to anti-terrorism and crime-prevention. One major concern with profiling is that it is used to arbitrarily target and discriminate against certain racial and ethnic minorities. That is something that we could do without. If people are going to be targeted by such measures, they need to be targeted on legitimate grounds (i.e. because they are genuinely more likely to be terrorists or to commit crimes).

Working from that motivating premise, Zarsky then adds the comparative claim that automated predictive analytics will do a better job of eliminating arbitrary prejudices and biases from the process. That gives us the following argument:


  • (1) It is better, ceteris paribus, if our social decision-making processes do not arbitrarily and unfairly target particular groups of people.
  • (2) Social decision-making processes that are guided by automated predictive analytics are less likely to do this than processes that are guided by human beings.
  • (3) Therefore, it would be better, ceteris paribus, to have social decision-making processes that are guided by automated predictive analytics.


Let’s probe premise (2) in a little more depth. Why exactly is this likely to be true? To back it up, Zarsky delves into the literature on implicit and unconscious biases. Those who are familiar with this literature will know that a variety of experiments in social psychology reveal that even when decision-makers don’t think they are being racially or ethnically prejudiced, they often are. This is because they subconsciously and implicitly associate people from certain racial and ethnic backgrounds with other negative traits. If you like, you can perform an implicit association test (IAT) on yourself to see whether you exhibit such biases.

Zarsky’s point is simply that the algorithms at the heart of predictive analytical programmes will not be susceptible to the same kinds of hidden bias, especially if they are automated and the capacity of human beings to override them is limited. As he himself puts it:

[A]utomation introduces a surprising benefit. By limiting the role of human discretion and intuition and relying upon computer-driven decisions this process protects minorities and other weaker groups. 
(Zarsky, 2012, pg. 35)

Zarsky builds upon this by suggesting that one of the sources of opposition to automated, algorithm-based decision-making could be the privileged majorities who benefit from the current system. They may actually fear the indiscriminate nature of the automated process. If the process is guided by a human, then the majorities can appeal to human prejudices in order to secure more favourable, less intrusive outcomes. If the process is guided by a computer, they won’t be able to do this. Consequently, some of the burden of enforcement and prevention mechanisms will be shifted onto them, and away from the minorities who currently bear the brunt.


3. Problems and Conclusions
That’s the argument in outline form. The next question is whether it is persuasive. That’s a difficult question to answer in the space of a blog post like this, and it is one I am still pondering. Nevertheless, there are a few obvious, general points of criticism.

The first is that premise (2) might actually be wrong. It may be that predictive analytics is just as biased and prejudiced as human decision-making. This could arise for any number of reasons, some of which Zarsky acknowledges. For example, the datasets that are fed into the algorithms could themselves be the products of biased human policies on data collection. Likewise, the sorting algorithms might have built-in biases that we can’t fully understand or protect against. This is something that could be exacerbated if the whole process is non-interpretable.
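
The worry about biased inputs can be made vivid with a deliberately crude toy example (the data and group labels are entirely invented). Suppose past enforcement concentrated on one group, so that its members were more often checked and therefore more often labelled as offenders, even though the underlying offence rates of the two groups were the same. A model trained on those labels simply learns the historical skew and projects it forward.

```python
# "Biased data in, biased predictions out": a deliberately crude toy example.
from sklearn.tree import DecisionTreeClassifier

# Features: [group (0 = A, 1 = B), some behavioural attribute]. Behaviour is
# identical across groups; only past enforcement attention differed.
X = [[0, 5], [0, 6], [0, 5], [1, 5], [1, 6], [1, 5]]
# Labels record who was caught, which tracks who was checked, not behaviour.
y = [1, 1, 1, 0, 0, 0]

model = DecisionTreeClassifier().fit(X, y)
# Same behaviour, different predictions, purely because of group membership.
print(model.predict([[0, 5], [1, 5]]))
```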

All of which brings me to another obvious point of criticism. The “ceteris paribus” clause in the first premise is significant. While it is indeed true that — all else being equal — we prefer to have unbiased and unprejudiced decision-making systems, all else may not be equal here. Elsewhere on this blog, I have outlined something I call the “threat of algocracy”. This is a threat to the legitimacy of our social decision-making processes that is posed by the incomprehensibility, non-interpretability and opacity of certain kinds of algorithmic control. The threat is important because, according to most theories of procedural justice, any public procedure that issues coercive judgments should be understandable by those who are affected by it. The problem is that this may not be the case if we hand control over to the automated processes recommended by Zarsky.

He himself acknowledges this point by highlighting how we prefer to have human decision-makers because at least we can engage with them at a human level of rational thought and argumentation: we can identify their assumptions and spot their faulty logic (if indeed it is faulty). But Zarsky has a response to this worry. He can fall back on the desirability of interpretable predictive analytics. In other words, he can argue that we can have the best of both worlds: unbiased decision-making, coupled with human comprehensibility. All we have to do is make sure that the rationales and factors underlying the automated predictive algorithms can be explained to human beings.

That might be a satisfactory solution, but I’m not entirely convinced. One reason for this is that I think having interpretable processes might re-open the door to the kinds of biased human decision-making that originally motivated Zarsky’s argument. The more humans can understand and shape the process, the more scope there is for their unconscious biases to affect its outputs. So perhaps the lack of bias and the degree of comprehensibility are in tension with one another. Perhaps additional solutions are needed to get the best of both worlds (e.g. moral enhancement)?

I think that question is a nice point on which to end.
