Thursday, July 12, 2018

Episode #41 - Binns on Fairness in Algorithmic Decision-Making


In this episode I talk to Reuben Binns. Reuben is a post-doctoral researcher at the Department of Computer Science, University of Oxford. His research focuses on the technical, ethical and legal aspects of privacy, machine learning and algorithmic decision-making. We have a detailed and informative discussion (for me at any rate!) about recent debates over algorithmic bias and discrimination, and how they could be informed by the philosophy of egalitarianism.

You can download the episode here or listen below. You can also subscribe on Stitcher and iTunes (the RSS feed is here).


 

Show notes

  • 0:00 - Introduction
  • 1:46 - What is algorithmic decision-making?
  • 4:20 - Isn't all decision-making algorithmic?
  • 6:10 - Examples of unfairness in algorithmic decision-making: The COMPAS debate
  • 12:02 - Limitations of the COMPAS debate
  • 15:22 - Other examples of unfairness in algorithmic decision-making
  • 17:00 - What is discrimination in decision-making?
  • 19:45 - The mental state theory of discrimination
  • 25:20 - Statistical discrimination and the problem of generalisation
  • 29:10 - Defending algorithmic decision-making from the charge of statistical discrimination
  • 34:40 - Algorithmic typecasting: Could we all end up like William Shatner?
  • 39:02 - Egalitarianism and algorithmic decision-making
  • 43:07 - The role that luck and desert play in our understanding of fairness
  • 49:38 - Deontic justice and historical discrimination in algorithmic decision-making
  • 53:36 - Fair distribution vs Fair recognition
  • 59:03 - Should we be enthusiastic about the fairness of future algorithmic decision-making?

 

Relevant Links

  • 'Machine Bias' - the ProPublica story on unfairness in the COMPAS recidivism algorithm
  • 'Inherent Tradeoffs in the Fair Determination of Risk Scores' by Kleinberg et al -- an impossibility proof showing that a risk score cannot simultaneously be well calibrated and have equal false positive and false negative rates across two populations (except in the special cases where the two populations have the same base rate, or prediction is perfect)
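The intuition behind the Kleinberg et al result can be shown with a toy calculation (a sketch of my own, not the paper's construction): give two groups the same, perfectly calibrated pair of risk scores, and the false positive rates come apart whenever the base rates differ.

```python
# Toy illustration of the calibration/error-rate tradeoff. Two groups
# receive a risk score that is perfectly calibrated within each group:
# among people scored s, a fraction s actually reoffend. The groups
# differ only in their base rates.

def group_stats(n_low, n_high, s_low=0.1, s_high=0.9, threshold=0.5):
    """n_low people get score s_low, n_high get score s_high.
    Calibration means s_low * n_low of the low-score people are true
    positives, and similarly for the high-score people. Returns
    (base_rate, false_positive_rate) when everyone scoring above the
    threshold is flagged as high risk."""
    positives = s_low * n_low + s_high * n_high
    base_rate = positives / (n_low + n_high)
    # Everyone scored s_high is flagged; (1 - s_high) of them are negatives.
    false_pos = (1 - s_high) * n_high
    negatives = (1 - s_low) * n_low + (1 - s_high) * n_high
    return base_rate, false_pos / negatives

base_a, fpr_a = group_stats(n_low=90, n_high=10)   # low-base-rate group
base_b, fpr_b = group_stats(n_low=50, n_high=50)   # high-base-rate group

print(f"Group A: base rate {base_a:.2f}, FPR {fpr_a:.3f}")
print(f"Group B: base rate {base_b:.2f}, FPR {fpr_b:.3f}")
```

Both groups are scored by exactly the same calibrated rule, yet the high-base-rate group ends up with a much higher false positive rate — which is, in miniature, the pattern ProPublica found in COMPAS.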



Tuesday, July 10, 2018

Transhumanism as Utopianism: A Critical Analysis




The poem ‘The Land of Cockaygne’ is one of the original works of utopian literature. A satire written in Ireland in the 14th century, the poem describes a corrupt community of monks living in the mythical land of Cockaigne (the different spellings reflect different dialects). Cockaigne is a medieval idyll: the land of plenty, where no one goes without, where no one has to work, and where food is gluttonously abundant.

But there is something quaint about this medieval idyll. As Steven Pinker points out in his book Enlightenment Now, if Cockaigne represents the ideal society, then arguably we’ve managed to create it in Western countries in the 21st century:

Hungry Europeans [once] titillated themselves with food pornography, such as tales of Cockaigne, a country where pancakes grew on trees, the streets were paved with pastry, roasted pigs wandered around with knives in their backs for easy carving, and cooked fish jumped out of the water and landed at one’s feet. Today we live in Cockaigne, and our problem is not too few calories but too many. As the comedian Chris Rock observed, “This is the first society in history where the poor people are fat.” 
(Pinker 2018, 69)
It’s an attractive idea, but I think Pinker is wrong. While we in the West (and elsewhere) may have created a world of relative food abundance, it’s clear that the mythical land of Cockaigne wasn’t just about food. It was also about sex, health, entertainment and politics. Cockaigne was a place where there was no illness or disease, an abundance of sexual pleasure and entertainment, and where the traditional power elites were taken down a peg or two. It was a ‘flat’ society, where everyone had what they wanted when they wanted it. I don’t think we have reached that quite yet.

Whether we should want to is another question. Utopias are, in principle, ideal societies. They radically improve upon our current lot. But what would actually count as a utopia is a matter of some debate. I don’t have any particular interest in whether Cockaigne represents the best that humanity can hope for — I suspect it does not — but I do have an interest in modern day utopian projects. Are they feasible? Are they desirable?

Transhumanism would seem to be the quintessential modern day utopian project. Transhumanists are trying to create a world of technological abundance and perfection. A world where we can control everything (our intelligence, our happiness, our lifespans) through technology. It’s a contemporary Cockaignian fantasy, updated for the technologically advanced age in which we now live.

But transhumanism can be criticised for these utopian leanings. Michael Hauskeller, in his article ‘Reinventing Cockaigne: Utopian themes in Transhumanist Thought’ claims that transhumanist philosophy — for all its sophisticated arguments and principles — is contaminated by its implicit and foundational utopianism. In this post, I want to try to understand Hauskeller’s argument. Is it true that Transhumanism is a utopianist philosophy? Does this really have disastrous consequences? Let’s see.


1. Is Transhumanism Utopian?
Hauskeller presents two main arguments in his article. The first is that there are direct analogues between transhumanism and classic works of utopian literature and, as a result, it is right and proper to refer to transhumanism as a form of utopianism. The second is that this utopianism has a contaminating effect on other transhumanist arguments. I’m far more interested in the second argument. Indeed, I’d be willing to simply concede the first argument just so we can get to the second. But Hauskeller spends far more time on the first argument and he does say some interesting things about the kind of utopianism you can find in the transhumanist literature. Let’s review them briefly.

First, some transhumanist works wear their utopian leanings on their sleeves. Nick Bostrom, for example, has written a ‘Letter from Utopia’, an imaginary letter from a resident of a future transhumanist society to those of us living in the early 21st century. In this imagined future there is an end to all suffering, ageing and disease. There is also an abundance of pleasure. Some of the things Bostrom says in this letter could be taken almost directly from the medieval myth of Cockaigne. For example:

Pleasure! A few grains of this magic ingredient are dearer than a king’s treasure, and we have plenty of it here in Utopia. It pervades everything we do and everything we experience. We sprinkle it in our tea. 
(Bostrom 2010)

David Pearce is another transhumanist who shares this Cockaignian outlook. He wants to eliminate suffering and ensure that we can experience sublime happiness all the time. He refers to this project, variously, as ‘paradise engineering’ and the ‘naturalisation of heaven’. The parallels between the work of both authors and the medieval myth of Cockaigne are, as Hauskeller points out, quite striking.

Second, Hauskeller argues that transhumanists share some of the utopian myths that you find among 16th century alchemists. Alchemy is popularly understood as the attempt to convert base metals into gold, but Hauskeller argues that this popular conception only scratches the surface of what the alchemists were trying to do. They were trying to unlock the secrets of the universe and attain a utopian existence. Breaking down the ontological barriers between different substances — base metals and gold — was the way to do this. It would allow them to exercise perfect control over the natural order and attain the Elixir of Life. To do this, alchemists searched for a magical device — the Philosopher’s Stone — that would provide the means to their utopian ends.

Hauskeller argues that there are obvious parallels with the transhumanist project. Transhumanists are also trying to exercise perfect control over nature (specifically their own bodies and brains) and to find the Elixir of Life. They see technology, particularly biotech and nanotech, as the means to do this. Thus, technology takes on a similar role to that of the Philosopher’s Stone:

Biotechnology promises to be the real Philosopher’s Stone, that elusive device that the alchemists so desperately tried to find and which would finally give them the power to reinvent the world so that it would match their desires. 
(Hauskeller 2012, 7)

Finally, Hauskeller argues that transhumanism conceives of the ideal form of existence not as a fixed endpoint but, rather, as a continual upward cycle of improvement. Through technology, we can constantly improve and enhance ourselves and our societies. This is not a process that ever has to be brought to a halt. In other words, transhumanists echo and adopt one of the key shifts in modern utopian thought: away from ‘blueprint’ models of utopia and towards ‘horizonal’ models. I discussed this distinction previously, but the essence of it is nicely summed up in this quote from HG Wells (which Hauskeller uses):

The Modern Utopia must not be static but kinetic, must shape not as a permanent state but as a hopeful stage leading to a long ascent of stages. 
(Wells, A Modern Utopia)

Parallels and analogies of this sort lead Hauskeller to conclude that transhumanism is utopian to its core. It is a direct descendant of classic utopianism and it carries the torch of utopianism into the future. As I said, I’m happy to concede this point to Hauskeller. The deeper question is: does it matter? Does it undermine the transhumanist project in some way?


2. Does it matter? The Contamination Argument
There are two reasons for thinking that it doesn’t. The first is simply that utopianism is a good thing. It is a good thing that people articulate and defend possible ideal societies. We shouldn’t rest on our laurels and assume that our current way of life is the best. We should be open to the possibility of radical improvement. Transhumanism is a breath of fresh air in this regard. There are plenty of techno-pessimists and morose social critics out there. They all lament the state of humanity. Isn’t it nice to have people defend a more positive and hopeful outlook? So what if transhumanism is laced with utopian language and ideals?

The second reason is possibly more important. Even if it turns out that utopianism is not such a positive thing, there is still the fact that transhumanists have independent arguments for each of their pet projects. In other words, there are specific reasons why they think that, say, cognitive enhancement is a good thing, or that life extension is a good thing, or that happiness engineering is a good thing. Indeed, some of their arguments have become extremely elaborate over the years as they have responded to critics. Those arguments ultimately stand and fall on their own merits. Whether they are undergirded by a generally utopian outlook or leaning is, strictly speaking, irrelevant to their critical assessment. Call this the independence hypothesis:

Independence Hypothesis: The arguments for specific transhumanist projects (cognitive enhancement, genetic engineering, life extension, happiness engineering etc.) stand and fall on their own merits, i.e. they are independent of any underlying utopianism.

Hauskeller rejects both of these reasons. He thinks that utopianism is problematic and that transhumanist arguments are not independent of it. He favours what I would call a ‘contamination argument’ against transhumanism. He doesn’t set it out in formal terms, but I will make an attempt to do so here:


  • (1) If a set of arguments (A1…An) in favour of a set of conclusions (C1…Cn) is (a) motivated by an underlying theory/ideology; (b) that ideology is flawed or problematic and (c) those flaws carry over into or get reflected in the premises of the arguments, then those arguments are contaminated by that theory/ideology.

  • (2) The arguments that transhumanists offer in support of their projects are (a) motivated by an underlying theory/ideology of utopianism, (b) that ideology is flawed and problematic and (c) these flaws get reflected in the premises of the arguments.

  • (3) Therefore, transhumanism is contaminated by utopianism.


I don’t know that Hauskeller would agree with this formalisation, but I think it captures what he is trying to do. Consider the following quote from his article:

Utopian ideas and images do not merely serve as motivational aids to get people to support the radical enhancement agenda, they also affect the very arguments that are proposed in favour of human self-transformation and in particular in support of the claim that it is our moral duty to develop and use technologies that make this happen. As philosophical arguments they appear to be self-contained, but in truth utopian ideas form the fertile soil from which those arguments grow, so that without them they would wither and die. 
(Hauskeller 2012, 11)

That sounds like a contamination argument if ever I saw one. Following my formalisation, for the contamination argument to succeed, Hauskeller will need to show that transhumanist arguments are (a) motivated by utopianism, (b) that utopianism is flawed and problematic, and (c) that these flaws carry over into the premises of transhumanist arguments. He thinks he can do this. Since I have already, effectively, conceded the first of these points, that leaves us with the other two.

As best I can tell, Hauskeller offers three main arguments in favour of (b) and (c). The first is that utopian visions or ideals tend to be incompletely sketched out. So a utopianist will come along and paint a seemingly pleasant picture — pleasure being sprinkled in our tea, lives being extended indefinitely, cognition being enhanced to an extreme — and extoll all the benefits of this utopian existence, but they won’t think it all the way through. They won’t consider all the unintended side effects of realising these utopian aims. What if we spend all day drinking endless cups of pleasure-infused tea, never lifting a finger to do great things? What if in our lust for life extension we become excessively risk averse and never take the risks needed to innovate and make things even better? Thinking things through is important. Utopian projects are laced with uncertainty. We don’t know exactly how things will pan out if we pursue them, and these unintended side effects might be pretty bad (even if they have a low probability of materialising). We cannot make do with the incomplete sketches of the utopian.

The claim, then, is that this incompleteness carries over to the arguments in favour of transhumanist projects. Defenders of these arguments don’t think everything through. Is this true? Hauskeller gives some examples from the transhumanist literature and I think he makes a reasonable case. But I don’t think it is as significant as he makes out. Philosophical arguments are rarely complete in their initial presentation. It is part and parcel of the ordinary scholarly process that objections are formulated and replied to by the original defenders -- thrust and parry, argument and objection, example and counterexample. Through the constant iteration of the scholarly back-and-forth the arguments can be refined and strengthened. It was probably fair to say that transhumanist arguments were once guilty of incomplete specification, and naive utopianism, but I think if you follow the scholarly conversation through to the present day, you find that they are much less so. At least, that’s my sense of the current state of play. This does mean, however, that the arguments may have lost some of their utopian lustre. They may be more modest as a result of refinement. But that’s not necessarily a bad thing. I have long favoured what I would call a 'modest' form of transhumanism.

Hauskeller’s second argument is that utopianists often present their views with an air of inevitability. Social progression or human evolution is supposedly tending towards their utopian idyll. If we just let the cosmic dance play out to the end we will arrive at the utopian paradise. There is an element of this in Marxism and Hegelianism. There is also an element of it in transhumanist argumentation. Although transhumanists do offer arguments in favour of their projects, they often presuppose within the premises of those arguments the notion that the project is part of humanity’s destiny and/or that resistance is, in some sense, futile. This is problematic because it obscures the fact that we have a choice. Things are not inevitable. We must actively choose to pursue the transhumanist project, not simply sit back and enjoy the ride.

There are, of course, deep metaphysical questions at play here. Maybe there is some ultimate destiny to the universe? Maybe a particular future is inevitable? It would take too long to properly probe the metaphysical depths here. Nevertheless, I am willing to concede to Hauskeller that this tendency toward fatalism is a bit of a problem for transhumanists. It is often a way of avoiding hard argumentative work. The transhumanist will say — and I have been guilty of this myself — that ‘sure, you could object to X, but X is going to happen anyway so you may as well get used to it!’ Unless there are very good reasons for thinking that X is going to happen anyway, I think this move should be avoided. Strong independent reasons for thinking that X is desirable should be articulated.

Hauskeller’s third argument is that utopianism tends to make the better the enemy of the good. In other words, utopians are so busy imagining and planning for some wonderful future that they overlook or ignore what is good about our current form of existence. Indeed, they go further. In advocating for their utopian vision, they often denigrate or criticise what we currently have. They need to get people enthusiastic about the future and one way to do this is breed dissatisfaction with the present.

Again, I think there is some element of truth to this. For example, I don’t think transhumanists should oversell the idea of life extension or digital ‘immortality’. Although I think it would be, on balance, a good thing if we could radically extend the human lifespan, I suspect that (a) this isn’t going to happen any time soon and (b) we are going to have to embrace death at some point. Fixating on the idea that death is a great evil that could be overcome if we only reprioritised our R&D seems naive to me. We have to live with our mortality. That said, I’m not convinced that the overselling of the future is always present in transhumanist arguments and I have, in some published work, challenged Hauskeller for assuming that transhumanist projects (specifically radical enhancement projects) necessarily entail making the (future) better the enemy of the (present) good. Indeed, I think that certain transhumanist aims are about recognising and conserving what is good about our current existence.


3. Conclusion
In sum, Hauskeller criticises transhumanism for its latent or implicit utopianism. In this post I have conceded that transhumanism may be utopianist in its leanings, but pushed back against the notion that this is a major problem. Although Hauskeller makes some reasonable critiques of transhumanist rhetoric, the more careful, extensively developed, philosophical arguments for transhumanist projects can, I think, escape any charge of contamination.

I want to close with one final point. Even though I conceded the utopian leanings of transhumanism to Hauskeller, there are some well-known transhumanists who resist this idea. Stefan Lorenz Sorgner, for example, has recently written a defence of an anti-utopian transhumanism. I recommend checking it out if you are interested in this debate.




Sunday, July 8, 2018

Building Better Sex Robots: Lessons from Feminist Pornography







Here's another new paper. This one looks at the ever-popular topic of sex robots through the lens of feminist pornography. It is a draft of a book chapter set to appear in an edited collection entitled AI Love You: Developments on Human-Robot Intimate Relations, which is edited by Youfang Zhou and Martin Fischer and will be published by Springer in due course. I provide a link to an OA version of the draft below.

Title: Building better sex robots: Lessons from Feminist Pornography
Book: AI Love You: Developments on Human-Robot Intimate Relations
Links: Philpapers
Abstract: How should we react to the development of sexbot technology? Taking their cue from anti-porn feminism, several academic critics lament the development of sexbot technology, arguing that it objectifies and subordinates women, is likely to promote misogynistic attitudes toward sex, and may need to be banned or restricted. In this chapter I argue for an alternative response. Taking my cue from the sex positive ‘feminist porn’ movement, I argue that the best response to the development of ‘bad’ sexbots is to make better ones. This will require changes to the content, process and context of sexbot development. Doing so will acknowledge the valuable role that technology can play in human sexuality, and allow us to challenge gendered norms and assumptions about male and female sexual desire. This will not be a panacea to the social problems that could arise from sexbot development, but it offers a more realistic and hopeful vision for the future of this technology in a pluralistic and progressive society.   




Friday, July 6, 2018

Towards an Ethics of AI Assistants: An Initial Framework




I have a new paper in the journal Philosophy & Technology. It's called 'Towards an Ethics of AI Assistants'. It looks at some of the leading ethical objections to the personal use of AI assistants and tries to develop some principles that could be of use to both the users and designers of this technology. Details and links to OA versions are below.

Title: Towards an Ethics of AI Assistants: an Initial Framework
Journal: Philosophy & Technology
Links: Official; Philpapers; Academia; Researchgate 
Abstract: Personal AI assistants are now nearly ubiquitous. Every leading smartphone operating system comes with a personal AI assistant that promises to help you with basic cognitive tasks: searching, planning, messaging, scheduling, and so on. Usage of such devices is effectively a form of algorithmic outsourcing: getting a ‘smart’ algorithm to do something on your behalf. Many have expressed concerns about this algorithmic outsourcing. They claim that it is dehumanising, leads to cognitive degeneration, and robs us of our freedom and autonomy. Some people have a more subtle view, arguing that it is problematic in those cases where its use may degrade important interpersonal virtues. In this article, I assess these objections to the use of AI assistants. I will argue that the ethics of their use is complex in the sense that there are no quick fixes or knockdown objections to the practice, but there are some legitimate concerns. By carefully analysing and evaluating some of the most typical objections that have been lodged to date, we can begin to articulate an ethics of personal AI use that navigates those concerns. In the process, we can locate some paradoxes in our thinking about outsourcing and technological dependence, and we can think more clearly about what it means to live a good life in the age of smart machines.


This paper has been picked up by a few people already, including Wessel Reijers (a philosopher based at Dublin City University), who kindly said the following about it:

   






Friday, June 29, 2018

Episode #40: Nyholm on Accident Algorithms and the Ethics of Self-Driving Cars


In this episode I talk to Sven Nyholm about self-driving cars. Sven is an Assistant Professor of Philosophy at TU Eindhoven with an interest in moral philosophy and the ethics of technology. Recently, Sven has been working on the ethics of self-driving cars, focusing in particular on the ethical rules such cars should follow and who should be held responsible for them if something goes wrong. We chat about these issues and more. You can download the podcast here or listen below. You can also subscribe on iTunes and Stitcher (the RSS feed is here).



Show Notes:

  • 0:00 - Introduction
  • 1:22 - What is a self-driving car?
  • 3:00 - Fatal crashes involving self-driving cars
  • 5:10 - Could self-driving cars ever be completely safe?
  • 8:14 - Limitations of the Trolley Problem
  • 11:22 - What kinds of accident scenarios do we need to plan for?
  • 17:18 - Who should decide which ethical rules a self-driving car follows?
  • 23:47 - Why not randomise the ethical rules?
  • 25:18 - Experimental findings on people's preferences with self-driving cars
  • 29:16 - Is this just another typical applied ethical debate?
  • 31:27 - What would a utilitarian self-driving car do?
  • 36:30 - What would a Kantian self-driving car do?
  • 39:33 - A contractualist approach to the ethics of self-driving cars
  • 43:54 - The responsibility gap problem
  • 46:12 - Scepticism of the responsibility gap: can self-driving cars be agents?
  • 53:17 - A collaborative agency approach to self-driving cars
  • 58:18 - So who should we blame if something goes wrong?
  • 1:03:40 - Is there a duty to hand over driving to machines?
  • 1:07:30 - Must self-driving cars be programmed to kill?

Relevant Links




Monday, June 25, 2018

Radical Algorithmic Domination: How algorithms exploit and manipulate




(Related post: Algorithmic Micro-Domination)

In a recent post, I argued that the republican concept of ‘domination’ could be usefully deployed to understand the challenge that algorithmic governance systems pose to individual freedom. To be more precise, I argued that a modification of that concept — to include instances of ‘micro-domination’ — provided both a descriptively accurate and normatively appropriate basis for understanding the challenge.

In making this argument, I was working with a conception of domination that tries to shed light on what it means to be a free citizen. This is the ‘freedom as non-domination’ idea that has been popularised by Philip Pettit. But Pettit’s conception of domination has been challenged by other republican theorists. Michael Thompson, for example, has recently written a paper entitled ‘Two Faces of Domination in Republican Political Theory’ that argues that Pettit’s view is too narrow and fails to address the forms of domination that plague modern societies. Thompson favours a ‘radical’ conception of domination that focuses less on freedom, and more on inequality of power in capitalist societies. He claims that this conception is more in keeping with the views of republican writers like Machiavelli and Rousseau, and more attuned to the realities of our age.

In this post, I want to argue that Thompson’s radical republican take on domination can also be usefully deployed to make sense of the challenges posed by algorithmic governance. In doing so, I hope to provide further support for the claim that the concept of domination provides a unifying theoretical framework for understanding and addressing this phenomenon.

Thompson’s ‘radical republicanism’ focuses on two specific forms of domination that play a critical role in the organisation of modern societies. They are: (i) extractive domination; and (ii) constitutive domination. In what follows, I will explain both of these forms of domination, trying to stay true to Thompson’s original presentation, and then outline the mechanisms through which algorithmic governance technologies facilitate or enable them. The gist of my argument is that algorithmic governance technologies are particularly good at doing this. I will close by addressing some objections to my view.


1. Extractive Algorithmic Domination
Domination is a kind of power. It arises from asymmetrical relationships between two or more individuals or groups of individuals. The classic example of such an asymmetrical relationship is that between a slave and his/her master. Indeed, this relationship is the example that Pettit uses to illustrate and explain his conception of freedom as non-domination. His claim is that one of the key properties of this relationship is that the slave can never be free, no matter how kind or benevolent the master is. The reason for this is that you cannot be free if you live subject to the arbitrary will of another.

The problem with Pettit’s view is that it overlooks other manifestations and effects of asymmetrical relationships. Pettit sees domination as an analytical concept that sheds light on the nature of free choice. But there is surely more to it than that? To live in a state of domination is not simply to have one’s freedom undermined. It is also to have one’s labour/work exploited and mind controlled by a set of values and norms. Pettit hints at these things in his work but never makes them central to his analysis. Thompson’s radical republicanism does.

The first way it does this is through the idea of extractive domination:

Extractive Domination: Arises when A is in a structural relation with B whose purpose is to enable A to extract a surplus benefit from B, where:
‘a structural relation’ = a relationship defined by social roles and social norms; and
‘a surplus benefit’ = a benefit (usually in the form of physical, cognitive and emotional labour) that would otherwise have benefited B or the wider community but instead flows to A for A’s benefit.


The master-slave relation provides an example of this: it is a relationship defined by social roles and norms, and those norms enable masters to extract surplus benefit (physical labour) from the slave. Other examples would include capitalist-worker relations (in a capitalist society) and male-female relations (under conditions of patriarchy). Male-female relations provide an interesting, if controversial, example. What it means to be ‘male’ or ‘female’ is defined (at least in part) by social norms, expectations, and values. Thus a relationship between a man and a woman is (at least in part) constituted by those norms, expectations and values. Under conditions of patriarchy, these norms, expectations and values enable men to extract surplus benefits from women, particularly in the form of sexual labour and domestic labour. In the case of sex, the norms and values are directed at male sexual pleasure as opposed to female sexual pleasure; in the case of domestic labour, this provides the foundation for the man to live a ‘successful’ life. Of course, this take on male-female relations is actively resisted by some, and I’ve only provided a simplistic sketch of what it means to live under conditions of patriarchy. Nevertheless, I hope it gives a clearer sense of what is meant by extractive domination. We can return to the problem with simplistic sketches of social systems later.

What I want to do now is to argue that algorithmic governance technologies enable extractive domination. Indeed, that they are, in many ways, the ideal technology for facilitating extractive domination. I don’t think this is a particularly difficult case to make. Contemporary algorithmic governance technologies track, monitor, nudge and incentivise our behaviour. The vast majority of these technologies do so by adopting the ‘Surveillance Capitalist’ business model (see Zuboff 2015 for more on the idea of surveillance capitalism, or read my summary of her work if you prefer). The algorithmic service is often provided to us for ‘free’. I can use Facebook for ‘free’; I can read online media for ‘free’; I can download the vast majority of health and fitness apps for ‘free’ or minimal cost. But, of course, I pay in other ways. These services make their money by extracting data from my behaviour and then by monetising this in various ways, most commonly by selling it to advertisers.

The net result is a system of extractive domination par excellence. The owners and controllers of the algorithmic ecosystem gain a significant surplus benefit from the labour of their users/content providers. Just look at the market capitalisation of companies like Facebook, Amazon and Google, and the net worth of their founders. All of these individuals are, from what I have read, hard-working and fiercely determined, and they also displayed considerable ingenuity and creativity in creating their digital platforms. Still, it is difficult to deny that since they got up and running these digital platforms have effectively functioned to extract rents from the (often unpaid) labour of others. In many ways, the system is more extractive than that which existed under traditional capitalist-worker relations. At least under that system, the workers received some economic benefit for their work, however minimal it may have been, and through legal and regulatory reform, they often received considerable protections and insurances against the depredations of their employers. But under surveillance capitalism the people from whom the surplus benefits are extracted are (often) no longer classified as ‘workers’; they are service users or self-employed gig workers. They must fend for themselves or accept the Faustian bargain in availing of free services.

That’s not to say that users receive no benefits or that there isn’t some value-added by the ingenuity of the technological innovators. Arguably, the value of my individual data isn’t particularly high in and of itself. It is only when it is aggregated together with the data of many others that it becomes valuable. You could, consequently, argue that the surveillance capitalists are not, strictly speaking, extracting a surplus benefit because without their technology there would be no benefits at all. But I don’t think that is quite right. It is often the case that certain behaviours or skills lack value before a market for them is created — e.g. being an expert in digital marketing wouldn’t have been a particularly valuable skill 100 years ago — but that doesn’t mean that they don’t have value once the market has been established, or that it is impossible for people to extract a surplus benefit from them. Individual data clearly has some value and it seems obvious that a disproportionate share of that value flows towards the owners and controllers of digital platforms. Jaron Lanier’s book Who Owns the Future? looks into this problem in quite some detail and argues in favour of a system of micro-payments to reward us for our tracked behaviours. But that’s all by-the-by. The important point here is that algorithmic governance technologies enable a pervasive and powerful form of extractive domination.


2. Constitutive Algorithmic Domination
So much for extractive domination. What about constitutive domination? To understand this concept, we need to go back, for a moment, to Pettit’s idea of freedom as non-domination. As you’ll recall, the essence of this idea is that to be free you must be free from the arbitrary will of another. I haven’t made much of the ‘arbitrariness’ condition in my discussions so far, but it is in fact crucial to Pettit’s theory. Pettit (like most people) accepts that there can be some legitimate authorities in society (e.g. the state). What differentiates legitimate authorities from illegitimate ones is their lack of arbitrariness. A legitimate authority could, in some possible world, interfere with your choices, but it would do so in a non-arbitrary way. What it means to be non-arbitrary is a matter of some controversy. Pettit argues that potential interferences that are contrary to your ‘avowed interests’ are arbitrary. If you have an avowed interest in X, then any potential interference in X is arbitrary. Consequently, he seems to favour a non-moral theory of arbitrariness: what you have an avowed interest in may or may not be morally acceptable. But he has been criticised for this. Some argue that there must be some moralised understanding of arbitrariness if we are going to reconcile republicanism with democracy, which is something Pettit is keen to do.

Fortunately, we do not have to follow this debate down the rabbit hole. All that matters here is that Pettit’s theory exempts ‘legitimate’ authorities from the charge of domination. Thompson, like many others before him, finds this problematic. He thinks that, in many ways, the ultimate expression of domination is when the dominator gets their subjects to accept their authority as legitimate. In other words, when they get their subjects to see the dominating power as something that is in keeping with their avowed interests. In such a state, the subject has so internalised the norms and values of domination that they no longer perceive it as an arbitrary exercise of power. It is just part of the natural order; the correct way of doing things. This is the essence of constitutive domination:

Constitutive Domination: Arises when A has internalised the norms and values that legitimate B’s domination; i.e. thinking outside of the current hierarchical order becomes inconceivable for A.


This is the Marxist ideal of ‘false consciousness’ in another guise, and Thompson uses that terminology explicitly in his analysis (indeed, if it wasn’t already obvious, it should by now be obvious that ‘radical republicanism’ is closely allied to Marxism). Now, I have some problems with the idea of false consciousness. I think it is often used in a sloppy way. I think we have to internalise some set of norms and values. From birth, we are trained and habituated to a certain view of life. We have all been brainwashed into becoming the insiders to some normative system. There is no perfectly neutral, outside view. And yet people think that you can critique a system of norms and values merely by pointing out that it has been foisted upon us. That is often how ‘false consciousness’ gets used in everyday conversations and debates (though, to be fair, it doesn’t get used in that many of my everyday conversations). But if all normative systems are foisted upon us, then merely pointing this out is insufficient. You need to do something more to encourage someone to see this set of norms and values as ‘false’. Fortunately, Thompson does this. He doesn’t take issue with all the possible normative systems that might be foisted upon us; he only takes issue with the ones that legitimate hierarchical social orders, specifically those that include relationships of extractive domination. This narrowing of focus is key to the idea of constitutive domination.

Do algorithmic governance technologies enable constitutive domination? Let’s think about what that might mean in the present context. In keeping with Thompson’s view, I take it that it must mean that the technologies train or habituate us to a set of norms and values that legitimate the extractive relations of surveillance capitalism. Is that true? And if so, what might the training mechanisms be?
Well, I have to be modest here. I can’t say that it is true. This is something that would require empirical research. But I suspect that it could be true and that there are a few different mechanisms through which it occurs:

Attention capture/distraction: Algorithmic governance technologies are designed to capture and direct our attention (time spent on device/app is a major metric of success for the companies creating these technologies). Once attention is captured, it is possible to fill people’s minds with content that either explicitly or implicitly reinforces the norms of surveillance capitalism, or that distracts us away from anything that might call those norms into question.

Personalisation and reward: Related to the above, algorithmic governance technologies try to customise themselves to an individual’s preference and reward system. This makes repeat engagement with the technologies as rewarding as possible for the individual, but the repeat engagement itself helps to further empower the system of surveillance capitalism. The degree of personalisation made possible by algorithmic governance technologies could be one of the things that makes them particularly adept at constitutive domination.

Learned helplessness: Because algorithmic governance technologies are rewarding and convenient, and because they often do enable people to achieve goals and satisfy preferences, people feel they have to just accept the conveniences of the system and the compromises it requires, e.g. they cannot have privacy and automated convenience at the same time. They must choose one or the other. They cannot resist the system all by themselves. In extreme form, this learned helplessness may translate into full-throated embrace of the compromises (e.g. cheerleading for a ‘post-privacy’ society).


Again, this is all somewhat speculative, but I think that through a combination of attention capture/distraction, personalisation and reward, and learned helplessness, algorithmic governance technologies could enable constitutive domination. In a similar vein, Brett Frischmann and Evan Selinger argue, in their recent book Re-Engineering Humanity, that digital technologies are ‘programming’ us to be unthinking and unreflective machines. They use digital contracting as one of the main examples of this, arguing that people just click and accept the terms of these contracts without ever really thinking about what they are doing. Programming us to not-think might be another way in which algorithmic governance technologies facilitate constitutive domination. The subjects of algorithmic domination have either been trained not to care about what is going on, or start to see it as a welcome, benign framework in which they can live their lives. This masks the underlying domination and extraction that is taking place.


3. Objections and Replies
What are the objections to all this? In addition to the objections discussed in the previous post, I can think of several, not all of which I will be able to address here, and there are probably many more of which I have not thought. I am happy to hear about them in the comments section. Nevertheless, allow me to address a few of the more obvious ones.

First, one could object to the radical republican theory itself. Is it really necessary? Don’t we already have perfectly good theoretical frameworks and concepts for understanding the phenomena that it purports to explain? For example, doesn’t the Marxist concept of exploitation adequately capture the problem of extractive domination? And don’t the concepts of false consciousness, or governmentality or Lukes’s third face of power all capture the problem of constitutive domination?

I have no doubt that this is true. There are often overlaps between different normative and political theories. But I think there is still some value to the domination framework. For one thing, I think it provides a useful, unifying conceptual label for the problems that would otherwise be labelled as ‘exploitation’, ‘false consciousness’ and so on. It suggests that these problems are all rooted in the same basic problem: domination. Furthermore, because of the way in which domination has been used to understand freedom, it is possible to tie these ‘radical’ concerns into more mainstream liberal debates about freedom and autonomy. I find this to be theoretically attractive and virtuous (see the previous post on micro-domination for more). Finally, because republicanism is a rich political tradition, with a fairly standardised package of preferred rules and policies, it is possible to use the domination framework to guide normative practice.

Second, one could argue that I have overstated the case when it comes to the algorithmic mechanisms of domination. The problems are not as severe as I claim. The interactions/transactions between users and surveillance capitalist companies are not ‘extractive’; they are win-win (as any good economist would argue). There are many other sources of constitutive domination and they may be far more effective than the algorithmic mechanisms to which I appeal; and there is a significant ‘status quo’ bias underlying the entire argument. The algorithmic mechanisms don’t threaten anything particularly problematic; they are just old problems in a new technological guise.

I am sympathetic to each of these claims. I have some intuitions that lead me to think the algorithmic mechanisms of domination might be particularly bad. For example, the degree of personalisation and customisation might enable far more effective forms of constitutive domination; and the ‘superstar’ nature of network economies might make the relationships more extractive than would be the case in a typical market transaction. But I think empirical work is needed to see whether the problems are as severe or serious as I seem to be suggesting.

Third, one could argue that the entire ‘radical’ framework rests upon an overly-simplified, binary view of society. The assumption driving my argument seems to be that the entire system is set up to follow the surveillance capitalist logic; that there is a dominant and univocal system of norms that reinforces that logic; and that you are either a dominator or a dominated, a master or a slave. Surely this is not accurate? Society is more multi-faceted than that. People flit in and out of different roles. Systems of norms and values are multivalent and often inconsistent. Some technologies empower; some disempower; some do a bit of both. You commit a fatal error if you assume it’s all-or-none, one or the other.

This is probably the objection to which I am most sympathetic. It seems to me that radical theorists often have a single ideological enemy (patriarchy; capitalism; neo-liberalism) and they interpret everything that happens through the lens of that ideological conflict. Anything that seems to be going wrong is traced back to the ideological enemy. It’s like a conspiracy-theory view of social order. This seems very disconnected from how I experience and understand the world. Nevertheless, there’s definitely a sense in which the arguments I have put forward in this post see algorithmic governance technologies through the lens of a single ideological enemy (surveillance capitalism) and assume that the technologies always serve that ideology. This could well be wrong. I think there are tendencies or intrinsic features of the technological infrastructure that favour that ideology (e.g. see Kevin Kelly’s arguments in his book The Inevitable), but there is more to it. The technology can be used to dismantle relationships of power too. Tracking and surveillance technologies, for example, have been used to document abuses of power and generate support for political projects that challenge dominant institutions. I just worry that these positive uses of technologies are overwhelmed by those that reinforce algorithmic domination.

Anyway, that brings me to the end of this post. I have tried to argue that Thompson’s radical republicanism, with its concepts of extractive and constitutive domination, can shed light on the challenges posed by algorithmic governance technologies. Combining the arguments in this post with the arguments in the previous post about algorithmic micro-domination suggests that the concept of domination can provide a useful, unifying framework for understanding the concerns people have about this technology. It gives us a common name for a common enemy.

* I include this qualification in recognition of the fact that there is some biological basis to those categories as well, and that this too sets boundaries on the nature of male-female relations.




Monday, June 18, 2018

Algorithmic Micro-Domination: Living with Algocracy



In April 2017, Siddhartha Mukherjee wrote an interesting article in the New Yorker. Titled ‘AI versus MD’ the article discussed the future of automated medicine. Automation is already rampant in medicine. There are algorithms for detecting and diagnosing disease, there are robotic arms and tools for helping with surgery, and there are some attempts at fully automated services. Mukherjee’s article pondered the future possibilities: Will machines ever completely replace doctors? Is that a welcome idea?

The whole article is worth reading, but one section of it, in particular, resonated with me. Mukherjee spoke to Sebastian Thrun, founder of Google X, who now dedicates his energies to automated diagnosis. Thrun’s mother died from metastatic breast cancer. She, like many others, was diagnosed too late. He became obsessed with creating technologies that would allow us to catch and diagnose diseases earlier — before it was too late. His motivations are completely understandable and, in their direct intention, completely admirable. But what would the world look like if we really went all-in on early, automated, disease detection? Mukherjee paints a haunting picture:

Thrun blithely envisages a world in which we’re constantly under diagnostic surveillance. Our cell phones would analyze shifting speech patterns to diagnose Alzheimer’s. A steering wheel would pick up incipient Parkinson’s through small hesitations and tremors. A bathtub would perform sequential scans as you bathe, via harmless ultrasound or magnetic resonance, to determine whether there’s a new mass in an ovary that requires investigation. Big Data would watch, record, and evaluate you: we would shuttle from the grasp of one algorithm to the next. To enter Thrun’s world of bathtubs and steering wheels is to enter a hall of diagnostic mirrors, each urging more tests.

And of course disease diagnosis is just the tip of the iceberg. So many of our activities can now be tracked and surveilled by smart devices. There is a vast ecosystem of apps out there for tracking our purchases, hours of work, physical activity, calories consumed, words read, and so on. If you can think of it, there is probably an app for tracking it. Some of these apps are voluntarily adopted; some of them are imposed upon us by employers and governments. Some of them simply track and log our behaviour; others try to go further and change our behaviour. We are not quite at the total digital panopticon yet. But we are not too far away.

How should we understand this emerging reality? Is it something we should fear? Prima facie, I can see much to welcome in Thrun’s world of diagnostic surveillance: it would surely be a good thing if we could detect diseases earlier and thereby increase the chances of recovery. But, of course, there is a dark side. Who controls the surveillance infrastructure? How much power will it or they have over our lives? Could the system be abused? What about those who want to be ‘offline’ — who don’t want to spend their lives shuttling from ‘the grasp of one algorithm to the next’?

In this post, I want to argue that the concept of domination (a concept taken from republican political theory) provides a useful way of understanding and confronting the challenge of the digital panopticon. This is not a wholly original idea. Indeed, I previously looked at an argument from two political theorists — Hoye and Monaghan — that made this very case. The originality of this post comes from an attempted modification/expansion of the concept of domination that I think sheds better light on the unique nature of algorithmic governance. This is the concept of ‘micro-domination’ that I adopt from some recent work done on disability and domination.

In what follows, I will explain what is meant by ‘micro-domination’, consider how it sheds light on the peculiar features of algorithmic governance, and then look at some criticisms of the idea. I’ll try to be brief. My goal in this post is to introduce an idea; not to provide a fully-rounded defence of it.


1. Non-Domination and Micro-Domination
First, some necessary background. Republicanism is a rich political and philosophical tradition. Its essential ideas date back to the ancient world, and can be found in the writings of Machiavelli and Rousseau. It has undergone something of a rebirth in the past half century thanks to the work of Quentin Skinner and Philip Pettit.

The central concept in republicanism is domination. Domination is the great evil that must be avoided in society. In its broad outline, domination describes a situation in which one individual or group of individuals exercises control over another. This leaves plenty of room for conceptual disagreement. Michael Thompson has recently argued for a ‘radical’ conception of domination that focuses on problems associated with hierarchical and unequal societies. He claims that this conception of domination is better able to confront the problems with power in capitalist societies. Better able than what? Better than the narrower conception of domination favoured by Pettit and Skinner that looks to domination to shed light on the nature of freedom. While I have some sympathy for Thompson’s view, and I hope to cover his radical conception of domination in a later piece, I’ll stick with the narrower, freedom-focused, conception of domination for the time being.

According to that conception, freedom is best understood as non-domination. An individual can be said to be free if he or she is not living under the arbitrary will of another, i.e. is not subject to their good graces or answerable to them. This conception of freedom is usually contrasted with the more popular liberal ideal of freedom as non-interference. According to this view, an individual can be said to be free if he or she is not being interfered with by another. Republicans like Pettit criticise this because they think it fails to capture all the relevant forms of unfreedom.

They usually make their case through simple thought experiments. One of Pettit’s favourites is the ‘Happy Slave’ thought experiment. He asks us to imagine a slave: someone who is legally owned and controlled by a slave-master. Suppose, however, that the slave-master is benevolent and the slave is happy to conform to their wishes. This means that they are not being interfered with: no one is cracking the whip or threatening them with violence if they step out of line. Are they free? Pettit says ‘no’ — of course they aren’t free. Their existence is the epitome of unfreedom, but their lack of freedom has nothing to do with the presence of interference. It has to do with the presence of domination. The master is ever present and could step in and impose their will on the slave at any moment.

A more philosophical way of putting this is to say that republicanism places a modal condition on freedom. It’s not enough for you to live an unmolested life in this actual world; you must live an unmolested life in a range of close, possible worlds. If you constantly live with the fear that someone might arbitrarily step in and impose their will on you, you can never really be free.

That’s the basic idea of freedom as non-domination. What about micro-domination? This is a concept I take from the work of Tom O’Shea. He has written a couple of papers that use the republican theory of freedom to analyse how different institutional and personal circumstances affect people with disabilities. All of what he has written is interesting and valuable, but I want to hone in on one aspect of it. One of the arguments that he makes is that people with disabilities often suffer from many small scale instances of domination. In other words, there are many choices they have to make in their lives which are subject to the arbitrary will of another. If they live in some institutional setting, or are heavily reliant on care and assistance from others, then large swathes of their daily lives may be dependent on the good will of others: when they wake up, when they go to the bathroom, when they eat, when they go outside, and so on. Taken individually, these cases may not seem all that serious, but aggregated together, they start to look like a more significant threat to freedom:

The result is often a phenomenon I shall call ‘micro-domination’: the capacity for decisions to be arbitrarily imposed on someone, which, individually, are too minor to be contested in a court or a tribunal, but which cumulatively have a major impact on their life.
(O’Shea 2018, 136)

O’Shea’s work continues from this to look at ways to resolve the problems of domination faced by persons with disabilities. I’m not going to go there. I want to turn to consider how the concept of micro-domination can shed light on the phenomenon of algorithmic governance. To do this I want to sharpen the concept of micro-domination by offering a more detailed definition/characterisation.

Micro-domination: Many small-scale, seemingly trivial, instances of domination where:
(a) Each instance is a genuine case of domination, i.e. it involves some subordination to the arbitrary will of another and some potential threat of their intervening if you step out of line (i.e. fail to conform with what they prefer).
(b) The aggregative effect of many such instances of micro-domination is significant, i.e. it is what results in a significant threat to individual freedom.

With this more detailed characterisation in mind, the question then becomes: does algorithmic governance involve micro-domination?


2. Algorithmic Micro-Domination
Let’s start by clarifying what is meant by algorithmic governance. I gave some sense of what this means in the introduction, but there is obviously more to it. In most of my writings and talks, I define algorithmic governance as the ‘state of being governed by algorithmically-controlled smart devices’. This algorithmic governance can come in many forms. Algorithms can recommend, nudge, manipulate, intervene and, in some cases, take over from individual behaviour.

You can probably think of many examples from your everyday life. Just this morning I was awoken by my sleep monitoring system. I use it every night to record my sleep patterns. Based on its observations, it sets an alarm that wakes me at the optimal time. When I reached my work desk, I quickly checked my social media feeds, where I was fed a stream of information that has been tailored to my preferences and interests. I was also encouraged to post an update to the people who follow me (“the 1000 people who follow you on Facebook haven’t heard from you in a while”). As I was settling into work, my phone buzzed with a reminder from one of my health and fitness apps to tell me that it was time to go for a run. Later in the day, when I was driving to a meeting across town, I used Google maps to plot my route. Sometimes, when I got off track, it recalculated and sent me in a new direction. I dutifully followed its recommendations. Whenever possible, I used the autopilot software on my car to save me some effort, but every now and then it prompted me to take control of the car because some obstacle appeared that it was not programmed to deal with.

I could multiply the examples, but you get the idea. Many small-scale, arguably trivial, choices in our everyday lives are now subject to algorithmic governance: what route to drive, who to talk to, when to exercise and so on. A network of devices monitors and tracks our behaviour and sends us prompts and reminders. This provides the infrastructure for a system of algorithmic micro-domination. Although we may not fully appreciate it, we are now the ‘subjects’ of many algorithmic masters. They surveil our lives and create a space of permissible/acceptable behaviour. Everything is fine if we stay within this space. We can live happy and productive lives (perhaps happier and more productive than our predecessors thanks to the algorithmic nudging), and to all intents and purposes, these lives may appear to be free. But if we step out of line we may be quick to realise the presence of the algorithmic masters.

‘Wait a minute’, I hear you say, ‘surely things aren’t that bad?’ It’s true that some of us voluntarily submit ourselves to algorithmic masters, but not all of us do. The description of my day suggests I am someone who is uniquely immersed in a system of algorithmic governance. My experiences are not representative. We have the option of switching off and disentangling ourselves from the web of algorithmic control.

Maybe so. I certainly wouldn’t want us to develop a narrative of helplessness around the scope and strength of algorithmic governance, but I think people who argue that we have the option of switching off may underestimate the pervasiveness of algorithmic control. Janet Vertesi’s experiences in trying to ‘hide’ her pregnancy from Big Data systems seem to provide a clear illustration of what can happen if you do opt out. Vertesi, an expert in Big Data, knew that online marketers and advertisers really like to know if women are pregnant. Writing in 2014, she noted that an average person’s marketing data is worth about 10 cents whereas a pregnant person’s data is worth about $1.50. She decided to conduct an experiment in which she would hide her own pregnancy from the online data miners. This turned out to be exceptionally difficult. She had to avoid all credit card transactions for pregnancy-related shopping. She had to implore her family and friends to avoid mentioning or announcing her pregnancy on social media. When her uncle breached this request by sending her a private message on Facebook, she deleted his messages and unfriended him (she spoke to him in private to explain why). In the end, her attempt to avoid algorithmic governance led to her behaviour being flagged as potentially criminal:

For months I had joked to my family that I was probably on a watch list for my excessive use of Tor and cash withdrawals. But then my husband headed to our local corner store to buy enough gift cards to afford a stroller listed on Amazon. There, a warning sign behind the cashier informed him that the store “reserves the right to limit the daily amount of prepaid card purchases and has an obligation to report excessive transactions to the authorities.”
It was no joke that taken together, the things I had to do to evade marketing detection looked suspiciously like illicit activities. All I was trying to do was to fight for the right for a transaction to be just a transaction, not an excuse for a thousand little trackers to follow me around. But avoiding the big-data dragnet meant that I not only looked like a rude family member or an inconsiderate friend, but I also looked like a bad citizen.
(Vertesi 2014)

The analogy with Pettit’s ‘Happy Slave’ thought experiment is direct and obvious. Vertesi wouldn’t have had any problems if she had lived her life within the space of permissible activity created by the system of algorithmically-controlled commerce. She wouldn’t have been interfered with or overtly sanctioned. By stepping outside that space, she opened herself up to interference. She was no longer tolerated by the system.

We can learn from her experience. Many of us may be happy to go along with the system as currently constituted, but that doesn’t mean that we are free. We are, in fact, subject to its algorithmic micro-domination.


3. Some Objections and Replies.
So the argument to this point is that modern systems of algorithmic governance give rise to algorithmic micro-domination. I think this is a useful way of understanding how these systems work and how they impact on our lives. But I’m sure that there are many criticisms to be made of this idea. For example, someone could argue that I am making too much of Vertesi’s experiences in trying to opt out. She is just one case study. I would need many more to prove that micro-domination is a widespread phenomenon. This is probably right, though my sense is that Vertesi’s experiences are indicative of a broader phenomenon (e.g. in academic hiring I would be extremely doubtful of any candidate who doesn’t have a considerable online presence). There are also two other objections that I think are worth raising here.

First, one could argue that algorithmic micro-domination is either misnamed or, alternatively, not a real instance of domination. One could argue that it is misnamed on the grounds that the domination is not really ‘algorithmic’ in nature. The algorithms are simply tools by which humans or human institutions exert control over the lives of others. It’s not the algorithms per se; it’s Facebook/Mark Zuckerberg (and others) that are the masters. There is certainly something to this, but the tools of domination are often just as important as the agents. The tools are what makes the domination possible and dictate its scope and strength. Algorithmic tools could give rise to new forms of domination. That is, indeed, the argument I am making by appealing to the notion of algorithmic ‘micro-domination’. That said, I think there is also something to the idea that algorithmic tools have a life of their own, i.e. are not fully under the control of their human creators. This is what Hoye and Monaghan argued in their original defence of algorithmic domination. They claimed that Big Data systems of governance were ‘functionally agentless’, i.e. it would be difficult to trace what they do to the instructions or actions of an individual human agent (or group of human agents). They felt that this created problems for the republican theory since domination is usually viewed as a human-to-human phenomenon. So if we accept that algorithmic governance systems can be functionally agentless we will need to expand the concept of domination to cover cases in which humans are not the masters. I don’t have a problem with that, but conceptual purists might.

Second, one could have doubts about the wisdom of expanding the concept of domination to cover ‘micro-domination’. Why get hung up on the small things? This is a criticism that is sometimes levelled at the analogous concept of a ‘micro-aggression’. A micro-aggression is a small-scale, everyday, verbal or behavioural act that communicates hostility towards minorities. It is often viewed as a clear manifestation of structural or institutional racism/discrimination. Examples of micro-aggressions include things like telling a person of colour that their English is very good, or asking them where they come from, or clutching your bag tightly when you walk past them, and so on. They are not cases of overt or explicit discrimination. But taken together they add up to something significant: they tell the person from the minority group that they are not welcome/they do not belong. Critics of the idea of micro-aggressions argue that it breeds hypersensitivity, involves an overinterpretation of behaviour, and can often be used to silence or shut down legitimate speech. This latter criticism is particularly prominent in ongoing debates about free speech on college campuses. I don’t want to wade into the debate about micro-aggressions. All I am interested in is whether similar criticisms could be levelled at the idea of micro-domination. I guess that they could. But I think the strength of such criticisms will depend heavily on whether there is something valuable that is lost through hypersensitivity to algorithmic domination. In the case of micro-aggressions, critics point to the value of free speech as something that is lost through hypersensitivity to certain behaviours. What is lost through hypersensitivity to algorithmic domination? Presumably, it is the efficiency and productivity that the algorithmic systems enable. Is the loss of freedom sufficient to outweigh those gains? I don’t have an answer right now, but it’s a question worth pursuing.

That’s where I shall leave it for now. As mentioned at the outset, my goal was to introduce an idea, not to provide a compelling defence of it. I’m interested in getting some feedback. Is the idea of algorithmic micro-domination compelling or useful? Are there other important criticisms of the idea? I’d be happy to hear about them in the comments section.




Monday, June 11, 2018

Legal Loopholes and Voting Paradoxes: A Theory



Nick Freeman is a well-known British lawyer. He rose to fame in the 1990s when he successfully defended a number of celebrity clients from dangerous driving prosecutions. He was particularly popular among footballers. His clients included Paul Ince, David Beckham and, perhaps most famously, Alex Ferguson. The case with Ferguson was notorious because of its somewhat scatological fact-pattern, and because Ferguson was the most high-profile football manager in the world at the time.

Ferguson was summonsed for speeding along the hard-shoulder of a clogged motorway. His excuse was that he desperately needed to use the bathroom due to an upset stomach he had been nursing from the previous day. He was stopped by the police and charged with an offence. He was in a tricky predicament since he already had a number of penalty points on his licence and being found guilty once more would put him off the road for a number of months.

Enter Freeman. Freeman knew that it was illegal to drive on the hard shoulder of a motorway, unless there was a medical emergency that justified doing so. Now, having a dodgy tummy might not be top of the list of justifying medical emergencies, and we might not look favourably on Ferguson if he set off on his journey knowing that there was a risk that this emergency might arise. But Freeman’s genius, such as it is, lay in arguing that Ferguson’s impending diarrhoea was indeed a justifying medical emergency and that Ferguson was not to be blamed for its sudden onset when he was stuck in the traffic jam. Freeman presented his case with such vigour that he eventually succeeded in getting Ferguson off.

This is typical of Freeman’s modus operandi. He uses an encyclopaedic knowledge of road traffic offences and criminal procedure to find obscure, relatively untested, arguments that benefit his clients. In other words, he finds ‘loopholes’ in the law. Indeed, so successful is he in doing this that he has been christened ‘Mr Loophole’ by the British tabloid press, a moniker he eventually, and somewhat reluctantly, took on for himself. His 2012 book The Art of the Loophole is a guidebook for anyone who wants to follow in his footsteps.

I’m not overly interested in Freeman and his practice, but I am interested in the general phenomenon of legal loopholes and why they arise. Anyone who has studied the law will know that they are pervasive and that the working life of the lawyer is often taken up in trying to find loopholes that work in favour of their clients. But the concept of a loophole is not well-defined, nor is the reason for their persistence well understood. Furthermore, the ethics of exploiting loopholes is hotly contested among lawyers and academics. I doubt I can resolve all those issues in this blogpost, but what I can do is share a theory of loopholes that has been defended by Leo Katz. I find Katz’s theory very interesting. It’s quite complex, relying as it does on an analogy between legal loopholes and voting paradoxes, but once you understand how it works it is quite illuminating. I hope to show why in what follows.


1. What is a Legal Loophole?
A legal loophole is one of those “you know it when you see it” phenomena. It’s difficult to offer a precise definition. If I were to try, I would say that a loophole is some vagueness or ambiguity in a rule, or conflict between two legal rules, that can be used to benefit someone in a seemingly perverse or counterintuitive way (in a way that violates the ‘spirit’ if not the ‘letter’ of the law). But this definition is problematic since it seems quite value-laden. It seems to presuppose that exploiting a loophole is unethical since it involves using the law to perverse ends. But oftentimes people who make use of loopholes don’t see it that way. They often think they are using the law to a legitimate end. Take the Alex Ferguson case as an example. You could argue — and I’m sure he and Nick Freeman would argue — that he was making a perfectly legitimate use of the medical exemption rule.

This value-ladenness is something that Katz tries to avoid in his theory of loopholes. As we will see below, he thinks that loopholes are inherent to the logical structure of legal doctrines. Specifically, he claims that they emerge from the fact that legal doctrines try to balance occasionally conflicting principles (e.g. people should obey the rules of the road; there should be some leeway for medical emergencies). He argues that they do not arise simply from a mismatch between the law’s purpose/rationale and its linguistic formulation. It’ll be easier to understand this if we have some working examples. Katz uses about half a dozen in his analysis. I will focus on just three:

Asset Protection: James is a well-to-do doctor who has made a number of misguided business investments. He fears that he will have to declare personal bankruptcy, which will mean that the majority of his personal assets can be seized and sold off by his creditors. However, there is a legal rule stating that certain types of asset are ‘exempt’ from personal bankruptcy rules and cannot be seized by creditors. These are assets that are deemed essential/necessary to life and include things like a family home, pension and insurance. James knows this so he uses his remaining wealth to purchase these exempt assets. This last-minute flurry of purchases triggers his bankruptcy, but he doesn’t mind as his assets are protected.

Contrived Self Defence: Samson’s wife and children were brutally assaulted in a home invasion by three armed robbers. Samson vows revenge. He tracks the three armed robbers and confronts them late at night in a park. They do not know who he is but he provokes them into attacking him with seemingly lethal force. Samson then fights back and ends up fatally wounding one of the attackers, while the other two flee. Samson’s lawyer successfully argues at trial that his client acted in self-defence. (Something akin to this happens in the Death Wish movies from the 1970s)

Political Asylum: Ivan has emigrated to the United States. He wants to be granted an immigrant visa as soon as possible. He could go through the ordinary channels but has been told that these are slow and he is unlikely to succeed. Someone tells him that the fastest route is to be granted political asylum, but this requires proof that one is a political refugee. Upon learning this, Ivan quickly uploads a series of videos to Youtube in which he is critical of the political leadership in his home country. The videos go viral. It is widely known that people who have made similar statements in the past have been executed or assassinated by the regime. Ivan uses this to fast track his immigration visa.

Each of these cases involves someone using legal rules to their advantage, but in a way that doesn’t quite sit right with us. They are classic examples of loophole exploitation. They are, of course, highly stylised and simplified. Lawyers will no doubt be quick to point out that legal systems have additional rules and qualifications that address these scenarios. This is indeed true. Courts and legislatures frequently try to prevent people abusing the law by adding new laws. For example, they might add an extra qualification to the rule about political asylum to state that the reasons for seeking political asylum have to arise before you land in the country in which you are seeking asylum, and/or that they have to come from a sincere political conviction. But qualifications like this are often themselves subject to further loophole exploitation, and it can be difficult to implement them successfully. So there is often a continuous arms race between the law-makers and the would-be exploiters. The deeper question is why does this keep happening?



2. The Voting Analogy
The answer, according to Katz, is that legal doctrines are subject to the same kinds of ‘paradoxes’ as voting systems. It’s long been known that voting systems are subject to all kinds of perverse and counterintuitive manipulations. A ‘voting system’ can be defined as any system that tries to aggregate individual preferences over options into a collective or group preference over the same option set. Suppose three friends have to choose between one of two activities to perform for the weekend: fishing or skydiving. They decide to vote. Each expresses their preference for fishing or skydiving and they go with whatever the majority preference happens to be. That’s a classic voting system in action.

But once you go beyond the confines of a simple majority vote on two options, you run into lots of problems. How you structure the voting system — Is it broken down into ‘rounds’? Do people vote on one preference or do they rank their preferences? — can make a big difference to the group outcome, often in ways that seem counterintuitive or perverse. Consider the following example, taken directly from Katz’s book:


Law School: Not too long ago, a certain law school had a problem with professors not marking their exam scripts on time. This meant that students weren’t getting their results on time and it was feared that it would have a knock-on impact on their ability to graduate. A group within the law school decided to do something about it. They introduced a proposal for a €100-a-day fine to be imposed on any professor who failed to submit their marks on time. A vote was to be taken on the proposal at the next faculty meeting. From informal conversations, it seemed that at least two-thirds of the faculty approved the fine, but there was one individual — the worst procrastinator in the group — who was resolutely opposed to it. Before the meeting, he talked to everybody and realised that there were three equally-sized coalitions/groups in the faculty:

Radicals: Wanted to impose a €1000-a-day fine, but would be satisfied with a €100-a-day fine.
Moderates: Wanted to impose a €100-a-day fine but would be opposed to anything higher (i.e. would prefer the status quo to what the Radicals wanted most)
Conservatives: Didn’t want to impose any fine, but felt that if a fine was to be imposed then the fine should be really high, i.e. at least €1000-a-day, in order to be maximally effective.

The opposer organised the preference rankings of the groups into the table below.

Group           1st preference   2nd preference   3rd preference
Radicals        €1000 fine       €100 fine        No fine
Moderates       €100 fine        No fine          €1000 fine
Conservatives   No fine          €1000 fine       €100 fine

He then realised that there was a way in which he could block the introduction of the €100 fine. Using a procedural rule in the Law School’s by-laws, he proposed that a vote first be taken on amending the proposal to raise the fine from €100 to €1000 and then that a vote be taken on whether or not to introduce the fine. The rest of the school agreed. On the first vote, the Radicals and Conservatives formed a two-thirds majority and approved the increased amount in the proposal. On the second vote, the Moderates and Conservatives formed a two-thirds majority and rejected the introduction of the fine. The opposer got his way.
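To make the mechanics concrete, here is a minimal Python sketch (my own, not Katz’s) of the two-stage vote, with each faction voting sincerely according to its stated preferences:

```python
# Options: 'A' = €100 fine, 'B' = €1000 fine, 'C' = status quo (no fine).
factions = {
    "Radicals":      ["B", "A", "C"],   # €1000 > €100 > no fine
    "Moderates":     ["A", "C", "B"],   # €100 > no fine > €1000
    "Conservatives": ["C", "B", "A"],   # no fine > €1000 > €100
}

def majority_prefers(x, y):
    """True if a majority of factions rank x above y."""
    votes = sum(1 for r in factions.values() if r.index(x) < r.index(y))
    return votes > len(factions) / 2

# A direct vote on the €100 fine (A) against the status quo (C)
# would pass two-to-one.
print(majority_prefers("A", "C"))   # True

# Agenda manipulation: first amend A -> B (B beats A two-to-one),
# then vote the amended proposal B against the status quo C.
on_the_table = "B" if majority_prefers("B", "A") else "A"
outcome = on_the_table if majority_prefers(on_the_table, "C") else "C"
print(outcome)   # 'C' -- the status quo survives; no fine is introduced
```

Routed through the amendment stage, the same sincere preferences that would have passed the €100 fine end up rejecting any fine at all.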

This is an example of a very famous voting paradox, first identified by the Marquis de Condorcet in the 18th century. If we label the three options facing the law faculty, we can begin to see the paradox more clearly. Call the introduction of a €100 fine ‘option A’; call the introduction of a €1000 fine ‘option B’; and call the status quo (i.e. no fine) ‘option C’. An ordinary ‘rule’ or ‘axiom’ of individual decision-making is that our preferences should be transitive, i.e. they should form a logically consistent hierarchy. If we prefer A to C and B to A then we should also, by logical inference, prefer B to C. If we turned around and said that we preferred C to B, then there would be something odd or inconsistent about our preferences. They would be intransitive. And yet this is exactly what is happening in the case of the Law School. Each individual has a logically consistent preference hierarchy, but the group as a whole does not. The group preferences are intransitive. We can see from the breakdown of the faculty preferences in the table above that there are (different) majority coalitions that prefer A to C, B to A, and C to B. It is this group intransitivity that can be exploited by our wily resolute opposer. He can manipulate the voting procedure so as to introduce a seemingly irrelevant third option (the €1000 fine) into the agenda and thereby unseat the majority coalition that favoured introducing the €100 fine.
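The intransitivity can be checked mechanically. In this short sketch (my own encoding, using the labels A, B and C as above), every individual ranking is transitive, yet the pairwise majority relation cycles:

```python
from itertools import permutations

# The three factions' rankings over A (€100 fine), B (€1000 fine),
# C (status quo), as in the Law School example.
rankings = [
    ["B", "A", "C"],   # Radicals
    ["A", "C", "B"],   # Moderates
    ["C", "B", "A"],   # Conservatives
]

def group_prefers(x, y):
    """True if a majority of individual rankings put x above y."""
    return sum(r.index(x) < r.index(y) for r in rankings) > len(rankings) / 2

# Collect every ordered pair (x, y) where the group majority prefers x to y.
pairs = [(x, y) for x, y in permutations("ABC", 2) if group_prefers(x, y)]
print(pairs)   # [('A', 'C'), ('B', 'A'), ('C', 'B')] -- a Condorcet cycle
```

The group prefers A to C, B to A, and C to B: no option beats every other, so there is no stable ‘group favourite’ for the agenda to converge on.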

Of course, this paradox arises from the vagaries of the particular voting system adopted by the Law School. You might think that another voting system would not be vulnerable to this problem. This is true, but only up to a point. There is another famous theorem from voting theory — Arrow’s impossibility theorem — which shows that any democratic voting system we might hope to create will be vulnerable to one or more paradoxes of this sort. The only voting system that completely avoids paradoxes is a dictatorship (where the preferences of one individual dictate the group preference), which of course is not really a voting system, except in some strict logical sense. You might like to know more about Arrow’s theorem. If so, I’d recommend reading Amartya Sen’s recent explanation of it, or indeed Katz’s simplified presentation of it in his book. I won’t go into it here because it is too complex and, in any event, I don’t think it is strictly necessary. If you understand the paradox that arises in the Law School example then you have pretty much everything you need to understand Katz’s theory of loopholes.


3. How Voting Paradoxes Explain Legal Loopholes
Katz’s theory claims that legal loopholes arise for the same reason that voting paradoxes arise. To accept Katz’s theory you need to accept three propositions. I’ll go through each of them in some detail.

Proposition One: Multi-criterial decision-making systems are like voting systems.

This is the critical first step in the argument. It requires some unpacking. Recall the earlier definition of a voting system: it is something that aggregates the preference rankings of individuals into a group preference ranking. How is that like a multi-criterial decision-making system? Well, first, think in more detail about a multi-criterial decision. Suppose you have to decide whether to take up a new job or stick with your old job. How would you make that decision? If you are like me, then you would use multiple criteria to help you decide. You would focus on the salary offer, the likely working conditions, the commuting time, the work-life balance made possible by the job, and so on. Each of these criteria can be used to rank the options before you. The salary criterion might rank the new job above the old job; the work-life balance criterion might rank the old job above the new job; and so on. Once you have established the ranking orders for each criterion, you’ll have to aggregate them together into a single choice. This is directly analogous to what happens in a voting system. The criteria are like voters: they each have their own preference ranking. The decision is like the group preference: it is what emerges from the amalgamation and aggregation of the individual preference rankings.
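The analogy can be made vivid with a toy sketch (the jobs, criteria and aggregation rule here are my own invention, purely for illustration). Each criterion ‘votes’ by ranking the options, and a simple majority of criteria determines the decision:

```python
# Each decision-making criterion ranks the two jobs, best first,
# acting like a voter in a miniature election.
criteria_rankings = {
    "salary":            ["new_job", "old_job"],
    "work_life_balance": ["old_job", "new_job"],
    "commute":           ["old_job", "new_job"],
}

def decide(options):
    """Pick the option ranked first by the most criteria."""
    tally = {o: 0 for o in options}
    for ranking in criteria_rankings.values():
        tally[ranking[0]] += 1
    return max(tally, key=tally.get)

print(decide(["new_job", "old_job"]))   # 'old_job' -- two criteria to one
```

With three or more options, an aggregation rule like this inherits exactly the cycling and agenda-sensitivity problems of the Law School vote.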

Of course, the analogy isn’t perfect. We often assign different weights to different criteria whereas in democratic voting systems we usually stick to a one-person-one-vote principle (though weighting is common in voting systems more generally). Furthermore, as Katz notes, decision-making criteria aren’t strategic whereas voters (sometimes) are. In other words, criteria don’t change their preference ranking in order to manipulate the final decision. But voters often do this because they anticipate and pre-empt the voting behaviour of others. Nevertheless these disanalogies don’t upset the argument that much. Indeed, Arrow himself developed a multi-criterial decision-making version of his impossibility theorem around the same time that he came up with the voting version. So the connection between the two phenomena has long been recognised.

This brings us to the second proposition:

Proposition 2: Legal rules/doctrines are like multicriterial decision-making systems.

This means that individual legal rules or doctrines often try to aggregate multiple decision-making criteria. Specifically, they try to aggregate different ethical criteria or policy criteria. Consider some of the rules/doctrines from the examples given earlier in this post. The self-defence rule, for example, has a number of elements to it. It entitles you to use lethal force to repel a seemingly lethal attack, but there are usually limitations to its use. The force has to be proportionate/necessary. We don’t want people killing each other willy-nilly. If less force could be used to repel the attack, or if you could avoid the attack completely by retreating, we usually prefer it if you do so. At the same time, we recognise that people have a right to defend their own rights: to stand their ground and protect themselves if someone else is brutally attacking them. The self-defence rule has to balance these two ethical principles. It has to allow people the right to defend themselves (and therefore respect the ‘rights principle’) and it has to make sure people don’t abuse this right by applying excessive/disproportionate force (and therefore respect the ‘proportionality principle’). Something similar is true in the case of the Asset Protection example given above. The relevant legal doctrine has to balance the right for creditors to be repaid what they are owed against the right/desirability of not depriving people of assets that are essential to their well-being. These principles can, on occasion, rank different actions in different ways. The job of the legal rule/doctrine is to help us to aggregate the rankings together and come up with the correct legal decision.

We now have everything we need to complete Katz’s argument:

Proposition 3: Because legal rules/doctrines are like multicriterial decision-making systems, and because multicriterial decision-making systems are like voting systems, they are vulnerable to the same kinds of paradoxes or perverse manipulations. These are what we call ‘legal loopholes’.

How do we get from the first two propositions to this? The gist of the argument is simply that multi-criterial decision-making systems are vulnerable to the same kinds of manipulative acts as voting systems. Go back to the earlier example of the Law School Vote. We saw there how one resolute procrastinator was able to defy the majority preference for some kind of fine to be introduced by manipulating the agenda of the vote. He did this by introducing a seemingly irrelevant third alternative (the €1000 a day fine) into the voting system. We should, of course, be cautious about how we use the term ‘irrelevant’ in this context. The term is adopted from decision theory and does not necessarily track with ordinary usage. In one sense, the introduction of the €1000-a-day option is very relevant: some people prefer it to the €100 a day option. But in another sense it is irrelevant: if group preferences were transitive, you wouldn’t expect its introduction to alter the relative ranking of the €100 a day fine and the status quo. And yet it does. By manipulating the agenda of the vote, the resolute procrastinator can ensure that it makes an absolutely critical difference. It flips the relative ranking of those two options, allowing the status quo to win out. Katz argues that this really shows that seemingly ‘irrelevant’ alternatives are actually much more relevant than initially suspected.

The question is whether something similar can happen with legal doctrines. Katz argues that it does. Sometimes, if we can introduce a seemingly irrelevant alternative into the picture, they can alter the decision. The self defence doctrine is a good illustration of this. In some cases of self defence, you don’t have the opportunity to safely retreat from the lethal attack. In these cases, you basically have two options: either you stay and are killed by your attacker, or you stay and fight back, killing your attacker. According to the law, both options are equally acceptable (i.e. both are legally permissible) from your perspective (what the attacker is doing to you may be legally impermissible but that is a separate question). Another way of putting it is that in this case, the proportionality principle and the rights principle point to the same legal evaluation. In other self-defence cases, you may have, in addition to the option of staying and being killed and staying and killing, the option of reasonable retreat. In these cases, the legal evaluation of the options is very different. Suddenly, the once legally permissible option of staying and killing your attacker might seem legally impermissible. Why didn’t you retreat when you had the chance? Katz argues that what is happening in this case is that the principles underlying the self defence doctrine rank the options differently: the rights principle says that standing your ground is permissible; the proportionality principle does not. We need to break the deadlock between them — to aggregate the different rankings into a legal decision — and so we (or, rather, most jurisdictions) allow the proportionality principle to win the day when the option of reasonable retreat is on the table.
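The structure of this point can be modelled in a toy sketch (my own reconstruction, not Katz’s): the two principles each evaluate the act of standing and killing, given the set of options actually available, and a simple aggregation rule lets proportionality break any deadlock.

```python
def rights_principle(options):
    # You may always stand your ground against a lethal attack.
    return "permissible"

def proportionality_principle(options):
    # Lethal force is disproportionate if a safe retreat was available.
    return "impermissible" if "retreat" in options else "permissible"

def legal_evaluation(options):
    """Aggregate the two principles; proportionality wins any deadlock."""
    verdicts = {rights_principle(options), proportionality_principle(options)}
    return "impermissible" if "impermissible" in verdicts else "permissible"

# No retreat available: both principles agree.
print(legal_evaluation({"stand_and_kill", "be_killed"}))             # permissible

# Adding the 'irrelevant' retreat option flips the evaluation.
print(legal_evaluation({"stand_and_kill", "be_killed", "retreat"}))  # impermissible
```

Adding or removing the retreat option flips the legal evaluation of the very same act, which is exactly the kind of option-set sensitivity the resolute procrastinator exploited in the Law School vote.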



The claim is that this is directly analogous to what happens in a voting system. Someone who wants to use the law to suit their purposes can manipulate contexts so that certain options are on the table (or not) and thus take advantage of the different rankings assigned to those options by the different underlying doctrines. That is what Samson is doing in the contrived self-defence case: by confronting his attackers in a park late at night he is taking reasonable retreat off the table. That is what James is doing in the asset protection case: by purchasing the exempt assets he is taking the option of seizing and selling off his assets off the table. And that is what Ivan is doing in the political asylum case: by making his videos and speaking out against the regime in his home country, he is taking the option of returning to his home country and living an unmolested life off the table.

Clever lawyers can help individuals manipulate the agenda of legal decision-making in similar ways by advising them on how to limit or open up new options, or by providing evidence to support claims to the effect that certain options were or were not available to them. What’s more, following Arrow’s insights into voting, it would seem to follow that loophole exploitation of this sort is inevitable if the law is trying to aggregate different ethical/policy criteria. You can never completely eliminate loopholes from the law; they are inherent to the logic of legal decision-making.


4. Conclusion
That brings us to the end of this post. To briefly recap, loopholes are common and persistent phenomena in the law. The job of the lawyer is often conceived in terms of exploiting loopholes on behalf of their clients. I’ve been outlining Leo Katz’s theory of legal loopholes. This theory argues that legal loopholes are directly analogous to voting paradoxes. Just as voting paradoxes arise when we try to aggregate individual preference rankings into a group preference ranking; so too do legal loopholes arise when we try to aggregate the rankings assigned by different underlying ethical or policy principles into a single legal evaluation.

I like Katz’s theory because it draws an interesting connection between two seemingly disparate areas of social life (voting and legal decision-making). Intertheoretic unification of this sort is usually thought to be a virtue. That said, I am also drawn to it because it is quite elaborate and theoretically sophisticated. But neither of these things is necessarily a virtue. One could argue that Katz’s theory is too clever by half and that a much simpler explanation of loopholes is possible. Also, I certainly haven’t tested to see whether it explains every putative case of a legal loophole. Indeed, I would worry that in the end it may not explain loopholes so much as redefine them (maybe in part because loopholes are not particularly well-defined in the first place).

Alas, I’ll have to leave those issues unresolved. I offer Katz’s theory for your consideration and leave you to play around with the details. If you would like to learn more, I would recommend reading Katz’s full explanation of his theory. It fleshes out the analogy between legal decision-making and voting in far more detail than I provided here.