
Monday, June 18, 2018

Algorithmic Micro-Domination: Living with Algocracy



In April 2017, Siddhartha Mukherjee wrote an interesting article in the New Yorker. Titled ‘AI versus MD’, the article discussed the future of automated medicine. Automation is already rampant in medicine. There are algorithms for detecting and diagnosing disease, there are robotic arms and tools for helping with surgery, and there are some attempts at fully automated services. Mukherjee’s article pondered the future possibilities: Will machines ever completely replace doctors? Is that a welcome idea?

The whole article is worth reading, but one section of it, in particular, resonated with me. Mukherjee spoke to Sebastian Thrun, founder of Google X, who now dedicates his energies to automated diagnosis. Thrun’s mother died from metastatic breast cancer. She, like many others, was diagnosed too late. He became obsessed with creating technologies that would allow us to catch and diagnose diseases earlier — before it was too late. His motivations are completely understandable and, in their direct intention, completely admirable. But what would the world look like if we really went all-in on early, automated, disease detection? Mukherjee paints a haunting picture:

Thrun blithely envisages a world in which we’re constantly under diagnostic surveillance. Our cell phones would analyze shifting speech patterns to diagnose Alzheimer’s. A steering wheel would pick up incipient Parkinson’s through small hesitations and tremors. A bathtub would perform sequential scans as you bathe, via harmless ultrasound or magnetic resonance, to determine whether there’s a new mass in an ovary that requires investigation. Big Data would watch, record, and evaluate you: we would shuttle from the grasp of one algorithm to the next. To enter Thrun’s world of bathtubs and steering wheels is to enter a hall of diagnostic mirrors, each urging more tests.

And of course disease diagnosis is just the tip of the iceberg. So many of our activities can now be tracked and surveilled by smart devices. There is a vast ecosystem of apps out there for tracking our purchases, hours of work, physical activity, calories consumed, words read, and so on. If you can think of it, there is probably an app for tracking it. Some of these apps are voluntarily adopted; some of them are imposed upon us by employers and governments. Some of them simply track and log our behaviour; others try to go further and change our behaviour. We are not quite at the total digital panopticon yet. But we are not too far away.

How should we understand this emerging reality? Is it something we should fear? Prima facie, I can see much to welcome in Thrun’s world of diagnostic surveillance: it would surely be a good thing if we could detect diseases earlier and thereby increase the chances of recovery. But, of course, there is a dark side. Who controls the surveillance infrastructure? How much power will it or they have over our lives? Could the system be abused? What about those who want to be ‘offline’ — who don’t want to spend their lives shuttling from ‘the grasp of one algorithm to the next’?

In this post, I want to argue that the concept of domination (a concept taken from republican political theory) provides a useful way of understanding and confronting the challenge of the digital panopticon. This is not a wholly original idea. Indeed, I previously looked at an argument from two political theorists — Hoye and Monaghan — that made this very case. The originality of this post comes from an attempted modification/expansion of the concept of domination that I think sheds better light on the unique nature of algorithmic governance. This is the concept of ‘micro-domination’ that I adopt from some recent work done on disability and domination.

In what follows, I will explain what is meant by ‘micro-domination’, consider how it sheds light on the peculiar features of algorithmic governance, and then look at some criticisms of the idea. I’ll try to be brief. My goal in this post is to introduce an idea, not to provide a fully-rounded defence of it.


1. Non-Domination and Micro-Domination
First, some necessary background. Republicanism is a rich political and philosophical tradition. Its essential ideas date back to the ancient world, and can be found in the writings of Machiavelli and Rousseau. It has undergone something of a rebirth in the past half century thanks to the work of Quentin Skinner and Philip Pettit.

The central concept in republicanism is domination. Domination is the great evil that must be avoided in society. In its broad outline, domination describes a situation in which one individual or group of individuals exercises control over another. This leaves plenty of room for conceptual disagreement. Michael Thompson has recently argued for a ‘radical’ conception of domination that focuses on problems associated with hierarchical and unequal societies. He claims that this conception of domination is better able to confront the problems with power in capitalist societies. Better able than what? Better than the narrower conception of domination favoured by Pettit and Skinner that looks to domination to shed light on the nature of freedom. While I have some sympathy for Thompson’s view, and I hope to cover his radical conception of domination in a later piece, I’ll stick with the narrower, freedom-focused, conception of domination for the time being.

According to that conception, freedom is best understood as non-domination. An individual can be said to be free if he or she is not living under the arbitrary will of another, i.e. is not subject to their good graces or answerable to them. This conception of freedom is usually contrasted with the more popular liberal ideal of freedom as non-interference. According to this view, an individual can be said to be free if he or she is not being interfered with by another. Republicans like Pettit criticise this because they think it fails to capture all the relevant forms of unfreedom.

They usually make their case through simple thought experiments. One of Pettit’s favourites is the ‘Happy Slave’ thought experiment. He asks us to imagine a slave: someone who is legally owned and controlled by a slave-master. Suppose, however, that the slave-master is benevolent and the slave is happy to conform to their wishes. This means that the slave is not being interfered with: no one is cracking the whip or threatening them with violence if they step out of line. Is the slave free? Pettit says ‘no’ — of course they aren’t free. Their existence is the epitome of unfreedom, but their lack of freedom has nothing to do with the presence of interference. It has to do with the presence of domination. The master is ever present and could step in and impose their will on the slave at any moment.

A more philosophical way of putting this is to say that republicanism places a modal condition on freedom. It’s not enough for you to live an unmolested life in this actual world; you must live an unmolested life in a range of close, possible worlds. If you constantly live with the fear that someone might arbitrarily step in and impose their will on you, you can never really be free.

That’s the basic idea of freedom as non-domination. What about micro-domination? This is a concept I take from the work of Tom O’Shea. He has written a couple of papers that use the republican theory of freedom to analyse how different institutional and personal circumstances affect people with disabilities. All of what he has written is interesting and valuable, but I want to home in on one aspect of it. One of the arguments that he makes is that people with disabilities often suffer from many small-scale instances of domination. In other words, there are many choices they have to make in their lives which are subject to the arbitrary will of another. If they live in some institutional setting, or are heavily reliant on care and assistance from others, then large swathes of their daily lives may be dependent on the good will of others: when they wake up, when they go to the bathroom, when they eat, when they go outside, and so on. Taken individually, these cases may not seem all that serious, but aggregated together, they start to look like a more significant threat to freedom:

The result is often a phenomenon I shall call ‘micro-domination’: the capacity for decisions to be arbitrarily imposed on someone, which, individually, are too minor to be contested in a court or a tribunal, but which cumulatively have a major impact on their life.
(O’Shea 2018, 136)

O’Shea’s work continues from this to look at ways to resolve the problems of domination faced by persons with disabilities. I’m not going to go there. I want to turn to consider how the concept of micro-domination can shed light on the phenomenon of algorithmic governance. To do this I want to sharpen the concept of micro-domination by offering a more detailed definition/characterisation.

Micro-domination: Many small-scale, seemingly trivial, instances of domination where:
(a) Each instance is a genuine case of domination, i.e. it involves some subordination to the arbitrary will of another and some potential threat of their intervening if you step out of line (i.e. fail to conform with what they prefer).
(b) The aggregative effect of many such instances of micro-domination is significant, i.e. it is what results in a significant threat to individual freedom.

With this more detailed characterisation in mind, the question then becomes: does algorithmic governance involve micro-domination?


2. Algorithmic Micro-Domination
Let’s start by clarifying what is meant by algorithmic governance. I gave some sense of what this means in the introduction, but there is obviously more to it. In most of my writings and talks, I define algorithmic governance as the ‘state of being governed by algorithmically-controlled smart devices’. This algorithmic governance can come in many forms. Algorithms can recommend, nudge, manipulate, intervene and, in some cases, take over from individuals entirely.

You can probably think of many examples from your everyday life. Just this morning I was awoken by my sleep monitoring system. I use it every night to record my sleep patterns. Based on its observations, it sets an alarm that wakes me at the optimal time. When I reached my work desk, I quickly checked my social media feeds, where I was fed a stream of information tailored to my preferences and interests. I was also encouraged to post an update to the people who follow me (“the 1000 people who follow you on Facebook haven’t heard from you in a while”). As I was settling into work, my phone buzzed with a reminder from one of my health and fitness apps to tell me that it was time to go for a run. Later in the day, when I was driving to a meeting across town, I used Google Maps to plot my route. Sometimes, when I went off track, it recalculated and sent me in a new direction. I dutifully followed its recommendations. Whenever possible, I used the autopilot software on my car to save me some effort, but every now and then it prompted me to take control of the car because some obstacle appeared that it was not programmed to deal with.

I could multiply the examples, but you get the idea. Many small-scale, arguably trivial, choices in our everyday lives are now subject to algorithmic governance: what route to drive, who to talk to, when to exercise, and so on. A network of devices monitors and tracks our behaviour and sends us prompts and reminders. This provides the infrastructure for a system of algorithmic micro-domination. Although we may not fully appreciate it, we are now the ‘subjects’ of many algorithmic masters. They surveil our lives and create a space of permissible/acceptable behaviour. Everything is fine if we stay within this space. We can live happy and productive lives (perhaps happier and more productive than our predecessors, thanks to the algorithmic nudging), and to all intents and purposes, these lives may appear to be free. But if we step out of line we quickly become aware of the presence of the algorithmic masters.

‘Wait a minute’, I hear you say, ‘surely things aren’t that bad?’ It’s true that some of us voluntarily submit ourselves to algorithmic masters, but not all of us do. The description of my day suggests I am someone who is unusually immersed in a system of algorithmic governance. My experiences are not representative. We have the option of switching off and disentangling ourselves from the web of algorithmic control.

Maybe so. I certainly wouldn’t want us to develop a narrative of helplessness around the scope and strength of algorithmic governance, but I think people who argue that we have the option of switching off may underestimate the pervasiveness of algorithmic control. Janet Vertesi’s experiences in trying to ‘hide’ her pregnancy from Big Data systems seem to provide a clear illustration of what can happen if you do opt out. Vertesi, an expert in Big Data, knew that online marketers and advertisers really like to know if women are pregnant. Writing in 2014, she noted that an average person’s marketing data is worth about 10 cents, whereas a pregnant person’s data is worth about $1.50. She decided to conduct an experiment in which she would hide her own pregnancy from the online data miners. This turned out to be exceptionally difficult. She had to avoid all credit card transactions for pregnancy-related shopping. She had to implore her family and friends to avoid mentioning or announcing her pregnancy on social media. When her uncle breached this request by sending her a private message on Facebook, she deleted his messages and unfriended him (she spoke to him in private to explain why). In the end, her attempt to avoid algorithmic governance led to her behaviour being flagged as potentially criminal:

For months I had joked to my family that I was probably on a watch list for my excessive use of Tor and cash withdrawals. But then my husband headed to our local corner store to buy enough gift cards to afford a stroller listed on Amazon. There, a warning sign behind the cashier informed him that the store “reserves the right to limit the daily amount of prepaid card purchases and has an obligation to report excessive transactions to the authorities.”
It was no joke that taken together, the things I had to do to evade marketing detection looked suspiciously like illicit activities. All I was trying to do was to fight for the right for a transaction to be just a transaction, not an excuse for a thousand little trackers to follow me around. But avoiding the big-data dragnet meant that I not only looked like a rude family member or an inconsiderate friend, but I also looked like a bad citizen.
(Vertesi 2014)

The analogy with Pettit’s ‘Happy Slave’ thought experiment is direct and obvious. Vertesi wouldn’t have had any problems if she had lived her life within the space of permissible activity created by the system of algorithmically-controlled commerce. She wouldn’t have been interfered with or overtly sanctioned. By stepping outside that space, she opened herself up to interference. She was no longer tolerated by the system.

We can learn from her experience. Many of us may be happy to go along with the system as currently constituted, but that doesn’t mean that we are free. We are, in fact, subject to its algorithmic micro-domination.


3. Some Objections and Replies
So the argument to this point is that modern systems of algorithmic governance give rise to algorithmic micro-domination. I think this is a useful way of understanding how these systems work and how they impact on our lives. But I’m sure that there are many criticisms to be made of this idea. For example, someone could argue that I am making too much of Vertesi’s experiences in trying to opt out. She is just one case study. I would need many more to prove that micro-domination is a widespread phenomenon. This is probably right, though my sense is that Vertesi’s experiences are indicative of a broader phenomenon (e.g. in academic hiring I would be extremely doubtful of any candidate who doesn’t have a considerable online presence). There are also two other objections that I think are worth raising here.

First, one could argue that algorithmic micro-domination is either misnamed or, alternatively, not a real instance of domination. One could argue that it is misnamed on the grounds that the domination is not really ‘algorithmic’ in nature. The algorithms are simply tools by which humans or human institutions exert control over the lives of others. It’s not the algorithms per se; it’s Facebook/Mark Zuckerberg (and others) that are the masters. There is certainly something to this, but the tools of domination are often just as important as the agents. The tools are what make the domination possible and dictate its scope and strength. Algorithmic tools could give rise to new forms of domination. That is, indeed, the argument I am making by appealing to the notion of algorithmic ‘micro-domination’. That said, I think there is also something to the idea that algorithmic tools have a life of their own, i.e. are not fully under the control of their human creators. This is what Hoye and Monaghan argued in their original defence of algorithmic domination. They claimed that Big Data systems of governance were ‘functionally agentless’, i.e. it would be difficult to trace what they do to the instructions or actions of an individual human agent (or group of human agents). They felt that this created problems for the republican theory, since domination is usually viewed as a human-to-human phenomenon. So if we accept that algorithmic governance systems can be functionally agentless, we will need to expand the concept of domination to cover cases in which humans are not the masters. I don’t have a problem with that, but conceptual purists might.

Second, one could have doubts about the wisdom of expanding the concept of domination to cover ‘micro-domination’. Why get hung up on the small things? This is a criticism that is sometimes levelled at the analogous concept of a ‘micro-aggression’. A micro-aggression is a small-scale, everyday, verbal or behavioural act that communicates hostility towards minorities. It is often viewed as a clear manifestation of structural or institutional racism/discrimination. Examples of micro-aggressions include things like telling a person of colour that their English is very good, or asking them where they come from, or clutching your bag tightly when you walk past them, and so on. They are not cases of overt or explicit discrimination. But taken together they add up to something significant: they tell the person from the minority group that they are not welcome/they do not belong. Critics of the idea of micro-aggressions argue that it breeds hypersensitivity, involves an overinterpretation of behaviour, and can often be used to silence or shut down legitimate speech. This latter criticism is particularly prominent in ongoing debates about free speech on college campuses. I don’t want to wade into the debate about micro-aggressions. All I am interested in is whether similar criticisms could be levelled at the idea of micro-domination. I guess that they could. But I think the strength of such criticisms will depend heavily on whether there is something valuable that is lost through hypersensitivity to algorithmic domination. In the case of micro-aggressions, critics point to the value of free speech as something that is lost through hypersensitivity to certain behaviours. What is lost through hypersensitivity to algorithmic domination? Presumably, it is the efficiency and productivity that the algorithmic systems enable. Is the loss of freedom sufficient to outweigh those gains? I don’t have an answer right now, but it’s a question worth pursuing.

That’s where I shall leave it for now. As mentioned at the outset, my goal was to introduce an idea, not to provide a compelling defence of it. I’m interested in getting some feedback. Is the idea of algorithmic micro-domination compelling or useful? Are there other important criticisms of the idea? I’d be happy to hear about them in the comments section.




1 comment:

  1. Hi! I'm curious: can't we say social norms/mores are forms of "micro-domination"? And since I'm not a philosopher but like to play one in my own mind :), I would have thought this whole concept has already been well worked out? I know my 14-year-old son would totally think he is "micro-dominated" every day by society (of course mostly through his parents) telling him to brush his teeth, comb his hair, wear deodorant, etc.!

    (Aside: Assuming "micro-dominance" has anything useful to say, I wonder what Pettit would think? Wouldn't he have to conclude that we are continually dominated and the only way out is not actually to have a society?)
