(Related post: Algorithmic Micro-Domination)
In a recent post, I argued that the republican concept of ‘domination’ could be usefully deployed to understand the challenge that algorithmic governance systems pose to individual freedom. To be more precise, I argued that a modification of that concept — to include instances of ‘micro-domination’ — provided both a descriptively accurate and normatively appropriate basis for understanding the challenge.
In making this argument, I was working with a conception of domination that tries to shed light on what it means to be a free citizen. This is the ‘freedom as non-domination’ idea that has been popularised by Philip Pettit. But Pettit’s conception of domination has been challenged by other republican theorists. Michael Thompson, for example, has recently written a paper entitled ‘Two Faces of Domination in Republican Political Theory’ that argues that Pettit’s view is too narrow and fails to address the forms of domination that plague modern societies. Thompson favours a ‘radical’ conception of domination that focuses less on freedom, and more on inequality of power in capitalist societies. He claims that this conception is more in keeping with the views of republican writers like Machiavelli and Rousseau, and more attuned to the realities of our age.
In this post, I want to argue that Thompson’s radical republican take on domination can also be usefully deployed to make sense of the challenges posed by algorithmic governance. In doing so, I hope to provide further support for the claim that the concept of domination provides a unifying theoretical framework for understanding and addressing this phenomenon.
Thompson’s ‘radical republicanism’ focuses on two specific forms of domination that play a critical role in the organisation of modern societies. They are: (i) extractive domination; and (ii) constitutive domination. In what follows, I will explain both of these forms of domination, trying to stay true to Thompson’s original presentation, and then outline the mechanisms through which algorithmic governance technologies facilitate or enable them. The gist of my argument is that algorithmic governance technologies are particularly good at doing this. I will close by addressing some objections to my view.
1. Extractive Algorithmic Domination
Domination is a kind of power. It arises from asymmetrical relationships between two or more individuals or groups of individuals. The classic example of such an asymmetrical relationship is that between a slave and his/her master. Indeed, this relationship is the example that Pettit uses to illustrate and explain his conception of freedom as non-domination. His claim is that one of the key properties of this relationship is that the slave can never be free, no matter how kind or benevolent the master is. The reason for this is that you cannot be free if you live subject to the arbitrary will of another.
The problem with Pettit’s view is that it overlooks other manifestations and effects of asymmetrical relationships. Pettit sees domination as an analytical concept that sheds light on the nature of free choice. But surely there is more to it than that? To live in a state of domination is not simply to have one’s freedom undermined. It is also to have one’s labour/work exploited and one’s mind controlled by a set of values and norms. Pettit hints at these things in his work but never makes them central to his analysis. Thompson’s radical republicanism does.
The first way it does this is through the idea of extractive domination:
Extractive Domination: Arises when A is in a structural relation with B whose purpose is to enable A to extract a surplus benefit from B, where:
‘a structural relation’ = a relationship defined by social roles and social norms; and
‘a surplus benefit’ = a benefit (usually in the form of physical, cognitive and emotional labour) that would otherwise have benefited B or the wider community but instead flows to A for A’s benefit.
The master-slave relation provides an example of this: it is a relationship defined by social roles and norms, and those norms enable masters to extract surplus benefit (physical labour) from the slave. Other examples would include capitalist-worker relations (in a capitalist society) and male-female relations (under conditions of patriarchy). Male-female relations provide an interesting, if controversial, example. What it means to be ‘male’ or ‘female’ is defined (at least in part)* by social norms, expectations, and values. Thus a relationship between a man and a woman is (at least in part) constituted by those norms, expectations and values. Under conditions of patriarchy, these norms, expectations and values enable men to extract surplus benefits from women, particularly in the form of sexual labour and domestic labour. In the case of sex, the norms and values are directed at male sexual pleasure as opposed to female sexual pleasure; in the case of domestic labour, the woman’s unpaid work provides the foundation for the man to live a ‘successful’ life. Of course, this take on male-female relations is actively resisted by some, and I’ve only provided a simplistic sketch of what it means to live under conditions of patriarchy. Nevertheless, I hope it gives a clearer sense of what is meant by extractive domination. We can return to the problem with simplistic sketches of social systems later.
What I want to do now is to argue that algorithmic governance technologies enable extractive domination. Indeed, that they are, in many ways, the ideal technology for facilitating extractive domination. I don’t think this is a particularly difficult case to make. Contemporary algorithmic governance technologies track, monitor, nudge and incentivise our behaviour. The vast majority of these technologies do so by adopting the ‘Surveillance Capitalist’ business model (see Zuboff 2015 for more on the idea of surveillance capitalism, or read my summary of her work if you prefer). The algorithmic service is often provided to us for ‘free’. I can use Facebook for ‘free’; I can read online media for ‘free’; I can download the vast majority of health and fitness apps for ‘free’ or minimal cost. But, of course, I pay in other ways. These services make their money by extracting data from my behaviour and then by monetising this in various ways, most commonly by selling it to advertisers.
The net result is a system of extractive domination par excellence. The owners and controllers of the algorithmic ecosystem gain a significant surplus benefit from the labour of their users/content providers. Just look at the market capitalisation of companies like Facebook, Amazon and Google, and the net worth of their founders. All of these individuals are, from what I have read, hard-working and fiercely determined, and they also displayed considerable ingenuity and creativity in creating their digital platforms. Still, it is difficult to deny that, since they got up and running, these digital platforms have effectively functioned to extract rents from the (often unpaid) labour of others. In many ways, the system is more extractive than that which existed under traditional capitalist-worker relations. At least under that system, the workers received some economic benefit for their work, however minimal it may have been, and through legal and regulatory reform, they often received considerable protections and insurances against the depredations of their employers. But under surveillance capitalism the people from whom the surplus benefits are extracted are (often) no longer classified as ‘workers’; they are service users or self-employed gig workers. They must fend for themselves or accept the Faustian bargain involved in availing of free services.
That’s not to say that users receive no benefits or that there isn’t some value added by the ingenuity of the technological innovators. Arguably, the value of my individual data isn’t particularly high in and of itself. It is only when it is aggregated together with the data of many others that it becomes valuable. You could, consequently, argue that the surveillance capitalists are not, strictly speaking, extracting a surplus benefit because without their technology there would be no benefits at all. But I don’t think this is quite right. It is often the case that certain behaviours or skills lack value before a market for them is created — e.g. being an expert in digital marketing wouldn’t have been a particularly valuable skill 100 years ago — but that doesn’t mean that they don’t have value once the market has been established, or that it is impossible for people to extract a surplus benefit from them. Individual data clearly has some value and it seems obvious that a disproportionate share of that value flows towards the owners and controllers of digital platforms. Jaron Lanier’s book Who Owns the Future? looks into this problem in quite some detail and argues in favour of a system of micro-payments to reward us for our tracked behaviours. But that’s all by-the-by. The important point here is that algorithmic governance technologies enable a pervasive and powerful form of extractive domination.
2. Constitutive Algorithmic Domination
So much for extractive domination. What about constitutive domination? To understand this concept, we need to go back, for a moment, to Pettit’s idea of freedom as non-domination. As you’ll recall, the essence of this idea is that to be free you must be free from the arbitrary will of another. I haven’t made much of the ‘arbitrariness’ condition in my discussions so far, but it is in fact crucial to Pettit’s theory. Pettit (like most people) accepts that there can be some legitimate authorities in society (e.g. the state). What differentiates legitimate authorities from illegitimate ones is their lack of arbitrariness. A legitimate authority could, in some possible world, interfere with your choices, but it would do so in a non-arbitrary way. What it means to be non-arbitrary is a matter of some controversy. Pettit argues that potential interferences that are contrary to your ‘avowed interests’ are arbitrary. If you have an avowed interest in X, then any potential interference with X is arbitrary. Consequently, he seems to favour a non-moral theory of arbitrariness: what you have an avowed interest in may or may not be morally acceptable. But he has been criticised for this. Some argue that there must be some moralised understanding of arbitrariness if we are going to reconcile republicanism with democracy, which is something Pettit is keen to do.
Fortunately, we do not have to follow this debate down the rabbit hole. All that matters here is that Pettit’s theory exempts ‘legitimate’ authorities from the charge of domination. Thompson, like many others before him, finds this problematic. He thinks that, in many ways, the ultimate expression of domination is when the dominator gets their subjects to accept their authority as legitimate. In other words, when they get their subjects to see the dominating power as something that is in keeping with their avowed interests. In such a state, the subject has so internalised the norms and values of domination that they no longer perceive it as an arbitrary exercise of power. It is just part of the natural order; the correct way of doing things. This is the essence of constitutive domination:
Constitutive Domination: Arises when A has internalised the norms and values that legitimate B’s domination; i.e. thinking outside of the current hierarchical order becomes inconceivable for A.
This is the Marxist idea of ‘false consciousness’ in another guise, and Thompson uses that terminology explicitly in his analysis (indeed, if it wasn’t already obvious, it should by now be obvious that ‘radical republicanism’ is closely allied to Marxism). Now, I have some problems with the idea of false consciousness. I think it is often used in a sloppy way. I think we have to internalise some set of norms and values. From birth, we are trained and habituated to a certain view of life. We have all been brainwashed into becoming the insiders to some normative system. There is no perfectly neutral, outside view. And yet people think that you can critique a system of norms and values merely by pointing out that it has been foisted upon us. That is often how ‘false consciousness’ gets used in everyday conversations and debates (though, to be fair, it doesn’t get used in that many of my everyday conversations). But if all normative systems are foisted upon us, then merely pointing this out is insufficient. You need to do something more to encourage someone to see this set of norms and values as ‘false’. Fortunately, Thompson does this. He doesn’t take issue with all the possible normative systems that might be foisted upon us; he only takes issue with the ones that legitimate hierarchical social orders, specifically those that include relationships of extractive domination. This narrowing of focus is key to the idea of constitutive domination.
Do algorithmic governance technologies enable constitutive domination? Let’s think about what that might mean in the present context. In keeping with Thompson’s view, I take it that it must mean that the technologies train or habituate us to a set of norms and values that legitimate the extractive relations of surveillance capitalism. Is that true? And if so what might the training mechanisms be?
Well, I have to be modest here. I can’t say that it is true. This is something that would require empirical research. But I suspect that it could be true and that there are a few different mechanisms through which it occurs:
Attention capture/distraction: Algorithmic governance technologies are designed to capture and direct our attention (time spent on device/app is a major metric of success for the companies creating these technologies). Once attention is captured, it is possible to fill people’s minds with content that either explicitly or implicitly reinforces the norms of surveillance capitalism, or that distracts us away from anything that might call those norms into question.
Personalisation and reward: Related to the above, algorithmic governance technologies try to customise themselves to an individual’s preference and reward system. This makes repeat engagement with the technologies as rewarding as possible for the individual, but the repeat engagement itself helps to further empower the system of surveillance capitalism. The degree of personalisation made possible by algorithmic governance technologies could be one of the things that makes them particularly adept at constitutive domination.
Learned helplessness: Because algorithmic governance technologies are rewarding and convenient, and because they often do enable people to achieve goals and satisfy preferences, people feel they have to just accept the conveniences of the system and the compromises it requires, e.g. they cannot have privacy and automated convenience at the same time. They must choose one or the other. They cannot resist the system all by themselves. In extreme form, this learned helplessness may translate into full-throated embrace of the compromises (e.g. cheerleading for a ‘post-privacy’ society).
Again, this is all somewhat speculative, but I think that through a combination of attention capture/distraction, personalisation and reward, and learned helplessness, algorithmic governance technologies could enable constitutive domination. In a similar vein, Brett Frischmann and Evan Selinger argue, in their recent book Re-Engineering Humanity, that digital technologies are ‘programming’ us to be unthinking and unreflective machines. They use digital contracting as one of the main examples of this, arguing that people just click and accept the terms of these contracts without ever really thinking about what they are doing. Programming us to not-think might be another way in which algorithmic governance technologies facilitate constitutive domination. The subjects of algorithmic domination have either been trained not to care about what is going on, or have come to see it as a welcome, benign framework in which they can live their lives. This masks the underlying domination and extraction that is taking place.
3. Objections and Replies
What are the objections to all this? In addition to the objections discussed in the previous post, I can think of several, not all of which I will be able to address here, and there are probably many more of which I have not thought. I am happy to hear about them in the comments section. Nevertheless, allow me to address a few of the more obvious ones.
First, one could object to the radical republican theory itself. Is it really necessary? Don’t we already have perfectly good theoretical frameworks and concepts for understanding the phenomena that it purports to explain? For example, doesn’t the Marxist concept of exploitation adequately capture the problem of extractive domination? And don’t the concepts of false consciousness, governmentality, or Lukes’s third face of power all capture the problem of constitutive domination?
I have no doubt that this is true. There are often overlaps between different normative and political theories. But I think there is still some value to the domination framework. For one thing, I think it provides a useful, unifying conceptual label for the problems that would otherwise be labelled as ‘exploitation’, ‘false consciousness’ and so on. It suggests that these problems are all rooted in the same basic problem: domination. Furthermore, because of the way in which domination has been used to understand freedom, it is possible to tie these ‘radical’ concerns into more mainstream liberal debates about freedom and autonomy. I find this to be theoretically attractive and virtuous (see the previous post on micro-domination for more). Finally, because republicanism is a rich political tradition, with a fairly standardised package of preferred rules and policies, it is possible to use the domination framework to guide normative practice.
Second, one could argue that I have overstated the case when it comes to the algorithmic mechanisms of domination. The problems are not as severe as I claim. The interactions/transactions between users and surveillance capitalist companies are not ‘extractive’; they are win-win (as any good economist would argue). There are many other sources of constitutive domination and they may be far more effective than the algorithmic mechanisms to which I appeal; and there is a significant ‘status quo’ bias underlying the entire argument. The algorithmic mechanisms don’t threaten anything particularly problematic; they are just old problems in a new technological guise.
I am sympathetic to each of these claims. I have some intuitions that lead me to think the algorithmic mechanisms of domination might be particularly bad. For example, the degree of personalisation and customisation might enable far more effective forms of constitutive domination; and the ‘superstar’ nature of network economies might make the relationships more extractive than would be the case in a typical market transaction. But I think empirical work is needed to see whether the problems are as severe or serious as I seem to be suggesting.
Third, one could argue that the entire ‘radical’ framework rests upon an overly-simplified, binary view of society. The assumption driving my argument seems to be that the entire system is set up to follow the surveillance capitalist logic; that there is a dominant and univocal system of norms that reinforces that logic; and that you are either a dominator or a dominated, a master or a slave. Surely this is not accurate? Society is more multi-faceted than that. People flit in and out of different roles. Systems of norms and values are multivalent and often inconsistent. Some technologies empower; some disempower; some do a bit of both. You commit a fatal error if you assume it’s all-or-none, one or the other.
This is probably the objection to which I am most sympathetic. It seems to me that radical theorists often have a single ideological enemy (patriarchy; capitalism; neo-liberalism) and they interpret everything that happens through the lens of that ideological conflict. Anything that seems to be going wrong is traced back to the ideological enemy. It’s like a conspiracy-theory view of social order. This seems very disconnected from how I experience and understand the world. Nevertheless, there’s definitely a sense in which the arguments I have put forward in this post see algorithmic governance technologies through the lens of a single ideological enemy (surveillance capitalism) and assume that the technologies always serve that ideology. This could well be wrong. I think there are tendencies or intrinsic features of the technological infrastructure that favour that ideology (e.g. see Kevin Kelly’s arguments in his book The Inevitable), but there is more to it. The technology can be used to dismantle relationships of power too. Tracking and surveillance technologies, for example, have been used to document abuses of power and generate support for political projects that challenge dominant institutions. I just worry that these positive uses of technologies are overwhelmed by those that reinforce algorithmic domination.
Anyway, that brings me to the end of this post. I have tried to argue that Thompson’s radical republicanism, with its concepts of extractive and constitutive domination, can shed light on the challenges posed by algorithmic governance technologies. Combining the arguments in this post with the arguments in the previous post about algorithmic micro-domination suggests that the concept of domination can provide a useful, unifying framework for understanding the concerns people have about this technology. It gives us a common name for a common enemy.
* I include this qualification in recognition of the fact that there is some biological basis to those categories as well, and that this too sets boundaries on the nature of male-female relations.