Friday, June 29, 2018

Episode #40: Nyholm on Accident Algorithms and the Ethics of Self-Driving Cars


In this episode I talk to Sven Nyholm about self-driving cars. Sven is an Assistant Professor of Philosophy at TU Eindhoven with an interest in moral philosophy and the ethics of technology. Recently, Sven has been working on the ethics of self-driving cars, focusing in particular on the ethical rules such cars should follow and who should be held responsible for them if something goes wrong. We chat about these issues and more. You can download the podcast here or listen below. You can also subscribe on iTunes and Stitcher (the RSS feed is here).



Show Notes:

  • 0:00 - Introduction
  • 1:22 - What is a self-driving car?
  • 3:00 - Fatal crashes involving self-driving cars
  • 5:10 - Could self-driving cars ever be completely safe?
  • 8:14 - Limitations of the Trolley Problem
  • 11:22 - What kinds of accident scenarios do we need to plan for?
  • 17:18 - Who should decide which ethical rules a self-driving car follows?
  • 23:47 - Why not randomise the ethical rules?
  • 25:18 - Experimental findings on people's preferences with self-driving cars
  • 29:16 - Is this just another typical applied ethical debate?
  • 31:27 - What would a utilitarian self-driving car do?
  • 36:30 - What would a Kantian self-driving car do?
  • 39:33 - A contractualist approach to the ethics of self-driving cars
  • 43:54 - The responsibility gap problem
  • 46:12 - Scepticism of the responsibility gap: can self-driving cars be agents?
  • 53:17 - A collaborative agency approach to self-driving cars
  • 58:18 - So who should we blame if something goes wrong?
  • 1:03:40 - Is there a duty to hand over driving to machines?
  • 1:07:30 - Must self-driving cars be programmed to kill?

Relevant Links




Monday, June 25, 2018

Radical Algorithmic Domination: How algorithms exploit and manipulate




(Related post: Algorithmic Micro-Domination)

In a recent post, I argued that the republican concept of ‘domination’ could be usefully deployed to understand the challenge that algorithmic governance systems pose to individual freedom. To be more precise, I argued that a modification of that concept — to include instances of ‘micro-domination’ — provided both a descriptively accurate and normatively appropriate basis for understanding the challenge.

In making this argument, I was working with a conception of domination that tries to shed light on what it means to be a free citizen. This is the ‘freedom as non-domination’ idea that has been popularised by Philip Pettit. But Pettit’s conception of domination has been challenged by other republican theorists. Michael Thompson, for example, has recently written a paper entitled ‘Two Faces of Domination in Republican Political Theory’ that argues that Pettit’s view is too narrow and fails to address the forms of domination that plague modern societies. Thompson favours a ‘radical’ conception of domination that focuses less on freedom, and more on inequality of power in capitalist societies. He claims that this conception is more in keeping with the views of republican writers like Machiavelli and Rousseau, and more attuned to the realities of our age.

In this post, I want to argue that Thompson’s radical republican take on domination can also be usefully deployed to make sense of the challenges posed by algorithmic governance. In doing so, I hope to provide further support for the claim that the concept of domination provides a unifying theoretical framework for understanding and addressing this phenomenon.

Thompson’s ‘radical republicanism’ focuses on two specific forms of domination that play a critical role in the organisation of modern societies. They are: (i) extractive domination; and (ii) constitutive domination. In what follows, I will explain both of these forms of domination, trying to stay true to Thompson’s original presentation, and then outline the mechanisms through which algorithmic governance technologies facilitate or enable them. The gist of my argument is that algorithmic governance technologies are particularly good at facilitating both. I will close by addressing some objections to my view.


1. Extractive Algorithmic Domination
Domination is a kind of power. It arises from asymmetrical relationships between two or more individuals or groups of individuals. The classic example of such an asymmetrical relationship is that between a slave and his/her master. Indeed, this relationship is the example that Pettit uses to illustrate and explain his conception of freedom as non-domination. His claim is that one of the key properties of this relationship is that the slave can never be free, no matter how kind or benevolent the master is. The reason for this is that you cannot be free if you live subject to the arbitrary will of another.

The problem with Pettit’s view is that it overlooks other manifestations and effects of asymmetrical relationships. Pettit sees domination as an analytical concept that sheds light on the nature of free choice. But surely there is more to it than that? To live in a state of domination is not simply to have one’s freedom undermined. It is also to have one’s labour exploited and one’s mind controlled by a set of values and norms. Pettit hints at these things in his work but never makes them central to his analysis. Thompson’s radical republicanism does.

The first way it does this is through the idea of extractive domination:

Extractive Domination: Arises when A is in a structural relation with B whose purpose is to enable A to extract a surplus benefit from B, where:
‘a structural relation’ = a relationship defined by social roles and social norms; and
‘a surplus benefit’ = a benefit (usually in the form of physical, cognitive and emotional labour) that would otherwise have benefited B or the wider community but instead flows to A for A’s benefit.


The master-slave relation provides an example of this: it is a relationship defined by social roles and norms, and those norms enable masters to extract surplus benefit (physical labour) from the slave. Other examples would include capitalist-worker relations (in a capitalist society) and male-female relations (under conditions of patriarchy). Male-female relations provide an interesting, if controversial, example. What it means to be ‘male’ or ‘female’ is defined (at least in part)* by social norms, expectations, and values. Thus a relationship between a man and a woman is (at least in part) constituted by those norms, expectations and values. Under conditions of patriarchy, these norms, expectations and values enable men to extract surplus benefits from women, particularly in the form of sexual labour and domestic labour. In the case of sex, the norms and values are directed at male sexual pleasure as opposed to female sexual pleasure; in the case of domestic labour, this provides the foundation for the man to live a ‘successful’ life. Of course, this take on male-female relations is actively resisted by some, and I’ve only provided a simplistic sketch of what it means to live under conditions of patriarchy. Nevertheless, I hope it gives a clearer sense of what is meant by extractive domination. We can return to the problem with simplistic sketches of social systems later.

What I want to do now is to argue that algorithmic governance technologies enable extractive domination. Indeed, that they are, in many ways, the ideal technology for facilitating extractive domination. I don’t think this is a particularly difficult case to make. Contemporary algorithmic governance technologies track, monitor, nudge and incentivise our behaviour. The vast majority of these technologies do so by adopting the ‘Surveillance Capitalist’ business model (see Zuboff 2015 for more on the idea of surveillance capitalism, or read my summary of her work if you prefer). The algorithmic service is often provided to us for ‘free’. I can use Facebook for ‘free’; I can read online media for ‘free’; I can download the vast majority of health and fitness apps for ‘free’ or minimal cost. But, of course, I pay in other ways. These services make their money by extracting data from my behaviour and then by monetising this in various ways, most commonly by selling it to advertisers.

The net result is a system of extractive domination par excellence. The owners and controllers of the algorithmic ecosystem gain a significant surplus benefit from the labour of their users/content providers. Just look at the market capitalisation of companies like Facebook, Amazon and Google, and the net worth of their founders. All of these individuals are, from what I have read, hard-working and fiercely determined, and they also displayed considerable ingenuity and creativity in creating their digital platforms. Still, it is difficult to deny that since they got up and running these digital platforms have effectively functioned to extract rents from the (often unpaid) labour of others. In many ways, the system is more extractive than that which existed under traditional capitalist-worker relations. At least under that system, the workers received some economic benefit for their work, however minimal it may have been, and through legal and regulatory reform, they often received considerable protections and insurances against the depredations of their employers. But under surveillance capitalism the people from whom the surplus benefits are extracted are (often) no longer classified as ‘workers’; they are service users or self-employed gig workers. They must fend for themselves or accept the Faustian bargain involved in availing of free services.

That’s not to say that users receive no benefits or that there isn’t some value added by the ingenuity of the technological innovators. Arguably, the value of my individual data isn’t particularly high in and of itself. It is only when it is aggregated together with the data of many others that it becomes valuable. You could, consequently, argue that the surveillance capitalists are not, strictly speaking, extracting a surplus benefit because without their technology there would be no benefits at all. But I don’t think this is quite right. It is often the case that certain behaviours or skills lack value before a market for them is created — e.g. being an expert in digital marketing wouldn’t have been a particularly valuable skill 100 years ago — but that doesn’t mean that they lack value once the market has been established, or that it is impossible for people to extract a surplus benefit from them. Individual data clearly has some value and it seems obvious that a disproportionate share of that value flows towards the owners and controllers of digital platforms. Jaron Lanier’s book Who Owns the Future? looks into this problem in quite some detail and argues in favour of a system of micro-payments to reward us for our tracked behaviours. But that’s all by-the-by. The important point here is that algorithmic governance technologies enable a pervasive and powerful form of extractive domination.


2. Constitutive Algorithmic Domination
So much for extractive domination. What about constitutive domination? To understand this concept, we need to go back, for a moment, to Pettit’s idea of freedom as non-domination. As you’ll recall, the essence of this idea is that to be free you must be free from the arbitrary will of another. I haven’t made much of the ‘arbitrariness’ condition in my discussions so far, but it is in fact crucial to Pettit’s theory. Pettit (like most people) accepts that there can be some legitimate authorities in society (e.g. the state). What differentiates legitimate authorities from illegitimate ones is their lack of arbitrariness. A legitimate authority could, in some possible world, interfere with your choices, but it would do so in a non-arbitrary way. What it means to be non-arbitrary is a matter of some controversy. Pettit argues that potential interferences that are contrary to your ‘avowed interests’ are arbitrary: if you have an avowed interest in X, then any potential interference that runs contrary to X is arbitrary. Consequently, he seems to favour a non-moral theory of arbitrariness: what you have an avowed interest in may or may not be morally acceptable. But he has been criticised for this. Some argue that there must be some moralised understanding of arbitrariness if we are going to reconcile republicanism with democracy, which is something Pettit is keen to do.

Fortunately, we do not have to follow this debate down the rabbit hole. All that matters here is that Pettit’s theory exempts ‘legitimate’ authorities from the charge of domination. Thompson, like many others before him, finds this problematic. He thinks that, in many ways, the ultimate expression of domination is when the dominator gets their subjects to accept their authority as legitimate. In other words, when they get their subjects to see the dominating power as something that is in keeping with their avowed interests. In such a state, the subject has so internalised the norms and values of domination that they no longer perceive it as an arbitrary exercise of power. It is just part of the natural order; the correct way of doing things. This is the essence of constitutive domination:

Constitutive Domination: Arises when A has internalised the norms and values that legitimate B’s domination; i.e. thinking outside of the current hierarchical order becomes inconceivable for A.


This is the Marxist idea of ‘false consciousness’ in another guise, and Thompson uses that terminology explicitly in his analysis (indeed, if it wasn’t already obvious, it should by now be obvious that ‘radical republicanism’ is closely allied to Marxism). Now, I have some problems with the idea of false consciousness. I think it is often used in a sloppy way. We all have to internalise some set of norms and values. From birth, we are trained and habituated to a certain view of life. We have all been brainwashed into becoming the insiders to some normative system. There is no perfectly neutral, outside view. And yet people think that you can critique a system of norms and values merely by pointing out that it has been foisted upon us. That is often how ‘false consciousness’ gets used in everyday conversations and debates (though, to be fair, it doesn’t get used in that many of my everyday conversations). But if all normative systems are foisted upon us, then merely pointing this out is insufficient. You need to do something more to encourage someone to see a particular set of norms and values as ‘false’. Fortunately, Thompson does this. He doesn’t take issue with all the possible normative systems that might be foisted upon us; he only takes issue with the ones that legitimate hierarchical social orders, specifically those that include relationships of extractive domination. This narrowing of focus is key to the idea of constitutive domination.

Do algorithmic governance technologies enable constitutive domination? Let’s think about what that might mean in the present context. In keeping with Thompson’s view, I take it that it must mean that the technologies train or habituate us to a set of norms and values that legitimate the extractive relations of surveillance capitalism. Is that true? And if so, what might the training mechanisms be?

Well, I have to be modest here. I can’t say that it is true. This is something that would require empirical research. But I suspect that it could be true and that there are a few different mechanisms through which it occurs:

Attention capture/distraction: Algorithmic governance technologies are designed to capture and direct our attention (time spent on device/app is a major metric of success for the companies creating these technologies). Once attention is captured, it is possible to fill people’s minds with content that either explicitly or implicitly reinforces the norms of surveillance capitalism, or that distracts us away from anything that might call those norms into question.

Personalisation and reward: Related to the above, algorithmic governance technologies try to customise themselves to an individual’s preference and reward system. This makes repeat engagement with the technologies as rewarding as possible for the individual, but the repeat engagement itself helps to further empower the system of surveillance capitalism. The degree of personalisation made possible by algorithmic governance technologies could be one of the things that makes them particularly adept at constitutive domination.

Learned helplessness: Because algorithmic governance technologies are rewarding and convenient, and because they often do enable people to achieve goals and satisfy preferences, people feel they have to just accept the conveniences of the system and the compromises it requires, e.g. they cannot have privacy and automated convenience at the same time. They must choose one or the other. They cannot resist the system all by themselves. In extreme form, this learned helplessness may translate into full-throated embrace of the compromises (e.g. cheerleading for a ‘post-privacy’ society).


Again, this is all somewhat speculative, but I think that through a combination of attention capture/distraction, personalisation and reward, and learned helplessness, algorithmic governance technologies could enable constitutive domination. In a similar vein, Brett Frischmann and Evan Selinger argue, in their recent book Re-engineering Humanity, that digital technologies are ‘programming’ us to be unthinking, unreflective machines. They use digital contracting as one of the main examples of this, arguing that people just click and accept the terms of these contracts without ever really thinking about what they are doing. Programming us to not-think might be another way in which algorithmic governance technologies facilitate constitutive domination. The subjects of algorithmic domination have either been trained not to care about what is going on, or have started to see it as a welcome, benign framework within which they can live their lives. This masks the underlying domination and extraction that is taking place.


3. Objections and Replies
What are the objections to all this? In addition to the objections discussed in the previous post, I can think of several, not all of which I will be able to address here, and there are probably many more of which I have not thought. I am happy to hear about them in the comments section. Nevertheless, allow me to address a few of the more obvious ones.

First, one could object to the radical republican theory itself. Is it really necessary? Don’t we already have perfectly good theoretical frameworks and concepts for understanding the phenomena that it purports to explain? For example, doesn’t the Marxist concept of exploitation adequately capture the problem of extractive domination? And don’t the concepts of false consciousness, or governmentality or Lukes’s third face of power all capture the problem of constitutive domination?

I have no doubt that this is true. There are often overlaps between different normative and political theories. But I think there is still some value to the domination framework. For one thing, I think it provides a useful, unifying conceptual label for the problems that would otherwise be labelled as ‘exploitation’, ‘false consciousness’ and so on. It suggests that these problems are all rooted in the same basic problem: domination. Furthermore, because of the way in which domination has been used to understand freedom, it is possible to tie these ‘radical’ concerns into more mainstream liberal debates about freedom and autonomy. I find this to be theoretically attractive and theoretically virtuous (see the previous post on micro-domination for more). Finally, because republicanism is a rich political tradition, with a fairly standardised package of preferred rules and policies, it is possible to use the domination framework to guide normative practice.

Second, one could argue that I have overstated the case when it comes to the algorithmic mechanisms of domination. The problems are not as severe as I claim. The interactions/transactions between users and surveillance capitalist companies are not ‘extractive’; they are win-win (as any good economist would argue). There are many other sources of constitutive domination and they may be far more effective than the algorithmic mechanisms to which I appeal; and there is a significant ‘status quo’ bias underlying the entire argument. The algorithmic mechanisms don’t threaten anything particularly problematic; they are just old problems in a new technological guise.

I am sympathetic to each of these claims. I have some intuitions that lead me to think the algorithmic mechanisms of domination might be particularly bad. For example, the degree of personalisation and customisation might enable far more effective forms of constitutive domination; and the ‘superstar’ nature of network economies might make the relationships more extractive than would be the case in a typical market transaction. But I think empirical work is needed to see whether the problems are as severe or serious as I seem to be suggesting.

Third, one could argue that the entire ‘radical’ framework rests upon an overly-simplified, binary view of society. The assumption driving my argument seems to be that the entire system is set up to follow the surveillance capitalist logic; that there is a dominant and univocal system of norms that reinforces that logic; and that you are either a dominator or a dominated, a master or a slave. Surely this is not accurate? Society is more multi-faceted than that. People flit in and out of different roles. Systems of norms and values are multivalent and often inconsistent. Some technologies empower; some disempower; some do a bit of both. You commit a fatal error if you assume it’s all-or-none, one or the other.

This is probably the objection to which I am most sympathetic. It seems to me that radical theorists often have a single ideological enemy (patriarchy; capitalism; neo-liberalism) and they interpret everything that happens through the lens of that ideological conflict. Anything that seems to be going wrong is traced back to the ideological enemy. It’s like a conspiracy-theory view of social order. This seems very disconnected from how I experience and understand the world. Nevertheless, there’s definitely a sense in which the arguments I have put forward in this post see algorithmic governance technologies through the lens of a single ideological enemy (surveillance capitalism) and assume that the technologies always serve that ideology. This could well be wrong. I think there are tendencies or intrinsic features of the technological infrastructure that favour that ideology (e.g. see Kevin Kelly’s arguments in his book The Inevitable), but there is more to it. The technology can be used to dismantle relationships of power too. Tracking and surveillance technologies, for example, have been used to document abuses of power and generate support for political projects that challenge dominant institutions. I just worry that these positive uses of technologies are overwhelmed by those that reinforce algorithmic domination.

Anyway, that brings me to the end of this post. I have tried to argue that Thompson’s radical republicanism, with its concepts of extractive and constitutive domination, can shed light on the challenges posed by algorithmic governance technologies. Combining the arguments in this post with the arguments in the previous post about algorithmic micro-domination suggests that the concept of domination can provide a useful, unifying framework for understanding the concerns people have about this technology. It gives us a common name for a common enemy.

* I include this qualification in recognition of the fact that there is some biological basis to those categories as well, and that this too sets boundaries on the nature of male-female relations.




Monday, June 18, 2018

Algorithmic Micro-Domination: Living with Algocracy



In April 2017, Siddhartha Mukherjee wrote an interesting article in the New Yorker. Titled ‘AI versus MD’, the article discussed the future of automated medicine. Automation is already rampant in medicine. There are algorithms for detecting and diagnosing disease, there are robotic arms and tools for helping with surgery, and there are some attempts at fully automated services. Mukherjee’s article pondered the future possibilities: Will machines ever completely replace doctors? Is that a welcome idea?

The whole article is worth reading, but one section of it, in particular, resonated with me. Mukherjee spoke to Sebastian Thrun, founder of Google X, who now dedicates his energies to automated diagnosis. Thrun’s mother died from metastatic breast cancer. She, like many others, was diagnosed too late. He became obsessed with creating technologies that would allow us to catch and diagnose diseases earlier — before it was too late. His motivations are completely understandable and, in their direct intention, completely admirable. But what would the world look like if we really went all-in on early, automated, disease detection? Mukherjee paints a haunting picture:

Thrun blithely envisages a world in which we’re constantly under diagnostic surveillance. Our cell phones would analyze shifting speech patterns to diagnose Alzheimer’s. A steering wheel would pick up incipient Parkinson’s through small hesitations and tremors. A bathtub would perform sequential scans as you bathe, via harmless ultrasound or magnetic resonance, to determine whether there’s a new mass in an ovary that requires investigation. Big Data would watch, record, and evaluate you: we would shuttle from the grasp of one algorithm to the next. To enter Thrun’s world of bathtubs and steering wheels is to enter a hall of diagnostic mirrors, each urging more tests.

And of course disease diagnosis is just the tip of the iceberg. So many of our activities can now be tracked and surveilled by smart devices. There is a vast ecosystem of apps out there for tracking our purchases, hours of work, physical activity, calories consumed, words read, and so on. If you can think of it, there is probably an app for tracking it. Some of these apps are voluntarily adopted; some of them are imposed upon us by employers and governments. Some of them simply track and log our behaviour; others try to go further and change our behaviour. We are not quite at the total digital panopticon yet. But we are not too far away.

How should we understand this emerging reality? Is it something we should fear? Prima facie, I can see much to welcome in Thrun’s world of diagnostic surveillance: it would surely be a good thing if we could detect diseases earlier and thereby increase the chances of recovery. But, of course, there is a dark side. Who controls the surveillance infrastructure? How much power will it or they have over our lives? Could the system be abused? What about those who want to be ‘offline’ — who don’t want to spend their lives shuttling from ‘the grasp of one algorithm to the next’?

In this post, I want to argue that the concept of domination (a concept taken from republican political theory) provides a useful way of understanding and confronting the challenge of the digital panopticon. This is not a wholly original idea. Indeed, I previously looked at an argument from two political theorists — Hoye and Monaghan — that made this very case. The originality of this post comes from an attempted modification/expansion of the concept of domination that I think sheds better light on the unique nature of algorithmic governance. This is the concept of ‘micro-domination’ that I adopt from some recent work done on disability and domination.

In what follows, I will explain what is meant by ‘micro-domination’, consider how it sheds light on the peculiar features of algorithmic governance, and then look at some criticisms of the idea. I’ll try to be brief. My goal in this post is to introduce an idea; not to provide a fully-rounded defence of it.


1. Non-Domination and Micro-Domination
First, some necessary background. Republicanism is a rich political and philosophical tradition. Its essential ideas date back to the ancient world, and can be found in the writings of Machiavelli and Rousseau. It has undergone something of a rebirth in the past half century thanks to the work of Quentin Skinner and Philip Pettit.

The central concept in republicanism is domination. Domination is the great evil that must be avoided in society. In its broad outline, domination describes a situation in which one individual or group of individuals exercises control over another. This leaves plenty of room for conceptual disagreement. Michael Thompson has recently argued for a ‘radical’ conception of domination that focuses on problems associated with hierarchical and unequal societies. He claims that this conception of domination is better able to confront the problems with power in capitalist societies. Better able than what? Better than the narrower conception of domination favoured by Pettit and Skinner that looks to domination to shed light on the nature of freedom. While I have some sympathy for Thompson’s view, and I hope to cover his radical conception of domination in a later piece, I’ll stick with the narrower, freedom-focused, conception of domination for the time being.

According to that conception, freedom is best understood as non-domination. An individual can be said to be free if he or she is not living under the arbitrary will of another, i.e. is not subject to their good graces or answerable to them. This conception of freedom is usually contrasted with the more popular liberal ideal of freedom as non-interference. According to this view, an individual can be said to be free if he or she is not being interfered with by another. Republicans like Pettit criticise this because they think it fails to capture all the relevant forms of unfreedom.

They usually make their case through simple thought experiments. One of Pettit’s favourites is the ‘Happy Slave’ thought experiment. He asks us to imagine a slave: someone who is legally owned and controlled by a slave-master. Suppose, however, that the slave-master is benevolent and the slave is happy to conform to their wishes. This means that they are not being interfered with: no one is cracking the whip or threatening them with violence if they step out of line. Are they free? Pettit says ‘no’ — of course they aren’t free. Their existence is the epitome of unfreedom, but their lack of freedom has nothing to do with the presence of interference. It has to do with the presence of domination. The master is ever present and could step in and impose their will on the slave at any moment.

A more philosophical way of putting this is to say that republicanism places a modal condition on freedom. It’s not enough for you to live an unmolested life in this actual world; you must live an unmolested life in a range of close, possible worlds. If you constantly live with the fear that someone might arbitrarily step in and impose their will on you, you can never really be free.

That’s the basic idea of freedom as non-domination. What about micro-domination? This is a concept I take from the work of Tom O’Shea. He has written a couple of papers that use the republican theory of freedom to analyse how different institutional and personal circumstances affect people with disabilities. All of what he has written is interesting and valuable, but I want to home in on one aspect of it. One of the arguments that he makes is that people with disabilities often suffer from many small-scale instances of domination. In other words, there are many choices they have to make in their lives which are subject to the arbitrary will of another. If they live in some institutional setting, or are heavily reliant on care and assistance from others, then large swathes of their daily lives may be dependent on the good will of others: when they wake up, when they go to the bathroom, when they eat, when they go outside, and so on. Taken individually, these cases may not seem all that serious, but aggregated together, they start to look like a more significant threat to freedom:

The result is often a phenomenon I shall call ‘micro-domination’: the capacity for decisions to be arbitrarily imposed on someone, which, individually, are too minor to be contested in a court or a tribunal, but which cumulatively have a major impact on their life.
(O’Shea 2018, 136)

O’Shea’s work continues from this to look at ways to resolve the problems of domination faced by persons with disabilities. I’m not going to go there. I want to turn to consider how the concept of micro-domination can shed light on the phenomenon of algorithmic governance. To do this I want to sharpen the concept of micro-domination by offering a more detailed definition/characterisation.

Micro-domination: Many small-scale, seemingly trivial, instances of domination where:
(a) Each instance is a genuine case of domination, i.e. it involves some subordination to the arbitrary will of another and some potential threat of their intervening if you step out of line (i.e. fail to conform with what they prefer).
(b) The aggregative effect of many such instances of micro-domination is significant, i.e. it is what results in a significant threat to individual freedom.

With this more detailed characterisation in mind, the question then becomes: does algorithmic governance involve micro-domination?


2. Algorithmic Micro-Domination
Let’s start by clarifying what is meant by algorithmic governance. I gave some sense of what this means in the introduction, but there is obviously more to it. In most of my writings and talks, I define algorithmic governance as the ‘state of being governed by algorithmically-controlled smart devices’. This algorithmic governance can come in many forms. Algorithms can recommend, nudge, manipulate, intervene and, in some cases, take over from individual behaviour.

You can probably think of many examples from your everyday life. Just this morning I was awoken by my sleep monitoring system. I use it every night to record my sleep patterns. Based on its observations, it sets an alarm that wakes me at the optimal time. When I reached my work desk, I quickly checked my social media feeds, where I was fed a stream of information tailored to my preferences and interests. I was also encouraged to post an update to the people who follow me (“the 1000 people who follow you on Facebook haven’t heard from you in a while”). As I was settling into work, my phone buzzed with a reminder from one of my health and fitness apps to tell me that it was time to go for a run. Later in the day, when I was driving to a meeting across town, I used Google Maps to plot my route. Sometimes, when I got off track, it recalculated and sent me in a new direction. I dutifully followed its recommendations. Whenever possible, I used the autopilot software on my car to save me some effort, but every now and then it prompted me to take control of the car because some obstacle appeared that it was not programmed to deal with.

I could multiply the examples, but you get the idea. Many small-scale, arguably trivial, choices in our everyday lives are now subject to algorithmic governance: what route to drive, who to talk to, when to exercise and so on. A network of devices monitors and tracks our behaviour and sends us prompts and reminders. This provides the infrastructure for a system of algorithmic micro-domination. Although we may not fully appreciate it, we are now the ‘subjects’ of many algorithmic masters. They surveil our lives and create a space of permissible/acceptable behaviour. Everything is fine if we stay within this space. We can live happy and productive lives (perhaps happier and more productive than our predecessors, thanks to the algorithmic nudging), and to all intents and purposes, these lives may appear to be free. But if we step out of line, we quickly become aware of the presence of the algorithmic masters.

‘Wait a minute’, I hear you say, ‘surely things aren’t that bad?’ It’s true that some of us voluntarily submit ourselves to algorithmic masters, but not all of us do. The description of my day suggests I am someone who is uniquely immersed in a system of algorithmic governance. My experiences are not representative. We have the option of switching off and disentangling ourselves from the web of algorithmic control.

Maybe so. I certainly wouldn’t want us to develop a narrative of helplessness around the scope and strength of algorithmic governance, but I think people who argue that we have the option of switching off may underestimate the pervasiveness of algorithmic control. Janet Vertesi’s experience in trying to ‘hide’ her pregnancy from Big Data systems provides a clear illustration of what can happen if you do try to opt out. Vertesi, an expert in Big Data, knew that online marketers and advertisers really like to know if women are pregnant. Writing in 2014, she noted that an average person’s marketing data is worth about 10 cents whereas a pregnant person’s data is worth about $1.50. She decided to conduct an experiment in which she would hide her own pregnancy from the online data miners. This turned out to be exceptionally difficult. She had to avoid all credit card transactions for pregnancy-related shopping. She had to implore her family and friends to avoid mentioning or announcing her pregnancy on social media. When her uncle breached this request by sending her a private message on Facebook, she deleted his messages and unfriended him (she spoke to him in private to explain why). In the end, her attempt to avoid algorithmic governance led to her behaviour being flagged as potentially criminal:

For months I had joked to my family that I was probably on a watch list for my excessive use of Tor and cash withdrawals. But then my husband headed to our local corner store to buy enough gift cards to afford a stroller listed on Amazon. There, a warning sign behind the cashier informed him that the store “reserves the right to limit the daily amount of prepaid card purchases and has an obligation to report excessive transactions to the authorities.”
It was no joke that taken together, the things I had to do to evade marketing detection looked suspiciously like illicit activities. All I was trying to do was to fight for the right for a transaction to be just a transaction, not an excuse for a thousand little trackers to follow me around. But avoiding the big-data dragnet meant that I not only looked like a rude family member or an inconsiderate friend, but I also looked like a bad citizen.
(Vertesi 2014)

The analogy with Pettit’s ‘Happy Slave’ thought experiment is direct and obvious. Vertesi wouldn’t have had any problems if she had lived her life within the space of permissible activity created by the system of algorithmically-controlled commerce. She wouldn’t have been interfered with or overtly sanctioned. By stepping outside that space, she opened herself up to interference. She was no longer tolerated by the system.

We can learn from her experience. Many of us may be happy to go along with the system as currently constituted, but that doesn’t mean that we are free. We are, in fact, subject to its algorithmic micro-domination.


3. Some Objections and Replies
So the argument to this point is that modern systems of algorithmic governance give rise to algorithmic micro-domination. I think this is a useful way of understanding how these systems work and how they impact on our lives. But I’m sure that there are many criticisms to be made of this idea. For example, someone could argue that I am making too much of Vertesi’s experiences in trying to opt out. She is just one case study. I would need many more to prove that micro-domination is a widespread phenomenon. This is probably right, though my sense is that Vertesi’s experiences are indicative of a broader phenomenon (e.g. in academic hiring I would be extremely doubtful of any candidate that doesn’t have a considerable online presence). There are also two other objections that I think are worth raising here.

First, one could argue that algorithmic micro-domination is either misnamed or, alternatively, not a real instance of domination. One could argue that it is misnamed on the grounds that the domination is not really ‘algorithmic’ in nature. The algorithms are simply tools by which humans or human institutions exert control over the lives of others. It’s not the algorithms per se; it’s Facebook/Mark Zuckerberg (and others) that are the masters. There is certainly something to this, but the tools of domination are often just as important as the agents. The tools are what make the domination possible and dictate its scope and strength. Algorithmic tools could give rise to new forms of domination. That is, indeed, the argument I am making by appealing to the notion of algorithmic ‘micro-domination’. That said, I think there is also something to the idea that algorithmic tools have a life of their own, i.e. are not fully under the control of their human creators. This is what Hoye and Monaghan argued in their original defence of algorithmic domination. They claimed that Big Data systems of governance were ‘functionally agentless’, i.e. it would be difficult to trace what they do to the instructions or actions of an individual human agent (or group of human agents). They felt that this created problems for the republican theory since domination is usually viewed as a human-to-human phenomenon. So if we accept that algorithmic governance systems can be functionally agentless, we will need to expand the concept of domination to cover cases in which humans are not the masters. I don’t have a problem with that, but conceptual purists might.

Second, one could have doubts about the wisdom of expanding the concept of domination to cover ‘micro-domination’. Why get hung up on the small things? This is a criticism that is sometimes levelled at the analogous concept of a ‘micro-aggression’. A micro-aggression is a small-scale, everyday, verbal or behavioural act that communicates hostility towards minorities. It is often viewed as a clear manifestation of structural or institutional racism/discrimination. Examples of micro-aggressions include things like telling a person of colour that their English is very good, or asking them where they come from, or clutching your bag tightly when you walk past them, and so on. They are not cases of overt or explicit discrimination. But taken together they add up to something significant: they tell the person from the minority group that they are not welcome/they do not belong. Critics of the idea of micro-aggressions argue that it breeds hypersensitivity, involves an overinterpretation of behaviour, and can often be used to silence or shut down legitimate speech. This latter criticism is particularly prominent in ongoing debates about free speech on college campuses. I don’t want to wade into the debate about micro-aggressions. All I am interested in is whether similar criticisms could be levelled at the idea of micro-domination. I guess that they could. But I think the strength of such criticisms will depend heavily on whether there is something valuable that is lost through hypersensitivity to algorithmic domination. In the case of micro-aggressions, critics point to the value of free speech as something that is lost through hypersensitivity to certain behaviours. What is lost through hypersensitivity to algorithmic domination? Presumably, it is the efficiency and productivity that the algorithmic systems enable. Is the loss of freedom sufficient to outweigh those gains? I don’t have an answer right now, but it’s a question worth pursuing.

That’s where I shall leave it for now. As mentioned at the outset, my goal was to introduce an idea, not to provide a compelling defence of it. I’m interested in getting some feedback. Is the idea of algorithmic micro-domination compelling or useful? Are there other important criticisms of the idea? I’d be happy to hear about them in the comments section.




Monday, June 11, 2018

Legal Loopholes and Voting Paradoxes: A Theory



Nick Freeman is a well-known British lawyer. He rose to fame in the 1990s when he successfully defended a number of celebrity clients from dangerous driving prosecutions. He was particularly popular among footballers. His clients included Paul Ince, David Beckham and, perhaps most famously, Alex Ferguson. The case with Ferguson was notorious because of its somewhat scatological fact-pattern, and because Ferguson was the most high-profile football manager in the world at the time.

Ferguson was summonsed for speeding along the hard-shoulder of a clogged motorway. His excuse was that he desperately needed to use the bathroom due to an upset stomach he had been nursing from the previous day. He was stopped by the police and charged with an offence. He was in a tricky predicament since he already had a number of penalty points on his licence and being found guilty once more would put him off the road for a number of months.

Enter Freeman. Freeman knew that it was illegal to drive on the hard shoulder of a motorway, unless there was a medical emergency that justified doing so. Now, having a dodgy tummy might not be top of the list of justifying medical emergencies, and we might not look favourably on Ferguson if he set off on his journey knowing that there was a risk that this emergency might arise. But Freeman’s genius, such as it is, lay in arguing that Ferguson’s impending diarrhoea was indeed a justifying medical emergency and that Ferguson was not to be blamed for its sudden onset when he was stuck in the traffic jam. Freeman presented his case with such vigour that he eventually succeeded in getting Ferguson off.

This is typical of Freeman’s modus operandi. He uses an encyclopaedic knowledge of road traffic offences and criminal procedure to find obscure, relatively untested, arguments that benefit his clients. In other words, he finds ‘loopholes’ in the law. Indeed, so successful is he in doing this that he has been christened ‘Mr Loophole’ by the British tabloid press, a moniker he eventually, and somewhat reluctantly, took on for himself. His 2012 book The Art of the Loophole is a guidebook for anyone who wants to follow in his footsteps.

I’m not overly interested in Freeman and his practice, but I am interested in the general phenomenon of legal loopholes and why they arise. Anyone who has studied the law will know that they are pervasive and that the working life of the lawyer is often taken up in trying to find loopholes that work in favour of their clients. But the concept of a loophole is not well-defined, nor is the reason for their persistence well understood. Furthermore, the ethics of exploiting loopholes is hotly contested among lawyers and academics. I doubt I can resolve all those issues in this blogpost, but what I can do is share a theory of loopholes that has been defended by Leo Katz. I find Katz’s theory very interesting. It’s quite complex, relying as it does on an analogy between legal loopholes and voting paradoxes, but once you understand how it works it is quite illuminating. I hope to show why in what follows.


1. What is a Legal Loophole?
A legal loophole is one of those “you know it when you see it” phenomena. It’s difficult to offer a precise definition. If I were to try, I would say that a loophole is some vagueness or ambiguity in a rule, or conflict between two legal rules, that can be used to benefit someone in a seemingly perverse or counterintuitive way (in a way that violates the ‘spirit’ if not the ‘letter’ of the law). But this definition is problematic since it seems quite value-laden. It seems to presuppose that exploiting a loophole is unethical since it involves using the law to perverse ends. But oftentimes people who make use of loopholes don’t see it that way. They often think they are using the law to a legitimate end. Take the Alex Ferguson case as an example. You could argue — and I’m sure he and Nick Freeman would argue — that he was making a perfectly legitimate use of the medical exemption rule.

This value-ladenness is something that Katz tries to avoid in his theory of loopholes. As we will see below, he thinks that loopholes are inherent to the logical structure of legal doctrines. Specifically, he claims that they emerge from the fact that legal doctrines try to balance occasionally conflicting principles (e.g. people should obey the rules of the road; there should be some leeway for medical emergencies). He argues that they do not arise simply from a mismatch between the law’s purpose/rationale and its linguistic formulation. It’ll be easier to understand this if we have some working examples. Katz uses about half a dozen in his analysis. I will focus on just three:

Asset Protection: James is a well-to-do doctor who has made a number of misguided business investments. He fears that he will have to declare personal bankruptcy, which will mean that the majority of his personal assets can be seized and sold off by his creditors. However, there is a legal rule stating that certain types of asset are ‘exempt’ from personal bankruptcy rules and cannot be seized by creditors. These are assets that are deemed essential/necessary to life and include things like a family home, pension and insurance. James knows this so he uses his remaining wealth to purchase these exempt assets. This last-minute flurry of purchases triggers his bankruptcy, but he doesn’t mind as his assets are protected.

Contrived Self Defence: Samson’s wife and children were brutally assaulted in a home invasion by three armed robbers. Samson vows revenge. He tracks the three armed robbers and confronts them late at night in a park. They do not know who he is but he provokes them into attacking him with seemingly lethal force. Samson then fights back and ends up fatally wounding one of the attackers, while the other two flee. Samson’s lawyer successfully argues at trial that his client acted in self-defence. (Something akin to this happens in the Death Wish movies from the 1970s)

Political Asylum: Ivan has immigrated to the United States. He wants to be granted an immigrant visa as soon as possible. He could go through the ordinary channels but has been told that these are slow and he is unlikely to succeed. Someone tells him that the fastest route is to be granted political asylum, but this requires proof that one is a political refugee. Upon learning this, Ivan quickly uploads a series of videos to Youtube in which he is critical of the political leadership in his home country. The videos go viral. It is widely known that people who have made similar statements in the past have been executed or assassinated by the regime. Ivan uses this to fast-track his immigration visa.

Each of these cases involves someone using legal rules to their advantage, but in a way that doesn’t quite sit right with us. They are classic examples of loophole exploitation. They are, of course, highly stylised and simplified. Lawyers will no doubt be quick to point out that legal systems have additional rules and qualifications that address these scenarios. This is indeed true. Courts and legislatures frequently try to prevent people from abusing the law by adding new laws. For example, they might add an extra qualification to the rule about political asylum to state that the reasons for seeking political asylum have to arise before you land in the country in which you are seeking asylum, and/or that they have to stem from a sincere political conviction. But qualifications like this are often themselves subject to further loophole exploitation, and it can be difficult to implement them successfully. So there is often a continuous arms race between the law-makers and the would-be exploiters. The deeper question is: why does this keep happening?



2. The Voting Analogy
The answer, according to Katz, is that legal doctrines are subject to the same kinds of ‘paradoxes’ as voting systems. It has long been known that voting systems are subject to all kinds of perverse and counterintuitive manipulations. A ‘voting system’ can be defined as any system that tries to aggregate individual preferences over a set of options into a collective or group preference over the same option set. Suppose three friends have to choose one of two activities for the weekend: fishing or skydiving. They decide to vote. Each expresses their preference for fishing or skydiving and they go with whatever the majority preference happens to be. That’s a classic voting system in action.

But once you go beyond the confines of a simple majority vote on two options, you run into lots of problems. How you structure the voting system — Is it broken down into ‘rounds’? Do people vote on one preference or do they rank their preferences? — can make a big difference to the group outcome, often in ways that seem counterintuitive or perverse. Consider the following example, taken directly from Katz’s book:


Law School: Not too long ago, a certain law school had a problem with professors not marking their exam scripts on time. This meant that students weren’t getting their results on time and it was feared that this would have a knock-on impact on their ability to graduate. A group within the law school decided to do something about it. They introduced a proposal for a €100-a-day fine to be imposed on any professor who failed to submit their marks on time. A vote was to be taken on the proposal at the next faculty meeting. From informal conversations, it seemed that at least two-thirds of the faculty approved of the fine, but there was one individual — the worst procrastinator in the group — who was resolutely opposed to it. Before the meeting, he talked to everybody and realised that there were three equally-sized coalitions/groups in the faculty:

Radicals: Wanted to impose a €1000-a-day fine, but would be satisfied with a €100-a-day fine.
Moderates: Wanted to impose a €100-a-day fine but would be opposed to anything higher (i.e. would prefer the status quo to what the Radicals wanted most)
Conservatives: Didn’t want to impose any fine, but felt that if a fine was to be imposed then the fine should be really high, i.e. at least €1000-a-day, in order to be maximally effective.

The opposer organised the preference rankings of the groups into the table below:

                  Radicals        Moderates       Conservatives
1st preference    €1000 fine      €100 fine       No fine
2nd preference    €100 fine       No fine         €1000 fine
3rd preference    No fine         €1000 fine      €100 fine

He then realised that there was a way in which he could block the introduction of the €100 fine. Using a procedural rule in the Law School’s by-laws, he proposed that a vote first be taken on amending the proposal to raise the fine from €100 to €1000, and then that a vote be taken on whether or not to introduce the fine. The rest of the school agreed. On the first vote, the Radicals and Conservatives formed a two-thirds majority and approved the increased amount in the proposal. On the second vote, the Moderates and Conservatives formed a two-thirds majority and rejected the introduction of the fine. The opposer got his way.

This is an example of a very famous voting paradox, first identified by the Marquis de Condorcet in the 18th century. If we label the three options facing the law faculty, we can begin to see the paradox more clearly. Call the introduction of a €100 fine ‘option A’; call the introduction of a €1000 fine ‘option B’; and call the status quo (i.e. no fine) ‘option C’. An ordinary ‘rule’ or ‘axiom’ of individual decision-making is that our preferences should be transitive, i.e. they should form a logically consistent hierarchy. If we prefer A to C and B to A, then we should also, by logical inference, prefer B to C. If we turned around and said that we preferred C to B, then there would be something odd or inconsistent about our preferences. They would be intransitive. And yet this is exactly what is happening in the case of the Law School. Each individual has a logically consistent preference hierarchy, but the group as a whole does not. The group preferences are intransitive. We can see from the breakdown of the faculty preferences in the table above that there are (different) majority coalitions that prefer A to C, B to A, and C to B. It is this group intransitivity that can be exploited by our wily opposer. He can manipulate the voting procedure so as to introduce a seemingly irrelevant third option (the €1000 fine) into the agenda and thereby unseat the majority coalition that favoured introducing the €100 fine.
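For anyone who wants to see the machinery at work, the cycle and the opposer’s agenda manipulation can be checked mechanically. Here is a minimal sketch in Python — the faction rankings come from the example above, but the function names and structure are my own illustration, not anything from Katz’s book:

from itertools import combinations

# Options: A = the €100-a-day fine, B = the €1000-a-day fine,
# C = the status quo (no fine). Each faction ranks them from most
# to least preferred, as described in the Law School example.
rankings = {
    "Radicals":      ["B", "A", "C"],
    "Moderates":     ["A", "C", "B"],
    "Conservatives": ["C", "B", "A"],
}

def majority_prefers(x, y):
    """Count how many of the three equal-sized factions rank x above y."""
    return sum(r.index(x) < r.index(y) for r in rankings.values())

# Pairwise majority contests: every option is beaten by some other option.
for x, y in combinations("ABC", 2):
    winner, loser = (x, y) if majority_prefers(x, y) >= 2 else (y, x)
    print(f"{winner} beats {loser}")
# Prints: B beats A, A beats C, C beats B -- an intransitive cycle.

# The opposer's two-step agenda exploits the cycle: first amend the
# proposal from A to B, then vote on the amended proposal against C.
def pairwise_winner(x, y):
    return x if majority_prefers(x, y) >= 2 else y

amended = pairwise_winner("B", "A")      # B wins (Radicals + Conservatives)
outcome = pairwise_winner(amended, "C")  # C wins (Moderates + Conservatives)
print(outcome)                           # C: the status quo, no fine at all

Because the cycle means every option loses some pairwise contest, the order in which the votes are taken determines the result — which is precisely the lever the opposer pulled.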

Of course, this paradox arises from the vagaries of the particular voting system adopted by the Law School. You might think that another voting system would not be vulnerable to this problem. This is true, but only up to a point. There is another famous theorem from voting theory — Arrow’s impossibility theorem — which shows that any voting system satisfying a few minimal fairness conditions will be vulnerable to one or more paradoxes of this sort. The only system that completely avoids them is a dictatorship (where the preferences of one individual dictate the group preference), which of course is not really a voting system, except in some strict logical sense. If you would like to know more about Arrow’s theorem, I’d recommend reading Amartya Sen’s recent explanation of it, or indeed Katz’s simplified presentation of it in his book. I won’t go into it here because it is too complex and, in any event, not strictly necessary. If you understand the paradox that arises in the Law School example, then you have pretty much everything you need to understand Katz’s theory of loopholes.
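
Arrow’s theorem itself doesn’t compress into a few lines, but one of its fairness conditions, the independence of irrelevant alternatives, is easy to watch a familiar system violate. Here is a small sketch (the voter profiles are my own construction, not drawn from Katz or Sen) in which the Borda count flips the social ranking of A and B even though no voter changes their mind about A versus B; they only move C around:

```python
def borda(profile):
    """profile: a list of rankings, best option first. Returns option -> score."""
    scores = {}
    for ranking in profile:
        # In a three-option Borda count, best gets 2 points, worst gets 0.
        for points, option in enumerate(reversed(ranking)):
            scores[option] = scores.get(option, 0) + points
    return scores

before = [["A", "B", "C"]] * 3 + [["B", "C", "A"]] * 2
after  = [["A", "C", "B"]] * 3 + [["B", "C", "A"]] * 2  # only C has moved

print(borda(before))  # {'C': 2, 'B': 7, 'A': 6} -> B beats A
print(borda(after))   # {'B': 4, 'C': 5, 'A': 6} -> A beats B
```

No voter’s A-versus-B ranking changed between the two profiles, yet the group verdict on A versus B reversed: a miniature of the Law School manoeuvre.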


3. How Voting Paradoxes Explain Legal Loopholes
Katz’s theory claims that legal loopholes arise for the same reason that voting paradoxes arise. To accept Katz’s theory you need to accept three propositions. I’ll go through each of them in some detail.

Proposition 1: Multi-criterial decision-making systems are like voting systems.

This is the critical first step in the argument, and it requires some unpacking. Recall the earlier definition of a voting system: it is something that aggregates the preference rankings of individuals into a group preference ranking. How is that like a multi-criterial decision-making system? Well, first, think in more detail about a multi-criterial decision. Suppose you have to decide whether to take up a new job or stick with your old one. How would you make that decision? If you are like me, you would use multiple criteria to help you decide. You would focus on the salary offer, the likely working conditions, the commuting time, the work-life balance made possible by the job, and so on. Each of these criteria can be used to rank the options before you. The salary criterion might rank the new job above the old job; the work-life balance criterion might rank the old job above the new job; and so on. Once you have established the ranking order for each criterion, you have to aggregate the rankings into a single choice. This is directly analogous to what happens in a voting system. The criteria are like voters: each has its own preference ranking. The decision is like the group preference: it is what emerges from the aggregation of the individual preference rankings.
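
To make the analogy concrete, here is a toy sketch (the criteria and rankings are invented for illustration) that treats each criterion as a ‘voter’ and aggregates on a one-criterion-one-vote basis:

```python
# Each criterion 'votes' by ranking the options, best first.
criteria = {
    "salary":            ["new job", "old job"],
    "commute":           ["old job", "new job"],
    "work-life balance": ["old job", "new job"],
}

def pairwise_decision(criteria, x, y):
    """One-criterion-one-vote: pick whichever option more criteria rank higher."""
    votes_for_x = sum(1 for ranking in criteria.values()
                      if ranking.index(x) < ranking.index(y))
    return x if votes_for_x > len(criteria) / 2 else y

print(pairwise_decision(criteria, "new job", "old job"))
# 'old job': two of the three criteria favour it
```

One obvious refinement would be to weight the criteria rather than give each an equal vote, a disanalogy the next paragraph takes up.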

Of course, the analogy isn’t perfect. We often assign different weights to different criteria, whereas in democratic voting systems we usually stick to a one-person-one-vote principle (though weighting is common in voting systems more generally). Furthermore, as Katz notes, decision-making criteria aren’t strategic whereas voters (sometimes) are. In other words, criteria don’t change their preference rankings in order to manipulate the final decision, but voters often do, because they anticipate and pre-empt the voting behaviour of others. Nevertheless, these disanalogies don’t upset the argument that much. Indeed, Arrow himself developed a multi-criterial decision-making version of his impossibility theorem around the same time that he came up with the voting version, so the connection between the two phenomena has long been recognised.

This brings us to the second proposition:

Proposition 2: Legal rules/doctrines are like multi-criterial decision-making systems.

This means that individual legal rules or doctrines often try to aggregate multiple decision-making criteria. Specifically, they try to aggregate different ethical or policy criteria. Consider some of the rules/doctrines from the examples given earlier in this post. The self-defence rule, for example, has a number of elements to it. It entitles you to use lethal force to repel a seemingly lethal attack, but there are usually limitations on its use. The force has to be proportionate/necessary: we don’t want people killing each other willy-nilly. If less force could be used to repel the attack, or if you could avoid the attack completely by retreating, we usually prefer that you do so. At the same time, we recognise that people have a right to defend their own rights: to stand their ground and protect themselves if someone else is brutally attacking them. The self-defence rule has to balance these two ethical principles. It has to allow people the right to defend themselves (and thereby respect the ‘rights principle’) and it has to make sure people don’t abuse this right by applying excessive or disproportionate force (and thereby respect the ‘proportionality principle’). Something similar is true in the case of the Asset Protection example given above. The relevant legal doctrine has to balance the right of creditors to be repaid what they are owed against the desirability of not depriving people of assets that are essential to their well-being. These principles can, on occasion, rank different actions in different ways. The job of the legal rule/doctrine is to help us aggregate the rankings and come up with the correct legal decision.

We now have everything we need to complete Katz’s argument:

Proposition 3: Because legal rules/doctrines are like multi-criterial decision-making systems, and because multi-criterial decision-making systems are like voting systems, legal rules/doctrines are vulnerable to the same kinds of paradoxes and perverse manipulations. These are what we call ‘legal loopholes’.

How do we get from the first two propositions to this? The gist of the argument is simply that multi-criterial decision-making systems are vulnerable to the same kinds of manipulative acts as voting systems. Go back to the earlier example of the Law School vote. We saw there how one resolute procrastinator was able to defy the majority preference for some kind of fine by manipulating the agenda of the vote. He did this by introducing a seemingly irrelevant third alternative (the €1000-a-day fine) into the voting system. We should, of course, be cautious about how we use the term ‘irrelevant’ in this context. The term is adopted from decision theory and does not necessarily track ordinary usage. In one sense, the introduction of the €1000-a-day option is very relevant: some people prefer it to the €100-a-day option. But in another sense it is irrelevant: if the group’s preferences were transitive, you wouldn’t expect its introduction to alter the relative ranking of the €100-a-day fine and the status quo. And yet it does. By manipulating the agenda of the vote, the resolute procrastinator ensures that the ‘irrelevant’ option makes an absolutely critical difference: it flips the relative ranking of those two options, allowing the status quo to win out. Katz argues that this shows that seemingly ‘irrelevant’ alternatives are actually much more relevant than initially suspected.

The question is whether something similar can happen with legal doctrines. Katz argues that it can. Sometimes, introducing a seemingly irrelevant alternative into the picture can alter the legal decision. The self-defence doctrine is a good illustration of this. In some cases of self-defence, you don’t have the opportunity to safely retreat from the lethal attack. In these cases, you basically have two options: either you stay and are killed by your attacker, or you stay and fight back, killing your attacker. According to the law, both options are equally acceptable (i.e. both are legally permissible) from your perspective (what the attacker is doing to you may be legally impermissible, but that is a separate question). Another way of putting it is that in this case the proportionality principle and the rights principle point to the same legal evaluation. In other self-defence cases, you may have, in addition to those two options, the option of reasonable retreat. In these cases, the legal evaluation of the options is very different. Suddenly, the once legally permissible option of staying and killing your attacker might become legally impermissible. Why didn’t you retreat when you had the chance? Katz argues that what is happening in this case is that the principles underlying the self-defence doctrine rank the options differently: the rights principle says that standing your ground is permissible; the proportionality principle does not. We need to break the deadlock between them — to aggregate the different rankings into a legal decision — and so we (or, rather, most jurisdictions) allow the proportionality principle to win the day when the option of reasonable retreat is on the table.
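
A toy model (my own construction, not Katz’s formalisation) captures the structure: the proportionality principle’s verdict on ‘stand and kill’ depends on which other options are on the table, and the deadlock-breaking rule lets it override the rights principle:

```python
def rights_principle(option, available):
    # You have a right to defend yourself: standing your ground is permissible.
    return "permissible"

def proportionality_principle(option, available):
    # Lethal force is permissible only if no safer option is available.
    return "impermissible" if "retreat safely" in available else "permissible"

def legal_evaluation(option, available):
    # When the principles clash, most jurisdictions let proportionality win.
    verdicts = {rights_principle(option, available),
                proportionality_principle(option, available)}
    return "impermissible" if "impermissible" in verdicts else "permissible"

print(legal_evaluation("stand and kill", {"stand and kill", "submit"}))
# permissible: no safe retreat was available
print(legal_evaluation("stand and kill",
                       {"stand and kill", "submit", "retreat safely"}))
# impermissible: the 'irrelevant' third option flips the evaluation
```

On this picture, manipulating the set of available options is formally the same move as the procrastinator’s agenda manipulation: it changes which principle gets to decide.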



The claim is that this is directly analogous to what happens in a voting system. Someone who wants to use the law to suit their purposes can manipulate contexts so that certain options are on the table (or not), and thus take advantage of the different rankings assigned to those options by the different underlying principles. That is what Samson is doing in the contrived self-defence case: by confronting his attackers in a park late at night, he is taking reasonable retreat off the table. That is what James is doing in the asset protection case: by purchasing the exempt assets, he is making the option of seizing and selling off his assets less reasonable. And that is what Ivan is doing in the political asylum case: by making his videos and speaking out against the regime in his home country, he is taking the option of returning to his home country and living an unmolested life off the table.

Clever lawyers can help individuals manipulate the agenda of legal decision-making in similar ways, by advising them on how to close off or open up options, or by providing evidence to support claims that certain options were or were not available to them. What’s more, given Arrow’s insights into voting, it would seem that loophole exploitation of this sort is inevitable whenever the law tries to aggregate different ethical/policy criteria. You can never completely eliminate loopholes from the law; they are inherent to the logic of legal decision-making.


4. Conclusion
That brings us to the end of this post. To briefly recap: loopholes are common and persistent phenomena in the law, and the job of the lawyer is often conceived in terms of exploiting loopholes on behalf of clients. I’ve been outlining Leo Katz’s theory of legal loopholes. This theory argues that legal loopholes are directly analogous to voting paradoxes. Just as voting paradoxes arise when we try to aggregate individual preference rankings into a group preference ranking, so too do legal loopholes arise when we try to aggregate the rankings assigned by different underlying ethical or policy principles into a single legal evaluation.

I like Katz’s theory because it draws an interesting connection between two seemingly disparate areas of social life (voting and legal decision-making). Intertheoretic unification of this sort is usually thought to be a virtue. That said, I am also drawn to it because it is quite elaborate and theoretically sophisticated. But neither of these things are necessarily virtues. One could argue that Katz’s theory is too clever by half and that a much simpler explanation of loopholes is possible. Also, I certainly haven’t tested to see whether it explains every putative case of a legal loophole. Indeed, I would worry that in the end it may not explain loopholes so much as redefine them (maybe in part because loopholes are not particularly well-defined in the first place).

Alas, I’ll have to leave those issues unresolved. I offer Katz’s theory for your consideration and leave you to play around with the details. If you would like to learn more, I would recommend reading Katz’s full explanation of his theory. It fleshes out the analogy between legal decision-making and voting in far more detail than I provided here.




Monday, June 4, 2018

Episode #39 - Re-engineering Humanity with Frischmann and Selinger


In this episode I talk to Brett Frischmann and Evan Selinger about their book Re-engineering Humanity (Cambridge University Press, 2018). Brett and Evan are both former guests on the podcast. Brett is a Professor of Law, Business and Economics at Villanova University and Evan is a Professor of Philosophy at the Rochester Institute of Technology. Their book looks at how modern techno-social engineering is affecting humanity. We have a wide-ranging conversation about the main arguments and ideas from the book. The book features lots of interesting thought experiments and provocative claims, and I recommend checking it out. A highlight of the conversation for me was our discussion of the 'Free Will Wager' and how it pertains to debates about technology and social engineering.

You can listen to the episode below or download it here. You can also subscribe on Stitcher and iTunes (the RSS feed is here).


Show Notes

  • 0:00 - Introduction
  • 1:33 - What is techno-social engineering?
  • 7:55 - Is techno-social engineering turning us into simple machines?
  • 14:11 - Digital contracting as an example of techno-social engineering
  • 22:17 - The three important ingredients of modern techno-social engineering
  • 29:17 - The Digital Tragedy of the Commons
  • 34:09 - Must we wait for a Leviathan to save us?
  • 44:03 - The Free Will Wager
  • 55:00 - The problem of Engineered Determinism
  • 1:00:03 - What does it mean to be self-determined?
  • 1:12:03 - Solving the problem? The freedom to be off

Relevant Links