Wednesday, November 4, 2015

Understanding the Threat of Algocracy




On 2nd November, I gave a talk entitled "The Threat of Algocracy: Reality, Resistance and Accommodation" to the Programmable City Project at Maynooth University. You can watch the video of my presentation (minus the Q&A) above.

The talk defended one central thesis: that the increase in algorithm-based decision-making poses a threat to the legitimacy of our political and legal systems. The threat in question is distinctive (owing to its technological basis) and difficult both to resist and to accommodate.

In order to defend this thesis, I tried to ask and answer four questions:

1. What is 'algocracy'? Broadly speaking, to me 'algocracy' is the phenomenon whereby algorithms take over public decision-making systems. More precisely, the term 'algocracy' can be used to describe decision-making systems in which computer-coded algorithms structure and constrain the way in which human beings interact with those systems (see, generally, Aneesh 2009). There are many different possible algocratic systems. I focus on those made possible by the rise of big data, the internet of things, surveillance, data-mining and predictive analytics.
2. What is the 'threat of algocracy'? Public decision-making processes ought to be legitimate. Most people take this to mean that the processes should satisfy a number of proceduralist and instrumentalist conditions. In other words, the processes should be fair and transparent whilst at the same time achieving good outcomes. The problem with algocratic systems is that they tend to favour good outcomes over transparency and fairness. This is the threat they pose to political legitimacy.
3. Can we (or should we) resist the threat? I argue that it is difficult to resist the threat of algocracy (i.e. to dismantle or block the creation of algocratic systems) due to the ubiquity of the technology and the strength of the political and economic forces favouring the creation of algocratic systems. I also argue that, in many cases, it may not be morally desirable to dismantle or block the creation of such systems.
4. Can we accommodate the threat? I argue that it is difficult to accommodate the threat of algocracy (i.e. to allow for meaningful participation in and comprehension of these systems). I examine three possible accommodationist solutions and find them lacking in several respects.

The talk provides more detail on these four questions. I find it difficult to watch and listen to myself give presentations of this sort, but other people may find it more tolerable. And if you can't get enough of this topic, I did an interview about it on the Review the Future podcast last year, and I also wrote a short post describing the nature of the threat a couple of years back.
