|IBM's Watson (Image from Clockready via Wikipedia)|
In the future, every decision that mankind makes is going to be informed by a cognitive system like Watson…and our lives will be better for it.
(Ginni Rometty commenting on IBM’s Watson)
I’ve written a few posts now about the social and ethical implications of algorithmic governance (algocracy). Today, I want to take a slightly more general perspective on the same topic. To be precise, I want to do two things. First, I want to discuss the process of algorithm-construction and the two translation problems that are inherent to this process. Second, I want to consider the philosophical importance of this process.
In writing about these two things, I’ll be drawing heavily from the work done by Rob Kitchin, and in particular from the ideas set out in his paper ‘Thinking critically about and researching algorithms’. Kitchin is currently in charge of The Programmable City research project at Maynooth University in Ireland. This project looks closely at the role of algorithms in the design and function of ‘smart’ cities. The paper in question explains why it is important to think about algorithms and how we might go about researching them. I’ll be ignoring the latter topic in this post, though I may come back to it at a later stage.
1. Algorithm-Construction and the Two Translation Problems
The term ‘algorithm’ can have an unnecessarily mystifying character. If you tell someone that a decision affecting them was made ‘by an algorithm’, or if, like me, you talk about the rise of ‘algocracy’, there is a danger that you present an overly alarmist and mysterious picture. The reality is that algorithms themselves are relatively benign and easy to understand (at least conceptually). It is really only the systems through which they are created and implemented that give rise to problems.
An algorithm can be defined in the following manner:
Algorithm: A set of specific, step-by-step instructions for taking an input and converting it into an output.
So defined, algorithms are things that we use every day to perform a variety of tasks. We don’t run these algorithms on computers; we run them on our brains. A simple example might be the sorting algorithm you use for stacking books onto the shelves in your home. The inputs in this case are the books (and more particularly the book titles and authors). The output is the ordered sequence of books that ends up on your shelves. The algorithm is the set of rules you use to end up with that sequence. If you’re like me, this algorithm has two simple steps: (i) first you group books according to genre or subject matter; and (ii) you then sequence books within those genres or subject areas in alphabetical order (following the author’s surname). You then stack the shelves according to the sequence.
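The two-step shelving algorithm can be sketched in a few lines of code. This is just a minimal illustration, assuming each book is represented as a (genre, author surname, title) tuple; the function name and data are hypothetical.

```python
# A minimal sketch of the two-step shelving algorithm described above.
# Each book is assumed to be a (genre, author_surname, title) tuple.

def shelve_books(books):
    """Return books ordered first by genre, then alphabetically by author surname."""
    # Sorting on a (genre, surname) key performs both steps at once:
    # books are grouped by genre, and ordered by surname within each genre.
    return sorted(books, key=lambda book: (book[0], book[1]))

books = [
    ("Philosophy", "Searle", "The Construction of Social Reality"),
    ("Fiction", "Austen", "Pride and Prejudice"),
    ("Philosophy", "Floridi", "The Philosophy of Information"),
    ("Fiction", "Borges", "Ficciones"),
]

for genre, surname, title in shelve_books(books):
    print(genre, surname, title)
```

Even here a small judgment call lurks: ties, multi-author books, and which field counts as the ‘genre’ are all left to the person writing the rules.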
But that’s just what an algorithm is in the abstract. In the modern digital and information age, algorithms have a very particular character. They lie at the heart of the digital network created by the internet of things, and the associated revolutions in AI and robotics. Algorithms are used to collect and process information from surveillance equipment, to organise that information and use it to form recommendations and action plans, to implement those action plans, and to learn from this process.
Every day we are exposed to the ways in which websites use algorithms to perform searches, personalise advertising, match us with potential romantic partners, and recommend a variety of products and services. We are perhaps less exposed to the ways in which algorithms are (and can be) used to trade stocks, identify terrorist suspects, assist in medical diagnostics, match organ donors to potential donees, and facilitate public school admissions. The multiplication of such uses is what gives rise to the phenomenon of ‘algocracy’, i.e. rule by algorithms.
All these algorithms are instantiated in computer code. As such, the contemporary reality of algorithm construction gives rise to two distinct translation problems:
First Translation Problem: How do you convert a given task into a human-language series of defined steps?
Second Translation Problem: How do you convert that human-language series of defined steps into code?
We use algorithms in particular domains in order to perform particular tasks. To do this effectively we need to break those tasks down into a logical sequence of steps. That’s what gives rise to the first translation problem. But then to implement the algorithm on some computerised or automated system we need to translate the human-language series of defined steps into code. That’s what gives rise to the second translation problem. I call these ‘problems’ because in many cases there is no simple or obvious way in which to translate from one language to the next. Algorithm-designers need to exercise judgment, and those judgments can have important implications.
Kitchin uses a nice example to illustrate the sorts of issues that arise. He discusses an algorithm which he had a role in designing. The algorithm was supposed to calculate the number of ‘ghost estates’ in Ireland. Ghost estates are a phenomenon that arose in the aftermath of the Irish property bubble. When developers went bankrupt, a number of housing developments (‘estates’) were left unfinished and under-occupied. For example, a developer might have planned to build 50 houses in a particular estate, but could have run into trouble after only fully completing 25 units, and selling 10. That would result in a so-called ghost estate.
But this is where things get tricky for the algorithm designer. Given a national property database with details on the ownership and construction status of all housing developments, you could construct an algorithm that sorts through the database and calculates the number of ghost estates. But what rules should the algorithm use? Is less than 50% occupancy and completion required for a ghost estate? Or is less than 75% sufficient? Which coding language do you want to use to implement the algorithm? Do you want to add bells and whistles to the programme, e.g. by combining it with another set of algorithms to plot the locations of these ghost estates on a digital map? Answering these questions requires some discernment and judgment. Poorly thought-out answers can give rise to an array of problems.
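The judgment calls here can be made concrete with a short sketch. Everything below is hypothetical, assuming a simple database of estates with planned, completed and occupied unit counts; note how the contested threshold becomes a single parameter whose value the designer must simply choose.

```python
# A hypothetical sketch of a ghost-estate counting algorithm. The data
# and threshold are illustrative assumptions, not the actual algorithm
# Kitchin's team built. The 'threshold' parameter embodies exactly the
# kind of judgment call discussed above.

def count_ghost_estates(estates, threshold=0.5):
    """Count estates whose completion or occupancy rate falls below the threshold."""
    count = 0
    for estate in estates:
        completion = estate["completed"] / estate["planned"]
        occupancy = estate["occupied"] / estate["planned"]
        # An estate counts as 'ghost' if either measure falls below the cutoff.
        if completion < threshold or occupancy < threshold:
            count += 1
    return count

estates = [
    {"planned": 50, "completed": 25, "occupied": 10},  # the example from the text
    {"planned": 40, "completed": 40, "occupied": 38},  # fully built, nearly full
]
print(count_ghost_estates(estates, threshold=0.5))
```

Change the threshold from 0.5 to 0.75 (or decide that both conditions, not either, must hold) and the national count of ghost estates changes with it, which is precisely why such design choices matter.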
2. The Philosophical Importance of Algorithms
Once we appreciate the increasing ubiquity of algorithms, and once we understand the two translation problems, the need to think critically about algorithms becomes much more apparent. If algorithms are going to be the lifeblood of modern technological infrastructures, if those infrastructures are going to shape and influence more and more aspects of our lives, and if the discernment and judgment of algorithm-designers is key to how they do this, then it is important that we make sure we understand how that discernment and judgment operates.
More generally than this, if algorithms are going to sit at the heart of contemporary life, it seems like they should be of interest to philosophers. Philosophy is divided into three main branches of inquiry: (i) epistemology (how do we know?); (ii) ontology (what exists?); and (iii) ethics/morality (what ought we do?). The growth of algorithmic governance would seem to have important repercussions for all three branches of inquiry. I’ll briefly illustrate some of those repercussions here though it should be noted that what I am about to say is by no means exhaustive (Note: Floridi discusses similar ideas under his concept of information philosophy).
Looking first to epistemology, it is pretty clear that algorithms have an important impact on how we acquire knowledge and on what can be known. We witness this in our everyday lives. The internet and the attendant growth in data-acquisition has resulted in the compilation of vast databases of information. This allows us to collect more potential sources of knowledge. But it is impossible for humans to process and sort through those databases without algorithmic assistance. Google’s PageRank algorithm and Facebook’s EdgeRank algorithm effectively determine a good proportion of the information with which we are presented on a day-to-day basis. In addition to this, algorithms are now pervasive in scientific inquiry and can be used to generate new forms of knowledge. A good example of this is the C-Path cancer prognosis algorithm. This is a machine-learning algorithm that was used to discover new ways in which to better assess the progression of certain forms of cancer. IBM hopes that its AI system Watson will provide similar assistance to medical practitioners. And if we believe Ginni Rometty (from the quote at the top of this post), the use of such systems will effectively become the norm. Algorithms will shape what can be known and will generate new forms of knowledge.
Turning to ontology, it might be a little trickier to see how algorithms can actually change our understanding of what kinds of stuff exists in the world, but there are some possibilities. I certainly don’t believe that algorithms have an effect on the foundational questions of ontology (e.g. whether reality is purely physical or purely mental), though they may change how we think about those questions. But I do think that algorithms can have a pretty profound effect on social reality. In particular, I think that algorithms can reshape social structures and create new forms of social object. Two examples can be used to illustrate this. The first example draws from Rob Kitchin’s own work on the Programmable City. He argues that the growth in so-called ‘smart’ cities gives rise to a translation-transduction cycle. On the one hand, various facets of city life are translated into software so that data can be collected and analysed. On the other hand, this new information then transduces the social reality. That is to say, it reshapes and reorganises the social landscape. For example, traffic modeling software might collect and organise data from the real world and then planners will use that data to reshape traffic flows around a city.
The second example of ontological impact is in the slightly more esoteric field of social ontology. As Searle points out in his work on this topic, many facets of social life have a subjectivist ontology. Objects and institutions are fashioned into existence out of our collective imagination. Thus, for instance, the state of being ‘married’ is a product of a subjectivist ontology. We collectively believe in and ascribe that status to particular individuals. The classic example of a subjectivist ontology in action is money. Modern fiat currencies have no intrinsic value: they only have value in virtue of the collective system of belief and trust. But those collective systems of belief and trust often work best when the underlying physical reality of our currency systems is hard to corrupt. As I noted before, the algorithmic systems used by cryptocurrencies like Bitcoin might provide the ideal basis for a system of collective belief and trust. Thus, algorithmic systems can be used to add to or alter our social ontology.
Finally, if we look to ethics and morality we see the most obvious philosophical impacts of algorithms. I have discussed examples on many previous occasions. Algorithmic systems are sometimes presented to people as being apolitical, technocratic and value-free. They are anything but. Because judgment and discernment must be exercised in translating tasks into algorithms, there is much opportunity for values to affect how they function. There are both positive and negative aspects to this. If well-designed, algorithms can be used to solve important moral problems in a fair and efficient manner. I haven’t studied the example in depth, but it seems like the matching algorithms used to facilitate kidney exchanges might be a good illustration of this. I have also noted, on a previous occasion, Tal Zarsky’s argument that well-designed algorithms could be used to eliminate implicit bias from social decision-making. Nevertheless, one must also be aware that implicit biases can feed into the design of algorithmic systems, and that once those systems are up and running, they may have unanticipated outcomes. A good recent example of this is the controversy created by Google’s photo app, which used a facial recognition algorithm to label photographs of some African-American people as ‘gorillas’.
Anyway, that’s all for this post. Hopefully the challenges of algorithm construction and the philosophical importance of algorithmic systems are now a little clearer.