Tuesday, December 18, 2018

Algorithmic Governance in Transport: Some Thoughts

I recently participated in a workshop on ‘Algorithmic Governance in Transport’ at the OECD in Paris. The workshop was organised by the International Transport Forum (ITF), which is a sub-unit of the OECD that focuses on transport policy and regulation. The workshop featured a wide range of participants, mainly drawn from industry, government and public policy, with a handful of academics like myself thrown in for good measure. The purpose of the workshop was to consider how algorithmic governance technologies might be regulated in, or used to regulate, the transport sector.

The meeting was conducted under the Chatham House Rule, so I am honour-bound not to report on what particular people said at it. But it was an interesting event and I thought it might be useful to put some shape on my own thoughts on the topic in its aftermath. That’s what this post tries to do. I should confess at the outset that, although I have written and thought a lot about algorithmic governance over the past few years, I haven’t really thought that much about its applications to the transport sector. I’ve been more interested in its role in bureaucratic decision-making, finance and criminal justice. Nevertheless, one thing that struck me at the workshop was how similar the issues are across these different sectors. So I think there are some important general lessons about algorithmic governance that can be gleaned from the discussion of transport below (and, since I’m not an expert on transport, there is doubtless much I leave out that should be included).

I’ll divide the remainder of this post into three main sections. First, I’ll talk about the various applications of algorithmic governance technologies in transport. Second, I’ll sketch out a general framework for thinking about the regulation of algorithmic governance technologies in transport. And third, I’ll highlight my key ‘take aways’ from the workshop. I want to be clear that everything I am about to say is my own take on things and does not represent the views of anyone else at the meeting nor, of course, those of the ITF and the OECD.

1. Algorithmic Governance in Transport
I’ve been defining ‘algorithmic governance’ for years. I still don’t know if I am getting it right. I always point out that I use the term in a restrictive sense to refer to a kind of control system that is made possible by modern digital technologies. I do this because the word ‘algorithmic’ can be applied quite generally to any rule-based decision-making process. After all, an algorithm is just a set of instructions for taking an input and producing a defined output. If you applied the term generally, then pretty much any bureaucratic, quasi-legal management system could count as an algorithmic governance system. But this would then obscure some of the important differences between these systems and modern, computer-based systems. It’s those differences that interest me.

This is something I wrote about at greater length in my paper ‘The Threat of Algocracy’, where I followed the work of the sociologist A. Aneesh in distinguishing bureaucratic governance systems from algorithmic ones. To cut to the chase, the way I see it, an algorithmic governance system is any technological system in which computer-coded algorithms are used to collect data, analyse data, and make decisions on the basis of that data. Algorithmic governance systems are usually intended to control, push, nudge, incentivise or manipulate the behaviour of human beings or, indeed, other machines or machine components (a qualification that becomes important in the discussion below). So, to put it more pithily, algorithmic governance is, to me, a kind of technology, not a management philosophy or ideology (though it may, of course, be supported by some such belief system).

A classic example of an algorithmic governance system in action — and one that I have used many times before — is the chaotic storage algorithm that Amazon started using to stock warehouses a few years back. This system collects data on available shelf space within a warehouse and allocates stock to that shelf-space on a somewhat random/chaotic basis (i.e. not following a traditional rule such as grouping items according to type or in alphabetical order but rather allocating on the basis of available shelf-space). When human workers need to fill an order, the algorithm plots a route through the warehouse for them. As I understand it, the system has changed in the past few years as Amazon has stepped up the use of robots and automation within its warehouses. Now it is the robots that store and locate shelving units and the humans that do the more dextrous work of picking the stock from the shelves to fill customer orders. If anything, this has resulted in an even more algorithmised system taking root within the warehouses.
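The chaotic-storage idea can be sketched in a few lines of code. To be clear, this is a toy illustration under my own assumptions, not Amazon's actual system: the class, the shelf IDs and the routing rule are all hypothetical stand-ins for the real thing.

```python
import random

class ChaoticWarehouse:
    """Toy sketch of chaotic storage: an item goes wherever there is free
    shelf space, and the database (not the shelf layout) remembers where."""

    def __init__(self, shelf_ids):
        self.free = set(shelf_ids)   # shelves with space available
        self.locations = {}          # item -> shelf it was placed on

    def store(self, item):
        # Allocate by availability, not by category or alphabetical order.
        shelf = random.choice(sorted(self.free))
        self.free.discard(shelf)
        self.locations[item] = shelf
        return shelf

    def pick_route(self, order):
        # "Plot a route": look up each ordered item's shelf and visit the
        # shelves in sorted order (a stand-in for real route planning).
        return sorted(self.locations[item] for item in order)

wh = ChaoticWarehouse(["A1", "A2", "B1", "B2"])
for item in ["kettle", "novel", "socks"]:
    wh.store(item)
print(wh.pick_route(["socks", "kettle"]))  # two shelf IDs, in visiting order
```

The key design point the sketch captures is that the organising logic lives entirely in the data, not in the physical arrangement: without the `locations` mapping, the warehouse is unintelligible to a human picker.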

Algorithmic governance systems are not all created equal. They vary along a number of dimensions. This is something I have discussed in exhaustive, and probably excessive, detail in my two papers on the ‘logical space’ of algocracy (and I have another one due out next year). The gist of both papers is that algorithmic governance systems vary (a) in terms of the functions they perform (collection, analysis, decision-making): some systems perform just one of these functions, some perform all three; and (b) in terms of how they engage with human participants: some systems require input from humans, others usurp or replace human decision-makers. These variations can be important when it comes to understanding the ethical and societal impact of algorithmic governance, but acknowledging them adds a lot of complexity to the discussion.

So what about algorithmic governance in transport? How does it arise and what can it be used to do? I have to talk in generalities here. As is clear from my definition, algorithmic governance systems can be used to collect and process data and make decisions on the basis of that data. These are all functions that are useful in the management of transport systems. Transport is concerned with getting stuff (where ‘stuff’ includes people, animals, goods etc) from A to B in the safest and most efficient manner possible. It uses a variety of means for doing this, including bikes, scooters, cars, trucks, trains, planes and more. There is a lot of transport-related data that is now collected, including information about driver behaviour, machine performance, pedestrians, road surfaces and so on. The assumption is that this data could be analysed and used to create a safer, more efficient and, possibly, fairer transportation system. Some of the obvious uses of algorithmic governance in transport would include:

  • Analysis and communication of transport-related information to key decision-makers (drivers, pedestrians, traffic planners)

  • Control of access to transport systems and the public spaces these systems utilise (e.g. giving some people preferential access to modes of public transport or parking spaces; or creating a platform linking private cabs to customers à la Uber and Lyft)

  • Control of the actual transport system (e.g. automated metro lines, self-driving cars, autopilot systems)

  • Nudging/regulating the transport-related behaviours of humans (e.g. the use of speed-tracking signs to encourage drivers to slow down)

There are many more specific applications that we could consider and imagine. These four general applications will suffice, however, for understanding why people might care about the ‘regulation’ of algorithmic governance in transport. That’s the topic I turn to next.

2. A Framework for Thinking about Regulation
The focus of the OECD workshop was on coming up with guidelines for the regulation of algorithmic governance systems in transport. The organisers divided the discussion into three main subject themes: (i) the creation of machine-readable and implementable regulations; (ii) regulation by algorithms; and (iii) regulation of algorithms. I found this breakdown initially confusing. I wasn’t sure if the distinctions could be sustained. By the end of the workshop I decided that it did have some utility, but that some clarification was needed, at least if I was to make sense of it all.

The problem for me was that the word ‘regulation’ is somewhat ambiguous in meaning. Sometimes we use the word to refer, generally, to any attempt to control and manage behaviour so that it conforms with certain preferred standards. This general conception of regulation is agnostic as to the means used to ensure compliance with the preferred standards of behaviour. Other times, however, we use the word to refer to a particular kind of control and management of behaviour, specifically the use of linguistically encoded rules and regulations (‘You must do X’, ‘You must refrain from Y’) that communicate preferred standards and are enforced through some system of monitoring and punishment.

This ambiguity of meaning creates a problem when it comes to algorithmic governance in transport because, given my previous definition and understanding of an algorithmic governance system, it is possible for such systems to be both subjects of regulation, as well as means by which we implement regulation. In other words, you could have a regulatory code (a set of rules and standards) that you think all algorithmic governance systems should abide by, and you could, at the same time, use the algorithmic governance system to regulate the behaviour of both the people (and the machines/physical spaces) at the heart of the transport system. Understanding and appreciating the dynamics of this ‘two-faced’ nature of algorithmic governance systems in transport is, I think, important when it comes to thinking about the regulatory questions. This is true in all other domains in which algorithmic governance systems are used as well.

So the bottom line is that we need some framework for clarifying how to think about algorithmic governance and regulation, given the ‘two-faced’ nature of algorithmic governance systems in relation to any regulatory system. Now lots of people have tried to develop frameworks of this sort before (Karen Yeung has a particularly good one, for example), and I don’t necessarily want to reinvent the wheel, but as a result of what I heard at the workshop, I sketched out one that I thought was useful. Here’s my attempt to explain it.

The framework has three parts to it. It proposes that when you think about regulation and the algorithmic governance of transport (or anything else) you need to think about three separate things: (i) the nature of the regulatory system that sits ‘above’ or ‘around’ the algorithmic governance system; (ii) the nature of the algorithmic governance system itself and the functions it performs in the transport sector; and (iii) the impact/effects of that algorithmic governance system. This is illustrated in the diagram below.

Let’s go through each element of this framework, starting with the nature of the regulatory system that sits above or around the algorithmic governance system. You’ll have to forgive the metaphorical language I’m using to describe this aspect of the framework. The basic idea is that any algorithmic system that is used in transport management and control will sit within some broader regulatory context. There is no aspect of modern life that is not subject to at least some regulation. There will be some set of linguistically encoded rules that participants within the transport system will be expected to abide by. This ruleset will cover, to a greater or lesser extent, the behaviour of all the major participants in the transport system, including service providers, commuters, sellers, local governments and so on. Any algorithmic governance system introduced into the transport sector will, at a minimum, raise questions about the ruleset and the regulatory context. Its introduction will be a ‘disruptive moment’ and may lead to ambiguities or uncertainties about the applicability of that ruleset to the new technology and the powers of those charged with monitoring and enforcing it. This is true for all new technologies. To figure out how disruptive it really is, you’ll have to ask some important questions: what does the broader regulatory context look like? Is there a single regulator responsible for all aspects of transport (unlikely) or are there lots of regulators? Do they have sufficient expertise/personnel/resources to address the new technology? Can they effectively enforce their rulesets? Are the rules they use sufficiently detailed/robust to apply to the new technology? Should they apply to the new technology? In short, is the broader regulatory context ‘fit for purpose’?

This brings us to the second aspect of the framework: the algorithmic governance system itself. This is the technological management and control system that is intervening in the management of transport. Getting clear about the nature of that system, and the functions it can perform, is crucial when it comes to figuring out its regulatory impact. What exactly is it doing and what is it capable of doing? Is it publicly or privately owned? Could we use the system to better implement and monitor the existing regulatory ruleset? Does it render parts of that regulatory ruleset moot? Does it sit in an unwelcome regulatory ‘gap’ that means it is exempt from regulations that we really think it ought not to be exempt from? The last question is particularly important because many new technologies can be used to perform a kind of ‘regulatory arbitrage’, i.e. to enable companies to make money by avoiding a regulatory compliance burden faced by some rival. Arguably, this is exactly what made ride-sharing services like Uber and Lyft successful at the outset: they sat outside existing regulations for the taxi trade and so could operate at a reduced cost. They had other advantages too, of course, and as they have become more successful regulators have swept in to fill the regulatory gaps. This is the natural progression, for better or worse. It is important, however, that when thinking about this second aspect of the framework we don’t lose sight of the fact that algorithmic governance systems are not just things that need to be regulated or that need to comply with regulations, but are also things that can regulate behaviour in more or less successful ways. A well-designed, smart transport information system, for example, can make commuting much more pleasant and effective from the commuter’s perspective. We may not want an existing and inflexible set of regulations to hinder that.

And this, naturally enough, brings us to the third aspect of the framework: the effects of the algorithmic governance system on the real world. It is these effects that play the crucial role in determining how the system should be regulated and how it can be used to regulate. Algorithmic governance systems are often sold to us on the basis that they can make the management and control of technology or human behaviour more efficient, more effective and generally more beneficial. Fitter, happier, more productive and all that jazz. These putative benefits may be quite real and may persuade us to adopt the system (or accept it, if it has already been adopted). But algorithmic governance systems usually have certain risks associated with them and it is important that these are addressed if the system is going to be adopted. These risks are somewhat generic, but manifest in slightly different ways in different contexts. There are six such risks that I thought about when considering the impact in transport:

Security/safety: Is the system actually safe and secure? Is it vulnerable to hacking or malicious interference? Does it increase the risk of accidents? This strikes me as being a big issue in the transport sector. People won’t want to use automated transport management systems if they are accident-prone, buggy, or open to malicious hacking.

Data protection/privacy: Does the system respect my rights to privacy and data protection? Obviously, this is a long-standing issue in debates about algorithmic governance since such systems are usually reliant on data collection and surveillance. In Europe, at any rate, there is a reasonably comprehensive set of data protection regulations that any algorithmic governance system will need to abide by in its operation and management.

Responsibility/liability: If something does go wrong with the system, who foots the bill? Who is to blame if a self-driving car gets into an accident or if a traffic monitoring system gets hacked and the data is leaked to third parties? Some people worry about the possibility of responsibility ‘gaps’ arising in the case of complex and autonomous algorithmic governance systems. Clever lawyers may be able to craft arguments that allow the creators of such systems to avoid liability. Do we need to address this problem? Should we adopt a strict liability approach or a social insurance approach? There are many mooted solutions to this problem, some or all of which could apply to the transport sector.

Transparency/explainability: The machine learning techniques at the heart of modern algorithmic governance systems are notoriously opaque and difficult to explain to end users. This could be a major problem in transport if those systems deny or impede people’s access to transport (and thereby affect their right to freedom of movement) or otherwise interfere with their commuting behaviour in a way that affects their rights or legal position. Some such interferences might be justified/legitimate, but to prove this they will need to be transparent and explainable to those affected. There are various proposals out there to address this problem, some rely on technical solutions to the interpretability problem in machine learning, others on more robust legal regulations that insist upon greater transparency and explainability (some people also argue that the transparency/explainability problem is overstated since human regulatory systems are often opaque and unexplainable).

Fairness/Bias: If the system does affect someone’s access to transport, does it do so in a ‘fair’ or ‘equal’ manner? There are concerns that algorithmic governance systems do little more than reinforce existing stereotypes and biases and so fail to be fair. But what ‘fairness’ actually means could vary depending on who you ask. Some people think fairness means correcting for historical injustices in access to transport (which seems legitimate since some transport systems have been designed to deliberately exclude or include certain groups of people based on race or social class — Robert Moses’s design of the Long Island parkways being the most well-known example). But others think it means not using protected characteristics (race, gender etc.) in determining how a system works. For those who are interested, I had a long podcast discussion with Reuben Binns about what fairness means or could be taken to mean in such debates. We don’t offer any definitive conclusions, but we at least explore the range of possibilities.
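One common way of making a fairness notion of this kind precise is ‘demographic parity’: checking whether a system grants access at similar rates across groups. Here is a minimal sketch of that check under my own assumptions; the function names and the toy access log are hypothetical, not drawn from any real transport system.

```python
def access_rates(decisions):
    """Grant rate per group, from a log of (group, granted) pairs."""
    totals, granted = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        granted[group] = granted.get(group, 0) + (1 if ok else 0)
    return {g: granted[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Demographic-parity gap: the largest difference in grant rates
    between any two groups. A gap of 0.0 means all groups are granted
    access at the same rate."""
    rates = access_rates(decisions)
    return max(rates.values()) - min(rates.values())

# A toy access-control log: group label plus whether access was granted.
log = [("a", True), ("a", True), ("a", False),
       ("b", True), ("b", False), ("b", False)]
print(access_rates(log))  # group 'a' granted 2/3 of the time, 'b' 1/3
print(parity_gap(log))    # a gap of about 0.33
```

Note that this metric embodies only one of the rival conceptions of fairness mentioned above; a system could score perfectly on it while still failing to correct for historical injustice, which is exactly the kind of disagreement the podcast discussion explores.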

Welfare/Wellbeing: How does the system impact on the wellbeing of those who use the transport system (where ‘wellbeing’ is defined in such a way that it excludes the other concerns mentioned above)? Obviously, the hope is that it makes things better, but even if it does there are downstream effects of algorithmic governance systems that need to be factored in. Perhaps the main one in transport is whether the systems ultimately displace lots of human workers from employment in the transport sector. Will these workers be compensated in any way? Should people who make use of these systems actually help to retrain the displaced workers? What about the effects of transport automation? Will it make people lazier and less vigilant when they drive? Will it undermine their moral agency and accentuate their moral patiency (as I have argued before)?

There may well be other impacts that need to be considered, but I think these six, at a minimum, should shape how we think about the regulation of (and by) algorithmic governance systems in transport. It may be that we need to conduct an ‘algorithmic impact assessment’ prior to the introduction of such a system, as the members of the AI Now institute have been arguing, and as the State of New York seems to accept.
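As a rough illustration of how the six risk dimensions above could structure such an assessment, here is a hypothetical checklist sketch. The structure is my own invention for illustration, not AI Now’s actual assessment template.

```python
# The six risk dimensions discussed above, as a pre-deployment checklist.
RISK_DIMENSIONS = [
    "security/safety",
    "data protection/privacy",
    "responsibility/liability",
    "transparency/explainability",
    "fairness/bias",
    "welfare/wellbeing",
]

def impact_assessment(answers):
    """Return the dimensions that have not yet been assessed and addressed.

    answers: dict mapping a dimension to True once it has been dealt with;
    anything missing or False is flagged for further work.
    """
    return [d for d in RISK_DIMENSIONS if not answers.get(d, False)]

# A draft assessment for a hypothetical transport system: only the
# security work is done, and the fairness audit came back negative.
draft = {"security/safety": True, "fairness/bias": False}
print(impact_assessment(draft))  # the five dimensions still needing work
```

The point of forcing every dimension through the checklist is precisely to avoid the fixation on a single dimension of value that I discuss below.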

3. Conclusion and Key Take Aways
I won’t conclude by summarising everything I have said to this point. Instead, I’ll conclude with some of my own key takeaways from the conference. These takeaways all connect back to the framework outlined above, and in some sense serve as additional ‘thinking’ principles that can be applied when using that framework. There are really only two of them that stand out in my memory.

First, avoid the compliance trap. I’ve effectively said this already but it bears repeating. I think there is a danger of being too rigid in thinking about the relationship between algorithmic governance systems and the broader regulatory system. In particular, there is a danger of thinking that the algorithmic governance system must always comply with or fit into the pre-existing regulatory system — that all regulatory gaps must be plugged. That may be appropriate on some occasions but not all. There is no guarantee that the pre-existing regulatory system is optimal, and we must be willing to entertain the new modalities or styles of regulation that are made possible by the algorithmic governance system. Indeed, the possibilities afforded to us by the technology might be used to reevaluate or reconstruct the pre-existing regulatory system. Several examples of this came up at the conference. One that stuck with me was a system for the smart allocation and redesignation of public parking spaces that is used in Amsterdam. Instead of using fixed signposts to indicate the times when a parking space could be used, they use a hologram that projects the relevant information directly onto the space and that can be reprogrammed or readjusted on a dynamic basis. It’s a simple example, but it illustrates the point: if the pre-existing regulatory system were applied too rigidly, or if it didn’t allow for this possibility, the advantage of a flexible, dynamically updated parking allocation system would be lost.
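The regulatory shift in the parking example is easy to see in code: the rules become reprogrammable data rather than paint on a signpost. This is a toy sketch under my own assumptions; the class, rule format and example times are all hypothetical, not a description of the Amsterdam system.

```python
from datetime import time

class DynamicParkingSpace:
    """Toy sketch: the parking rules are data projected onto the space,
    so changing the 'sign' is just replacing the rule set."""

    def __init__(self):
        self.rules = []   # list of (start, end, permitted_use) tuples

    def set_rules(self, rules):
        # Reprogramming the projected sign, e.g. from a city control room.
        self.rules = list(rules)

    def permitted_use(self, t):
        # First matching time window wins; default to no parking at all.
        for start, end, use in self.rules:
            if start <= t < end:
                return use
        return "no parking"

space = DynamicParkingSpace()
space.set_rules([
    (time(8), time(18), "loading only"),
    (time(18), time(23), "residents"),
])
print(space.permitted_use(time(9)))   # prints: loading only
print(space.permitted_use(time(20)))  # prints: residents
```

A fixed signpost corresponds to calling `set_rules` exactly once, at installation time; the regulatory novelty is that it can be called again whenever circumstances change.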

Second, benefits often come with hidden or underappreciated costs. I’m not someone who thinks that technology is perfectly value neutral, but I do think that most technologies can be put to good or bad use. This dual-functionality is important to bear in mind when trying to get the most out of a new technology. In particular, a technology that has positive consequences along one dimension of value may have negative consequences along others. If we become too fixated on one dimension of value we might lose sight of this. To give an example, many people are deeply concerned about the Chinese ‘social credit system’, which does a number of things, including surveilling and assigning people points based on their compliance with certain regulations, and then denying them access to transport if they accrue a certain number of negative points. It seems like a clearly dystopian use of algorithmic governance in transport. But suppose you didn’t want to deny people access to transport but, instead, wanted to use an algorithmic governance system to grant historically disadvantaged groups preferential access to certain kinds of transport (in the interests of ensuring their right to freedom of movement). That sounds like a wonderfully progressive use of technology, right? Maybe, but note that to do this effectively you would probably have to introduce something very similar to the Chinese social credit system, namely a preferential scoring system that monitored and collected data about people to ensure that it could discriminate in their favour. This could lead to more monitoring and surveillance of disadvantaged populations, which could be seen as a cost of introducing the seemingly beneficial system, and could introduce more risk, since the data collected could be used for less progressive and welcome purposes.

So even though we should avoid the compliance trap, and be open to the use of algorithmic governance to improve the way in which transport systems operate, we must also not lose sight of the hidden costs of these systems. We need to broaden our thinking and consider the full sweep of potential impacts, both positive and negative.
