
Friday, October 12, 2018

The Automation of Policing: Challenges and Opportunities


[These are some general reflections on the future of automation in policing. They are based on a workshop I gave at the ACJRD (Association for Criminal Justice Research and Development) annual conference in Dublin on the 4th October 2018. I took it that the purpose of the workshop was to generate discussion. As a result, the claims made below are not robustly defended. They are intended to be provocative and programmatic.]

This conference is all about data and how it can be used to improve the operation of the criminal justice system. This focus is understandable. We are, as many commentators have observed, living through a ‘data revolution’ in which we are generating and collecting more data than ever before. It makes sense that we would want to put all this data to good use in the prevention and prosecution of crime.

But the collection and generation of data is only part of the revolution that is currently taking place. The data revolution, when combined with advances in artificial intelligence and robotics, enables the automation of functions traditionally performed by human beings. Police forces can be expected to make use of the resulting automating technologies. From predictive policing, to automated speed cameras, to bomb disposal robots, we already see a shift away from human-centric policing systems to ones in which human police officers must partner with, or be replaced by, machines.

What does this mean for the future of policing? Will police officers undergo significant technological displacement, just as workers in other industries have? Will the advent of smart, adaptable security robots change how we think about the enforcement of the law? I want to propose some answers to these questions. I will divide my remarks into three main sections. I will start by setting out a framework for thinking about the automation of policing. I will then ask and propose answers to two questions: (i) what does the rise of automation mean for police officers (i.e. the humans currently at work in the policing system)? and (ii) what does it mean for the policing system as a whole?


1. A Framework for Thinking about the Automation of Policing
Every society has rules and standards. Some, but not all, of these rules are legal in nature. And some, but not all, of these legal rules concern what we call ‘crimes’. Crimes are the rules to which we attach the most social and public importance. Somebody who fails to comply with such rules will open themselves up to public prosecution and condemnation. Nevertheless, it is important to bear in mind that crimes are just one subset of the many rules and standards we try to uphold. What’s more, the boundaries of the ‘criminal’ are fluid — new crimes are identified and old crimes are declassified on a semi-regular basis. This fluid boundary is important when we consider the impact of automation on policing (more on this later).

When trying to get people to comply with social rules, there are two main strategies we can adopt. We can ‘detect and enforce’ or we can ‘predict and prevent’. If we detect and enforce, we will try to discover breaches of the rules after the fact and then impose some sanction or punishment on the person who breached them (the ‘offender’). This punishment can be levied for any number of reasons (retribution, compensation, rehabilitation etc.), but a major one — and one that is central to the stability of the system — is to deter others from doing the same thing. If we predict and prevent, we will try to anticipate potential breaches of the rules and then plan interventions that minimise or eliminate the likelihood of the breach taking place.

I’ve tried to illustrate all this in the diagram below.

[Diagram: society’s rules and standards, the subset classified as crimes, and the two compliance strategies, ‘detect and enforce’ and ‘predict and prevent’.]

This diagram is important to the present discussion because it helps to clarify what we mean when we talk about the automation of policing. Police officers are the people we task with ensuring compliance with our most cherished social rules and standards (crimes), and most police forces around the world follow both ‘predict and prevent’ and ‘detect and enforce’ strategies. So when we talk about the automation of policing we could be talking about the automation of one (or all) of these functions. In what follows I’ll be considering the impact of automation on all of them.

(Note: I appreciate that there is more to the criminal justice system than this framework lets on. There is also the post-enforcement management of offenders (through prison and probation) as well as other post-release and early-intervention systems, which may properly be seen as part of the policing function. There is much complexity here that gets obscured when we talk, quite generally, about the ‘automation of policing’. I can’t be sensitive to every dimension of complexity in this analysis. This is just a first step.)


2. The Automation of Police Officers
Let’s turn then to the first major question: what effect will the rise of automating technologies have on police officers? There is a lot of excitement nowadays about automating technologies. Police forces around the world are making use of data analytics systems (‘predictive policing’) to help them predict and prevent crime in the most efficient way possible. Automated surveillance and enforcement are also commonplace, through speed cameras and red-light cameras. There are also more ‘showy’ or obvious forms of automation on display, though they are slightly less common. There are no robocops just yet, but many police forces make use of bomb disposal robots, and some are experimenting with fully automated patrol bots. The most striking example of this is probably the Dubai police force, which has rolled out security bots and drone surveillance at tourist spots. There are also some private security bots, such as those made by Knightscope Robotics in California, which could be used by police forces.
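To make the ‘predictive policing’ idea a little more concrete, here is a minimal sketch of the simplest version of such analysis: ranking map grid cells by historical incident counts and directing patrols to the ‘hottest’ cells. The coordinates, cell size, and data are all invented for illustration; real systems are far more sophisticated than this.

```python
# A toy version of hotspot analysis: bucket past incidents into grid
# cells and rank the cells. All numbers here are hypothetical.
from collections import Counter

# Hypothetical historical incidents as (x, y) coordinates in metres.
incidents = [(120, 340), (125, 338), (900, 40), (122, 341), (880, 55)]

CELL_SIZE = 50  # assumed grid resolution in metres

def cell(point):
    """Map a coordinate onto a grid cell."""
    x, y = point
    return (x // CELL_SIZE, y // CELL_SIZE)

counts = Counter(cell(p) for p in incidents)

# Patrols would be directed to the historically 'hottest' cells.
for grid_cell, n in counts.most_common(3):
    print(f"cell {grid_cell}: {n} past incidents")
```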

If we assume that similar and more advanced automating technologies are going to come on-stream in the future, obvious questions arise for those who currently make their living within the police force. Do they need to start looking elsewhere for employment? Will they, ultimately, be replaced by robots and other automating technologies? Or will it still make sense for the children of 2050 to dream of being police officers when they grow up?

To answer these questions, I think it is important to make a distinction, one that is frequently made by economists looking at automation, between a ‘job’ and a ‘task’. There is the job of being a police officer. This is the socially defined role to which we assign the linguistic label ‘police officer’. It is, more or less, arbitrarily defined by grouping together different tasks (patrolling, investigating, form-filling, data analysis and so on) and assigning them to that role. It is these tasks that really matter: they are what police officers actually do and how they justify their roles. In modern policing, there is a large number of relevant tasks, some of which are further sub-divided and sub-grouped according to the division and rank of the individual police officer. Furthermore, some tasks that are clearly essential to modern policing (IT security, data analysis, community service) are sometimes assigned new role labels and not included within the traditional class of police officer. This illustrates the arbitrariness of the socially defined role.

This leads to an important conclusion: When we think about the automation of police officers, it is important not to focus on the job per se (since that is arbitrarily defined) but rather on the tasks that make up that job. It is these tasks, rather than the job, that are going to be subject to the forces of automation. Routine forms of data analysis, surveillance, form-filling and patrolling are easily automatable. If they are automated, this does not mean that the job of being a police officer will disappear. It is more likely that the job will be redefined to include or prioritise other tasks (e.g. in-person community engagement and creative problem-solving).
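To see how this task-based view works in practice, here is a minimal sketch that treats a job as nothing more than a labelled bundle of tasks. The task names and the automatability judgements are invented for illustration:

```python
# A job is an arbitrary grouping of tasks; automation applies to the
# tasks, not the job label. All entries below are illustrative guesses.
police_officer = {
    "routine data analysis": True,      # automatable
    "routine patrolling": True,         # automatable
    "form-filling": True,               # automatable
    "community engagement": False,      # not (yet) automatable
    "creative problem-solving": False,  # not (yet) automatable
}

automatable = [task for task, auto in police_officer.items() if auto]
remaining = [task for task, auto in police_officer.items() if not auto]

print(f"{len(automatable)}/{len(police_officer)} tasks automatable")
print("redefined job centres on:", ", ".join(remaining))
```

On this picture, automating three of the five tasks does not abolish the job; it shifts the job label onto the remaining bundle.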

This leads me to another important point. I’ve been speaking somewhat loosely about the possibility of automating the tasks that make up the role of being a police officer. There are, in fact, different types of task relationships between humans and automating technologies that are obscured when you talk about things in this way. There are three kinds of relationship that I think are worth distinguishing between (a rough sketch of the taxonomy follows the list):

Tool Relationships: These arise when humans simply use technology as a tool to perform their job-related tasks more efficiently. Tools do not replace humans; they simply enable those humans to perform their tasks more effectively. Some automating technologies, e.g. bomb disposal robots, are tools. They are teleoperated by human controllers. The humans are still essential to the performance of the task.
Partnership Relationships: These arise when the machines are capable of performing some elements of a task by themselves (autonomously) but they still partner with humans in their performance. I think many predictive policing systems are of this form. They can perform certain kinds of data analysis autonomously, but they are still heavily reliant on humans for inputting, interpreting and acting upon that data.
Usurpation Relationships: These arise when the machines are capable of performing the whole task by themselves and do not require any human assistance or input. I think some of the new security bots, as well as certain automated surveillance and enforcement technologies, are of this type. They can fully replace human task performers, even if those humans retain some supervisory role.
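As a rough way of fixing ideas, here is a minimal sketch of this taxonomy. The 0-to-1 ‘autonomy share’ measure (the fraction of a task the machine performs without human input) is an invention for illustration, as are the cut-off points:

```python
# Classify a human-machine task relationship by how much of the task
# the machine performs autonomously. The thresholds are stipulative.
def relationship(autonomy_share: float) -> str:
    if autonomy_share == 0.0:
        return "tool"          # teleoperated, e.g. a bomb disposal robot
    if autonomy_share < 1.0:
        return "partnership"   # e.g. predictive policing analytics
    return "usurpation"        # e.g. a fully autonomous patrol bot

print(relationship(0.0))   # tool
print(relationship(0.6))   # partnership
print(relationship(1.0))   # usurpation
```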


All three relationship types pose different risks for humans working within the policing system. Tool relationships don’t threaten mass technological unemployment, but they do threaten to change the skills and incentives of police officers. Instead of needing physical dexterity and worrying about their own safety, police officers just have to be good at controlling machines that are put in harm’s way in their stead. Something similar is true for partnership relationships, although those systems may threaten at least some displacement and unemployment. Usurpation relationships, of course, promise to be the most disruptive and the most likely to threaten unemployment for humans. Even if we need some human commanders and supervisors for the usurpers, we will probably need fewer of them than the workers they usurp.

So what’s the bottom line then? Will human police officers be displaced by automating technologies? I make three predictions, presented in order of likelihood:


  • (a) Future (human) police officers will require different skills and training as a result of automation: they will have to develop skills that are complementary to those of the machines, not in competition with them. This is a standard prediction in debates about automation and should not be controversial (indeed, the desire for machine-complementary skills in policing is already obvious).

  • (b) This could lead to significant polarisation, inequality, and redefinition of what it means to be a police officer. This, again, tracks with what happens in other industries. Some people are well-poised to benefit from the rise of the machines: they have skills that are in short supply and they can leverage the efficiencies of the technologies to their own advantage. They will be in high demand and will attract high wages. Others will be less well-poised to benefit from the rise of the machines and will be pushed into forms of work that are less skilled and less respected. This could lead to some redefinition of the role of being a ‘police officer’ and some dissatisfaction within the ranks.

  • (c) There might be significant technological unemployment of police officers. In other words, there may be many fewer humans working in the police forces of the future than at present. This is the prediction about which I am least confident. Police officers, unlike many other workers, are usually well-unionised and so are probably more able to resist technological unemployment than other workers. I also suspect there is some public desire for human faces in policing. Nevertheless, mass unemployment of police officers is still conceivable. It may also happen by stealth (e.g. existing human workers are allowed to retire and are not replaced, and roles are gradually redefined to phase out humans).



3. The Automation of Policing as a Whole
So much for individual police officers; what about the system of policing as a whole? Let’s go back to the framework I developed earlier on. As mentioned, automating technologies could be (and are being) used to perform both the ‘detect and enforce’ and the ‘predict and prevent’ functions. What I want to suggest now is that, although this is true, it’s possible (and perhaps even likely) that automating technologies will encourage a shift away from ‘detect and enforce’ modes of policing to ‘predict and prevent’ modes. Indeed, automating technologies may encourage a ‘prevent-only’ model of policing. Furthermore, even when automated systems are used to perform the detect and enforce functions, they are likely to do so in a different way.

Neither of these suggestions is unique to me. Regulatory theorists have long observed that technology often encourages a shift from ‘detect and enforce’ to ‘predict and prevent’ methods of ensuring normative compliance. Roger Brownsword, for example, talks about the shift from rule-enforcement to ‘technological management’ in regulatory systems. He gives the example of a golf course that is having trouble with its members driving their golf carts over a flowerbed. To stop them doing this, the management committee first create a rule that assigns penalties to those who drive their golf carts over the flowerbed. They then put up a surveillance camera to help them detect breaches of this rule. This helps a little bit, but then a new technology comes on the scene that enables, through GPS-tracking, the remote surveillance and disabling of any golf carts that come close to the flowerbed. Since that system is so much more effective — it renders non-compliance with the rule impossible — the committee adopt that instead. They have moved from traditional rule-enforcement to technological management.
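Brownsword’s golf-cart example is worth making concrete, because it shows how simple the underlying mechanism can be. Here is a minimal sketch of the geofence logic, assuming invented coordinates, an invented exclusion radius, and a hypothetical Cart class:

```python
import math

# Hypothetical geofence around the flowerbed.
FLOWERBED = (50.0, 80.0)   # assumed centre of the protected area
EXCLUSION_RADIUS = 10.0    # metres

class Cart:
    def __init__(self, x: float, y: float):
        self.x, self.y = x, y
        self.enabled = True

def manage(cart: Cart) -> None:
    """Disable any cart that enters the exclusion zone.

    Note that no rule is enforced after a breach; the breach is made
    impossible. That is the shift to technological management.
    """
    distance = math.dist((cart.x, cart.y), FLOWERBED)
    cart.enabled = distance > EXCLUSION_RADIUS

cart = Cart(55.0, 83.0)   # a cart close to the flowerbed
manage(cart)
print(cart.enabled)        # False: the cart is remotely disabled
```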

Elizabeth Joh argues that something similar is likely to happen in the ‘smart cities’ of the future. Instead of using technology simply to detect and enforce breaches of the law, it will be used to prevent non-compliance with the rules. The architecture of the city will ‘hardcode’ in the preferred normative values and will work to disable or deactivate anyone who tries to reject those values. Furthermore, constant surveillance and monitoring of the population will enable future police forces to locate those who pose a threat to the system and quickly defuse that threat. This may lead to an expansion of the set of rules that policing systems try to uphold, to include relatively minor infractions that disturb the public peace. Joh thinks that this might lead to a ‘disneyfication’ of future policing. She bases this judgment on a famous study by Shearing and Stenning on security practices in Disney theme parks. She thinks this provides a model for policing in the smart city:

“The company anticipates and prevents possibilities for disorder through constant instructions to visitors, physical barriers that both guide and limit visitors’ movements, and through “omnipresent” employees who detect and correct the smallest errors (Shearing & Stenning 1985: 301). None of the costumed characters nor the many signs, barriers, lanes, and gardens feel coercive to visitors. Yet through constant monitoring, prevention, and correction embedded policing is part of the experience…”

(Joh 2018)

It’s exactly this kind of policing that is enabled by automating technologies.

This is not to say that detection and enforcement will play no part in the future of policing. There will always be some role for that, but it might take a very different form. Instead of the strong arm of the law we might have the soft hand of the administrator. Instead of being sent to jail and physically coerced when we fail to comply with the law, we might be nudged or administratively sanctioned. The Chinese Social Credit system, much reported on and much maligned in the West, provides one possible glimpse of this future. Through mass surveillance and monitoring, compliance with rules can be easily rewarded or punished through a social scoring system. Your score determines your ease of access to social services and opportunities. We already have isolated, technologically enabled versions of this in Western democracies — e.g. penalty points systems for driving licences and credit rating scores in finance — the Chinese system simply takes these to their logical (and technological) extreme.
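The mechanics of such scoring systems are, at base, very simple. Here is a minimal sketch in the spirit of penalty points and credit ratings; the events, score changes, and access threshold are all invented for illustration:

```python
# Observed events adjust a person's score; the score gates access.
SCORE_CHANGES = {
    "speeding": -10,
    "missed payment": -25,
    "on-time payment": +5,
}
ACCESS_THRESHOLD = 50  # assumed minimum score for some service

def update(score: int, event: str) -> int:
    """Apply an observed event to a person's score."""
    return score + SCORE_CHANGES.get(event, 0)

score = 70
for event in ["speeding", "missed payment", "speeding"]:
    score = update(score, event)

print(score)                      # 25
print(score >= ACCESS_THRESHOLD)  # False: access to the service denied
```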

There is one other possible future for policing that is worth mentioning. It could be that the rise of automating technologies will encourage a shift away from public models of policing to privatised models. This is already happening to some extent. Many automated systems used by modern police forces are owned by private companies and hence lead to a public-private model of policing (with many attendant complexities when it comes to the administration of the law). But this trend may continue to push towards wholly private forms of automated policing. I cannot currently afford my own team of private security guards, but I might be able to afford my own team of private security bots (or at least rent them from an Uber-like company).

The end result may be that keeping the peace is no longer a public responsibility discharged by the police, but a private responsibility discharged by each of us.


4. Conclusion: Ethico-Legal Concerns
Let me conclude by briefly commenting on the ethical and legal concerns that could result from the automation of policing. There are five worth mentioning here, each of which arises in other cases of automation too:

Freedom to Fail: The shift from rule-enforcement to technological management seems to undermine human autonomy and moral agency. Instead of being given the opportunity to exercise their free will and agency, humans are constrained and programmed into compliance. They lose their freedom to fail. Should we be concerned?
Responsibility Gaps: As we insert autonomous machines into the policing system, questions arise as to who is responsible for the misdeeds of these machines. Responsibility gaps open up that must be filled.
Transparency and Accountability: Related to the problem of responsibility gaps, automating technologies are often opaque in their operations. How can we ensure sufficient transparency? Who will police the automated police?
Biased Data → Biased Outcomes: Most modern-day automating technologies are trained on large datasets. If the information within these datasets is biased or prejudiced, this often leads to the automating technologies being biased or prejudiced too. Concerns about this have already arisen in relation to predictive policing and algorithmic sentencing (see the sketch after this list). How can we stop this from happening?
The Value of Inefficiency: One of the alleged virtues of automating technologies is their efficient and unfailing enforcement of, and compliance with, rules. But is this really a good thing? As Woodrow Hartzog and his colleagues have pointed out, it could be that we don’t want our social rules to be efficiently enforced. Imagine if you were punished every time you broke the speed limit. Would that be a good thing? Given how frequently we all tend to break the speed limit, and how desirable this can be on occasion, it may be that efficient enforcement is overkill. In other words, it could be that there is some value to inefficiency that is lost when we shift to automating technologies. How can we preserve valuable forms of inefficiency?
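To illustrate the biased-data worry mentioned above, here is a minimal simulation of the feedback loop often alleged to affect predictive policing. All the numbers are invented, and the squared weighting is simply a stand-in for any allocation rule under which patrols chase records disproportionately:

```python
# Two districts with identical true crime, but a biased record.
true_crime = {"district A": 100, "district B": 100}
recorded = {"district A": 60, "district B": 40}

for _ in range(5):
    # Patrols chase records superlinearly (squared weighting is an
    # invented stand-in for any rich-get-richer allocation rule).
    weights = {d: recorded[d] ** 2 for d in recorded}
    total_w = sum(weights.values())
    for d in recorded:
        patrol_share = weights[d] / total_w
        # More patrols in a district means more of its (equal) true
        # crime ends up recorded there.
        recorded[d] += patrol_share * true_crime[d]

share_a = recorded["district A"] / sum(recorded.values())
print(f"district A share of records: {share_a:.0%}")  # rises above 60%
```

Even though true crime is identical in the two districts, the initial 60/40 disparity in the records grows with every round of patrol allocation.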


I think each of these issues is worthy of more detailed consideration. I just want to close by noting how similar they are to the issues raised in other debates about automation. This is one of the main insights I derived from preparing this talk. Although we certainly should talk about the consequences of automation in specific domains (finance, driving, policing, military weapons, medicine, law etc.), it is also worth developing more general theoretical models that can both explain and prescribe answers to the questions we have about automation.



