
Wednesday, October 31, 2018

Episode #48 - Gunkel on Robot Rights





In this episode I talk to David Gunkel. David is a repeat guest, having first appeared on the show in Episode 10. David is a Professor of Communication Studies at Northern Illinois University. He is a leading scholar in the philosophy of technology, having written extensively about cyborgification, robot rights and responsibilities, remix cultures, new political structures in the information age and much, much more. He is the author of several books, including Hacking Cyberspace, The Machine Question, Of Remixology, Gaming the System and, most recently, Robot Rights. We have a long debate/conversation about whether or not robots should/could have rights.

You can download the episode here or listen below. You can also subscribe to the show on iTunes or Stitcher (the RSS feed is here).


Show Notes

  • 0:00 - Introduction
  • 1:52 - Isn't the idea of robot rights ridiculous?
  • 3:37 - What is a robot anyway? Is the concept too nebulous/diverse?
  • 7:43 - Has science fiction undermined our ability to think about robots clearly?
  • 11:01 - What would it mean to grant a robot rights? (A precis of Hohfeld's theory of rights)
  • 18:32 - The four positions/modalities one could take on the idea of robot rights
  • 21:32 - The First Modality: Robots Can't Have Rights therefore Shouldn't
  • 23:37 - The EPSRC guidelines on robotics as an example of this modality
  • 26:04 - Criticisms of the EPSRC approach
  • 28:27 - Other problems with the first modality
  • 31:32 - Europe vs Japan: why the Japanese might be more open to robot 'others'
  • 34:00 - The Second Modality: Robots Can Have Rights therefore Should (some day)
  • 39:53 - A debate between myself and David about the second modality (why I'm in favour of it and he's against it)
  • 47:17 - The Third Modality: Robots Can Have Rights but Shouldn't (Bryson's view)
  • 53:48 - Can we dehumanise/depersonalise robots?
  • 58:10 - The Robot-Slave Metaphor and its Discontents
  • 1:04:30 - The Fourth Modality: Robots Cannot Have Rights but Should (Darling's view)
  • 1:07:53 - Criticisms of the fourth modality
  • 1:12:05 - The 'Thinking Otherwise' Approach (David's preferred approach)
  • 1:16:23 - When can robots take on a face?
  • 1:19:44 - Is there any possibility of reconciling my view with David's?
  • 1:24:42 - So did David waste his time writing this book?

 

Relevant Links





Tuesday, October 30, 2018

What do I believe? A thematic summary of my academic publications




I have published quite a number of academic papers in the past 7-8 years. It has gotten to the point now that I find myself trying to make sense of them all. If you were to read them, what would you learn about me and my beliefs? Are there any coherent themes and patterns within these papers? I think there are and this is my attempt to hunt them out. I'm sure this will seem self-indulgent to some of you. I can only apologise. It is a deliberately self-indulgent exercise, but hopefully the thematic organisation is of interest to people other than myself, and some of the arguments may be intriguing or pique your curiosity. I'm going to keep this overview updated.

Reading note: There is some overlap in content between the sections below since some papers belong to more than one theme. Also, clicking on the titles of the papers will take you directly to an open access version of them.


Theme 1: Human Enhancement, Agency and Meaning

What impact does human enhancement technology have on our agency and our capacity to live meaningful lives? I have written several papers that deal with this theme:


    • Argument: Far from undermining our responsibility, advances in the neuroscience of behaviour may actually increase our responsibility due to enhanced control [not sure if I agree with this anymore: I have become something of a responsibility sceptic since writing this].

    • Argument: Enhancing people's cognitive faculties could increase the democratic legitimacy of the legal system.

    • Argument: Enhancement technologies may turn us into 'hyperagents' (i.e. agents that are capable of minutely controlling our beliefs, desires, attitudes and capacities) but this will not undermine the meaning of life.

    • Argument: Enhancement technologies need not undermine social solidarity and need not result in the unfair distribution of responsibility burdens.

    • Argument: Cognitive enhancement drugs may undermine educational assessment but not in the way that is typically thought, and the best way to regulate them may be through the use of commitment contracts.

    • Argument: We should prefer internal methods for enhancing moral conformity (i.e. drugs and brain implants) over external methods (nudges/AI assistance/automation).

    • Argument: There are strong conservative reasons (associated with agency and individual achievement) for favouring the use of enhancement technologies.

    • Argument: Moral enhancement technologies need not undermine our freedom + freedom of choice is not intrinsically valuable; it is, rather, an axiological catalyst.


Theme 2: The Ethics and Law of Sex Tech

How does technology enable new forms of sexual intimacy and connection? What are the ethical and legal consequences of these new technologies? Answering these questions has become a major theme of my work.

    • Argument: Contrary to what many people claim, sex work may remain relatively resilient to technological displacement. This is because technological displacement will (in the absence of some radical reform of the welfare system) drive potential workers to industries in which humans have some competitive advantage over machines. Sex work may be one of those industries.

    • Argument: There may be good reasons to criminalise robotic rape and robotic child sexual abuse (or, alternatively, reasons to reject widely-accepted rationales for criminalisation).

    • Argument: Consent apps are a bad idea because they produce distorted and decontextualised signals of consent, and may exacerbate other problems associated with sexual autonomy.

    • Argument: Quantified self technologies could improve the quality of our intimate relationships, but there are some legitimate concerns about the use of these technologies (contains a systematic evaluation of seven objections to the use of these technologies).

    • Argument: Response to the critics of the previous article.


    • Argument: No single argument defended in this paper. Instead it presents a framework for thinking about virtual sexual assault and examines the case for criminalising it. Focuses in particular on the distinction between virtual sexual assault and real world sexual assault, responsibility for virtual acts, and the problems with consent in virtual worlds.

    • Argument: Makes the case for taking sex robots seriously from an ethical and philosophical perspective.

  • Should we Campaign Against Sex Robots? (with Brian Earp and Anders Sandberg). In Danaher, J and McArthur, N. (eds) Robot Sex: Social and Ethical Implications. Cambridge, MA: MIT Press, 2017.
    • Argument: A systematic evaluation and critique of the idea that we should campaign against the development of sex robots. 

    • Argument: There may be symbolic harms associated with the creation of sex robots but these are contingent and reformable and subordinate to the consequential harms; the consequential harms are unproven and difficult to prove; and so the best way to approach the development of sex robots is to adopt an experimental model.

    • Argument: The best response to the creation of objectifying and misogynistic sex robots is not to ban them or criminalise them but to build 'better' ones. In this respect, those who are concerned about sex robots can learn from the history of the feminist porn wars.

    • Argument: Humans can have loving intimate relationships with robots; this need not erode or distort our understanding of intimacy.


Theme 3: The Threat of Algocracy

What are the advantages and disadvantages of algorithmic governance in politics, law and everyday life? How does algorithmic governance affect individual choice and freedom? How does it affect the legitimacy of political decision-making? This has been another major theme of my work over the past few years (with several new papers on the way in the next few months).

    • Argument: Algorithmic governance poses a significant threat to the legitimacy of public decision-making and this threat is not easily resisted or accommodated.

    • Argument: Because algorithmic decision-support tools pose a threat to political legitimacy, we should favour the use of internal methods of moral enhancement.

    • Argument: The rise of smart machines to govern and manage our lives threatens to accentuate our moral patiency over our moral agency. This could be problematic because moral agency is central to modern civilisation.

    • Argument: No specific argument. The paper uses a collective intelligence methodology to generate a research agenda for the topic of algorithmic governance. This agenda is a detailed listing of research question and the methods by which to answer them.

    • Argument: An evaluation of some of the ways in which algorithmic governance technologies could be productively used by two or more people in intimate relationships.

    • Argument: Contrary to some of the popular criticisms, the use of AI assistants in everyday life does not lead to problematic forms of cognitive degeneration, significantly undermine individual autonomy, nor erode important interpersonal virtues. Nevertheless there are risks and we should develop a set of ethical principles for people who make use of these systems.


Theme 4: Automation, Work and the Meaning of Life

How will the rise of automating technologies affect the future of employment? What will humans do when (or if) they are no longer needed for economic production? I have written quite a number of papers on this theme over the past five years, as well as a long series of blog posts. It is also going to be the subject of a new book that I'm publishing in 2019, provisionally titled Automation and Utopia, with Harvard University Press.

    • Argument: Sex work may remain relatively resilient to technological displacement. This is because technological displacement will (in the absence of some radical reform of the welfare system) drive potential workers to industries in which humans have some competitive advantage over machines. Sex work may be one of those industries.

    • Argument: Technological unemployment does pose a major threat to the meaning of life, but this threat can be mitigated by pursuing an 'integrative' relationship with technology.

    • Argument: Partly an extended review of David Frayne's book The Refusal of Work; partly a defence of the claim that we should be more ashamed of the work that we do.

    • Argument: People who think that there is a major economic 'longevity dividend' to be earned through the pursuit of life extension fail to appropriately consider the impact of technological unemployment. That doesn't mean that life extension is not valuable; it just means the arguments in favour of it need to focus on the possibility of a 'post-work' future.

    • Argument: Does exactly what the title suggests. Argues that paid employment is structurally bad and getting worse. Consequently we should prefer not to work for a living.


Theme 5: Brain-Based Lie Detection and Scientific Evidence

Can brain-based lie detection tests (or concealed information tests) be forensically useful? How should the legal system approach scientific evidence? This was a major theme of my early research and I still occasionally publish on the topic.

    • Argument: Why lawyers need to be better informed about the nature and limitations of scientific evidence, using brain-based lie detection evidence as an illustration.

    • Argument: The use of blinding protocols could improve the quality of scientific evidence in law and overcome the problem of bias in expert testimony.

    • Argument: (a) Reliability tests for scientific evidence need to be more sensitive to the different kinds of error rate associated with that evidence; and (b) there is potential for brain-based lie detection to be used in a legal setting as long as we move away from classic 'control question' tests to 'concealed information' tests.

    • Argument: The P300 concealed information test could be used to address the problem of innocent people pleading guilty to offences they did not commit.



    • Argument: A defence of a 'legitimacy enhancing test' for the responsible use of brain-based lie detection tests in the law.


Theme 6: God, Morality and the Problem of Evil

The philosophy of religion has been a major focus of this blog, and I have spun this interest into a handful of academic papers too. They all deal with the relationship between god and morality or the problem of evil. I keep an interest in this topic and may write more such papers in the future.


    • Argument: Skeptical theism has profound and problematic epistemic consequences. Attempts to resolve or ameliorate those consequences by drawing a distinction between our knowledge of what God permits and our knowledge of the overall value of an event/state of affairs don't work.

  • Necessary Moral Truths and Theistic Metaethics. (2013) SOPHIA, DOI 10.1007/s11841-013-0390-0.
    • Argument: Some theists argue that you need God to explain/ground necessary moral truths. I argue that necessary moral truths need no deeper explanation/grounding.

    • Argument: There is no obligation to worship God. Gwiazda's attempt to defend this by arguing that there is a distinction between threshold and non-threshold obligations doesn't work in the case of God.

    • Argument: An attempt to draw an analogy between the arguments of sceptical theists and the arguments of AI doomsayers like Nick Bostrom. Not really a philosophy of religion paper; more a paper about dubious epistemic strategies in debates about hypothetical beings. 

    • Argument: In order to work, divine command theories must incorporate an epistemic condition (viz. moral obligations do not exist unless they are successfully communicated to their subjects). This is problematic because certain people lack epistemic access to the content of moral obligations. While this argument has been criticised, I argue that it is quite effective.


Theme 7: Moral standards and legal interpretation

Is the interpretation of legal texts a factual/descriptive inquiry, or is it a moral/normative inquiry? I have written a couple of papers arguing that it is more the latter. Both of these papers focus on the 'originalist' theory of constitutional interpretation. 

    • Argument: If we analogise laws to speech acts, as many now do, then we must pay attention to the 'success conditions' associated with those speech acts. This means we necessarily engage in a normative/moral inquiry, not a factual one.

    • Argument: Legal utterances are always enriched by the pragmatic context in which they are uttered. Constitutional originalists try to rely on a common knowledge standard of enrichment; this standard fails, which once again opens the door to a normative/moral approach to legal interpretation.


Theme 8: Random

Papers that don't seem to fit in any particular thematic bucket.

    • Argument: A critical analysis of Matthew Kramer's defence of capital punishment. I argue that Kramer's defence fails the moral test that he himself sets for it.

    • Argument: The widespread deployment of autonomous robots will give rise to a 'retribution gap'. This gap is much harder to plug than the more widely discussed responsibility/liability gaps.

    • Argument: Using Samuel Scheffler's 'collective afterlife' thesis, I argue that we should commit to creating artificial offspring. Doing so might increase the meaning and purpose of our present lives.

    • Argument: Human identity is more of a social construction than a natural fact. This has a significant effect on the plausibility of certain techniques for 'mind-uploading'.


    • Argument: Our conscience is not a product of free will or autonomous choice. This has both analytical and normative implications for how we treat conscientious objectors.







Saturday, October 20, 2018

Episode #47 - Eubanks on Automating Inequality



 In this episode I talk to Virginia Eubanks. Virginia is an Associate Professor of Political Science at the University at Albany, SUNY. She is the author of several books, including Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor and Digital Dead End: Fighting for Social Justice in the Information Age. Her writing about technology and social justice has appeared in The American Prospect, The Nation, Harper’s and Wired. She has worked for two decades in community technology and economic justice movements. We talk about the history of poverty management in the US and how it is now being infiltrated and affected by tools for algorithmic governance.

 You can download the episode here or listen below. You can also subscribe to the show on iTunes or Stitcher (the RSS feed is here).



Show Notes

  • 0:00 - Introduction
  • 1:39 - The future is unevenly distributed but not in the way you might think
  • 7:05 - Virginia's personal encounter with the tools for automating inequality
  • 12:33 - Automated helplessness?
  • 14:11 - The history of poverty management: denial and moralisation
  • 22:40 - Technology doesn't disrupt our ideology of poverty; it amplifies it
  • 24:16 - The problem of poverty myths: it's not just something that happens to other people
  • 28:23 - The Indiana Case Study: Automating the system for claiming benefits
  • 33:15 - The problem of automated defaults in the Indiana Case
  • 37:32 - What happened in the end?
  • 41:38 - The L.A. Case Study: A "match.com" for the homeless
  • 45:40 - The Allegheny County Case Study: Managing At-Risk Children
  • 52:46 - Doing the right things but still getting it wrong?
  • 58:44 - The need to design an automated system that addresses institutional bias
  • 1:07:45 - The problem of technological solutions in search of a problem
  • 1:10:46 - The key features of the digital poorhouse
 

Relevant Links


 

Sunday, October 14, 2018

Robots and the Expanding Moral Circle




(I appear in this video from 15:49-25:51) 


[The following is, roughly, the text of a speech I delivered to the Trinity College, Dublin Historical Society on the 10th October 2018 (which you can watch in the video above from 15:49 - 25:51). It was for a debate on the topic of AI personhood. The proposition that was up for debate was “That this House would recognize AI as legal persons”. I was supposed to speak in favour of the proposition but, as you’ll see below, I don’t quite do that, though I do argue for something not too far from this idea. I find that formal debates present an interesting challenge. They are hardly the best means of getting at the truth, but there is, I think, some value to distilling your arguments on a particular topic down into a short speech. It means you have to focus on what is most relevant to your case and skip some of the nuance and waffle that is common in academic talks. This is my way of saying that what you are about to read is hardly the most careful and sophisticated defence of my views on AI moral personhood, but it has the virtue of brevity.]


(1) Not going to talk about legal personhood
In every debate in which I have participated, I have disagreed with the proposition. Tonight is no different. Unfortunately, I am not going to argue that we should recognize AI as legal persons. I don’t think that is an interesting question for at least three reasons. First, legal personhood is a social construct that can be manipulated and reshaped by us if we choose: it is not something with independent moral content or weight. Second, and this may shock you, it may already be the case that AIs can be recognized as legal persons. Shawn Bayern (a law prof at Northwestern University in the US) has argued that there are loopholes in US corporate law that allow for an AI to legally control a limited liability company. If he is right, then since LLCs are legal persons, AIs can also be legal persons, at least in the US, which could transfer to the EU due to mutual recognition provisions. Third, whether or not this is a good idea – the recognition of AIs as legal persons – depends on something else. Specifically, I think it depends on whether AIs/robots (I will talk about both) have a moral status that deserves legal recognition and protection. That’s what I want to consider.


(2) The Ethical Behaviourist Approach
Now, I am not going to argue that AIs/robots currently have moral status. I am just going to argue that they very plausibly could in the not too distant future. The reason for this is that I am an ethical behaviourist. I believe that all claims about the moral status of a particular entity (e.g. human being, animal) depend on inferences we make from external behaviours and representations made by that entity. In debates about moral status people will talk about things like sentience, the capacity to feel pain, the capacity to have interests, to be a continuing subject of conscious experience, and so on, as if these properties are what matter to judgments of moral status. I don’t disagree with any of that: I think they are what matters. I just think all judgments about the existence of those properties depend on inferences we make from behavioural states.

This posture of ethical behaviourism leads me to endorse a ‘performative equivalency’ standard when it comes to making judgments about moral status. According to this standard, if a robot/AI is performatively equivalent to another entity to whom we afford moral status, then the robot/AI must be afforded the same moral status. This can then translate into legal recognition and protection. I think it is possible (likely?) that robots/AI will meet this PE-standard in the near future, and so they should be granted moral status.


(3) An initial defence of Ethical Behaviourism
Why should we embrace this performative equivalency standard? I think this is ultimately a view that is best defended in the negative, but there are three initial reasons I would offer:

The first is the Kantian Reason: we cannot know the thing-in-itself; we can only ever know it through its external representations. We do not have direct epistemic access to someone’s conscious experiences of this world (which are central to judgments of moral status); we only have access to their behaviours and performances. It follows from this that the PE standard is the only one we can apply in moral affairs.

The second reason is common sense: we all know this to be true in our day-to-day lives. It’s obvious that we do not know what is going on in someone else’s head and so must make judgments about how they experience the world through their external representations to us. In other words, we are all, already, following the PE standard in our day-to-day moral decision-making.

The third reason is that this chimes pretty well with scientific practice: psychologists who make inferences as to what is going on in a person’s mind do so through behavioural measures; and neuroscientists validate correlations between brain states and mental states through behavioural measures. I’m just advocating the same approach when it comes to ascriptions of moral status.


(4) Objections and Replies
So that’s the initial defence of my position. If you are like the other people with whom I have shared this view you will think it is completely ridiculous. So let me soften the blow by responding to some common objections:


Objection 1: Robots/AIs aren’t made out of the right biological stuff (or don’t have the right biological form) and this is what matters to ascriptions of moral status, not performative equivalency (I sometimes call this the ‘ontology matters’ or ‘matter matters’ objection).

Response: Now, I happen to think this view is ridiculous as it amounts to an irrational form of biological mysterianism, but I would actually be willing to concede something to it just for the sake of argument. I would be willing to concede that being made of the right biological stuff is a sufficient condition for moral status, but that it is not a necessary one. In other words, if you have a human being or animal that doesn’t have a sophisticated behavioural repertoire you might be within your rights to grant it moral status on the grounds of biological constitution alone; it just doesn’t follow from this that it would be right to deny moral status to a robot that does have a sophisticated behavioural repertoire because it isn’t made of the right stuff. They are both sufficient conditions for moral status.

Objection 2: Robots/AIs have different origins to human beings/animals. They have been programmed and designed into existence whereas we have evolved and developed. This undermines any inferences we might make from behaviour to moral status. To slightly paraphrase the philosopher Michael Hauskeller: “[A]s long as we have an alternative explanation for why [the robot/AI] behaves that way (namely, that it has been designed and programmed to do so), we have no good reason to believe that its actions are expressive of anything [morally significant] at all” (Hauskeller 2017).

Response: I find it hard to accept this view because I find it hard to accept that different origins matter more than behaviour in moral judgments of others. Indeed, I think this is a view with a deeply problematic history: it’s effectively the basis for all forms of racism and minority exclusion, the idea that you are judged by racial and ethnic origin, not actual behaviour. Most importantly, however, it’s not clear that there are strong ‘in principle’ differences in origin between humans and AIs of the sort that Hauskeller and others suppose. Evolution is a kind of behavioural programming (and is often explained in these terms by scientists). So you could argue that humans are programmed just as much as AIs are. Also, with the advent of genetic engineering and other forms of human enhancement, the lines between humans and machines in terms of origin are likely to blur even more in the future. So this objection will become less sustainable.

Objection 3: Robots/AI will be owned and controlled by humans; this means they shouldn’t be granted moral status.

Response: I hesitate to include this objection but it is something that Joanna Bryson – one of the main critics of AI moral status – made much of in her earlier work (she may have distanced herself from it since). My response is simple: the direction of moral justification is all wrong here. The mere fact that we might own and control robots/AI does not mean we should deny them moral status. We used to allow humans to own and control other humans. That doesn’t mean it was the right thing to do. Ownership and control are social facts that should be grounded in sound moral judgments, not the other way around.

Objection 4: If performative equivalency is the standard of moral status, then manufacturers of robots/AI are going to engage in various forms of deception or manipulation to get us to think they deserve moral status when they really don’t.

Response: I’m not convinced that the commercial motivations for doing this are that strong, but set that to the side. This is, probably, the main concern that people have about my view. I have three responses to it: (i) I don’t think people really know what they mean by ‘deception/manipulation’ in this context – if a robot consistently (and the emphasis is on consistently) behaves in a way that is equivalent to other entities to whom we afford moral status then there is no deception/manipulation (those concepts have no moral purchase unless cashed out in terms of behavioural inconsistencies); (ii) if you are worried about this, then a lot of the worry can be avoided by setting the ‘performative equivalency’ standard relatively high (i.e. err on the side of false negatives rather than false positives when it comes to expanding the moral circle, though this strategy does have its own risks); and (iii) deception and manipulation are rampant in human-to-human relationships but this doesn’t mean that we deny humans moral status – why should we take a different approach with robots?




(5) Conclusion
Let me wrap up by making two final points. First, I want to emphasise that I am not making any claims about what the specific performative equivalency test for robots/AI should be – that’s something that needs to be determined. All I am saying is that if there is performative equivalency, then there should be a recognition of moral status. Second, my position does have serious implications for the designers of robots/AI. It means that their decisions to create such entities have a moral dimension that they may not fully appreciate and may like to disown. This might be one reason why there is such resistance to the idea. But we shouldn’t allow them to shirk responsibility if, as I believe, performative equivalency is the correct moral standard to apply in these cases. Okay, that’s it from me. Thank you for your attention.








Friday, October 12, 2018

The Automation of Policing: Challenges and Opportunities


[These are some general reflections on the future of automation in policing. They are based on a workshop I gave to the ACJRD (Association for Criminal Justice Research and Development) annual conference in Dublin on the 4th October 2018. I took it that the purpose of the workshop was to generate discussion. As a result, the claims made below are not robustly defended. They are intended to be provocative and programmatic.]

This conference is all about data and how it can be used to improve the operation of the criminal justice system. This focus is understandable. We are, as many commentators have observed, living through a ‘data revolution’ in which we are generating and collecting more data than ever before. It makes sense that we would want to put all this data to good use in the prevention and prosecution of crime.

But the collection and generation of data is only part of the revolution that is currently taking place. The data revolution, when combined with advances in artificial intelligence and robotics, enables the automation of functions traditionally performed by human beings. Police forces can be expected to make use of the resulting automating technologies. From predictive policing, to automated speed cameras, to bomb disposal robots, we already see a shift away from human-centric policing systems to ones in which human police officers must partner with, or be replaced by, machines.

What does this mean for the future of policing? Will police officers undergo significant technological displacement, just as workers in other industries have? Will the advent of smart, adaptable security robots change how we think about the enforcement of the law? I want to propose some answers to these questions. I will divide my remarks into three main sections. I will start by setting out a framework for thinking about the automation of policing. I will then ask and propose answers to two questions: (i) what does the rise of automation mean for police officers (i.e. the humans currently at work in the policing system)? and (ii) what does it mean for the policing system as a whole?


1. A Framework for Thinking about the Automation of Policing
Every society has rules and standards. Some, but not all, of these rules are legal in nature. And some, but not all, of these legal rules concern what we call ‘crimes’. Crimes are the rules to which we attach the most social and public importance. Somebody who fails to comply with such rules will open themselves up to public prosecution and condemnation. Nevertheless, it is important to bear in mind that crimes are just one subset of the many rules and standards we try to uphold. What’s more, the boundaries of the ‘criminal’ are fluid — new crimes are identified and old crimes are declassified on a semi-regular basis. This fluidic boundary is important when we consider the impact of automation on policing (more on this later).

When trying to get people to comply with social rules, there are two main strategies we can adopt. We can ‘detect and enforce’ or we can ‘predict and prevent’. If we detect and enforce, we will try to discover breaches of the rules after the fact and then impose some sanction or punishment on the person who breached them (the ‘offender’). This punishment can be levied for any number of reasons (retribution, compensation, rehabilitation etc), but a major one — and one that is central to the stability of the system — is to deter others from doing the same thing. If we predict and prevent, we will try to anticipate potential breaches of the rules and then plan interventions that minimise or eliminate the likelihood of the breach taking place.

I’ve tried to illustrate all this in the diagram below.



This diagram is important to the present discussion because it helps to clarify what we mean when we talk about the automation of policing. Police officers are the people we task with ensuring compliance with our most cherished social rules and standards (crimes) and most police forces around the world follow both ‘predict and prevent’ as well as ‘detect and enforce’ strategies. So when we talk about the automation of policing we could be talking about the automation of one (or all) of these functions. In what follows I’ll be considering the impact of automation on all of them.

(Note: I appreciate that there is more to the criminal justice system than this framework lets on. There is also the post-enforcement management of offenders (through prison and probation) as well as other post-release and early-intervention systems, which may properly be seen as part of the policing function. There is much complexity here that gets obscured when we talk, quite generally, about the ‘automation of policing’. I can’t be sensitive to every dimension of complexity in this analysis. This is just a first step.)


2. The Automation of Police Officers
Let’s turn then to the first major question: what effect will the rise of automating technologies have on police officers? There is a lot of excitement nowadays about automating technologies. Police forces around the world are making use of data analytics systems (‘predictive policing’) to help them predict and prevent crime in the most efficient way possible. Various forms of automated surveillance and enforcement are also commonplace through the use of speed cameras and red light cameras. There are also more ‘showy’ or obvious forms of automation on display, though they are slightly less common. There are no robocops just yet, but many police forces make use of bomb disposal robots, and some are experimenting with fully-automated patrol bots. The most striking example of this is probably the Dubai police force, which has rolled out security bots and drone surveillance at tourist spots. There are also some private security bots, such as those made by Knightscope Robotics in California, which could be used by police forces.

If we assume that similar and more advanced automating technologies are going to come on-stream in the future, obvious questions arise for those who currently make their living within the police force. Do they need to start looking elsewhere for employment? Will they, ultimately, be replaced by robots and other automating technologies? Or will it still make sense for the children of 2050 to dream of being police officers when they grow up?

To answer that question I think it is important to make a distinction, one that is frequently made by economists looking at automation, between a ‘job’ and a ‘task’. There is the job of being a police officer. This is the socially-defined role to which we assign the linguistic label ‘police officer’. This is, more or less, arbitrarily defined by grouping together different tasks (patrolling, investigating, form-filling, data analysis and so on) and assigning them to that role. It is these tasks that really matter. They are what police officers actually do and how they justify their roles. In modern policing, there is a large number of relevant tasks, some of which are further sub-divided and sub-grouped according to the division and rank of the individual police officer. Furthermore, some tasks that are clearly essential to modern policing (IT security, data analysis, community service) are sometimes assigned new role labels and not included within the traditional class of police officer. This illustrates the arbitrariness of the socially defined role.

This leads to an important conclusion: When we think about the automation of police officers, it is important not to focus on the job per se (since that is arbitrarily defined) but rather on the tasks that make up that job. It is these tasks, rather than the job, that are going to be subject to the forces of automation. Routine forms of data analysis, surveillance, form-filling and patrolling are easily automatable. If they are automated, this does not mean that the job of being a police officer will disappear. It is more likely that the job will be redefined to include or prioritise other tasks (e.g. in-person community engagement and creative problem-solving).

This leads me to another important point. I’ve been speaking somewhat loosely about the possibility of automating the tasks that make up the role of being a police officer. There are, in fact, different types of task relationships between humans and automating technologies that are obscured when you talk about things in this way. There are three kinds of relationship that I think are worth distinguishing between:

Tool Relationships: These arise when humans simply use technology as a tool to perform their job-related tasks more efficiently. Tools do not replace humans; they simply enable those humans to perform their tasks more effectively. Some automating technologies, e.g. bomb disposal robots, are tools. They are teleoperated by human controllers. The humans are still essential to the performance of the task.
Partnership Relationships: These arise when the machines are capable of performing some elements of a task by themselves (autonomously) but they still partner with humans in their performance. I think many predictive policing systems are of this form. They can perform certain kinds of data analysis autonomously, but they are still heavily reliant on humans for inputting, interpreting and acting upon that data.
Usurpation Relationships: These arise when the machines are capable of performing the whole task by themselves and do not require any human assistance or input. I think some of the new security bots, as well as certain automated surveillance and enforcement technologies are of this type. They can fully replace human task performers, even if those humans retain some supervisory role.


All three relationship types pose different risks for humans working within the policing system. Tool relationships don’t threaten mass technological unemployment, but they do threaten to change the skills and incentives faced by police officers. Instead of being physically dextrous and worrying about their own safety, police officers just have to be good at controlling machines that are put in harm’s way. Something similar is true for partnership relationships, although those systems may threaten at least some displacement and unemployment. Usurpation relationships, of course, promise to be the most disruptive and the most likely to threaten unemployment for humans. Even if we need some human commanders and supervisors for the usurpers we probably need fewer of them relative to those who are usurped.

So what’s the bottom line then? Will human police officers be displaced by automating technologies? I make three predictions, presented in order of likelihood:


  • (a) Future (human) police officers will require different skills and training as a result of automation: they will have to develop skills that are complementary to those of the machines, not in competition with them. This is a standard prediction in debates about automation and should not be controversial (indeed, the desire for machine-complementary skills in policing is already obvious).

  • (b) This could lead to significant polarisation, inequality, and redefinition of what it means to be a police officer. This, again, tracks with what happens in other industries. Some people are well-poised to benefit from the rise of the machines: they have skills that are in short supply and they can leverage the efficiencies of the technologies to their own advantage. They will be in high demand and will attract high wages. Others will be less well-poised to benefit from the rise of the machines and will be pushed into forms of work that are less skilled and less respected. This could lead to some redefinition of the role of being a ‘police officer’ and some dissatisfaction within the ranks.

  • (c) There might be significant technological unemployment of police officers. In other words, there may be many fewer humans working in the police forces of the future than at present. This is the prediction about which I am least confident. Police officers, unlike many other workers, are usually well-unionised and so are probably more able to resist technological unemployment than other workers. I also suspect there is some public desire for human faces in policing. Nevertheless, mass unemployment of police officers is still conceivable. It may also happen by stealth (e.g. existing human workers are left to retire and are not replaced, and roles are redefined to gradually phase out humans).



3. The Automation of Policing as a Whole
So much for the individual police officers; what about the system of policing as a whole? Let’s go back to the framework I developed earlier on. As mentioned, automating technologies could be (and are being) used to perform both the ‘detect and enforce’ and the ‘predict and prevent’ functions. What I want to suggest now is that, although this is true, it’s possible (and perhaps even likely) that automating technologies will encourage a shift away from ‘detect and enforce’ modes of policing to ‘predict and prevent’ modes. Indeed, automating technologies may encourage a ‘prevent-only’ model of policing. Furthermore, even when automated systems are used to perform the detect and enforce functions, they are likely to do so in a different way.

Neither of these suggestions is unique to me. Regulatory theorists have long observed that technology often encourages a shift from ‘detect and enforce’ to ‘predict and prevent’ methods of ensuring normative compliance. Roger Brownsword, for example, talks about the shift from rule-enforcement to ‘technological management’ in regulatory systems. He gives the example of a golf course that is having trouble with its members driving their golf carts over a flowerbed. To stop them doing this, the management committee first create a rule that assigns penalties to those who drive their golf carts over the flowerbed. They then put up a surveillance camera to help them detect breaches of this rule. This helps a little bit, but then a new technology comes on the scene that enables, through GPS-tracking, the remote surveillance and disabling of any golf carts that come close to the flowerbed. Since that system is so much more effective — it renders non-compliance with the rule impossible — the committee adopt that instead. They have moved from traditional rule-enforcement to technological management.
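
To make Brownsword's contrast concrete, here is a minimal, purely illustrative Python sketch of the two regulatory styles. Everything in it (the GolfCart class, the FLOWERBED_ZONE coordinates, the method names) is invented for illustration; the point is only that 'detect and enforce' records a breach and sanctions it afterwards, while 'technological management' makes the breach impossible in the first place.

```python
from dataclasses import dataclass, field

# Hypothetical geofenced zone around the flowerbed: (x_min, x_max, y_min, y_max).
FLOWERBED_ZONE = (10.0, 15.0, 20.0, 25.0)


def in_zone(x, y, zone=FLOWERBED_ZONE):
    x_min, x_max, y_min, y_max = zone
    return x_min <= x <= x_max and y_min <= y <= y_max


@dataclass
class GolfCart:
    cart_id: str
    x: float = 0.0
    y: float = 0.0
    disabled: bool = False
    violations: list = field(default_factory=list)

    def move_detect_and_enforce(self, x, y):
        """Old regime: the cart can go anywhere; breaches are logged and sanctioned later."""
        self.x, self.y = x, y
        if in_zone(x, y):
            self.violations.append((x, y))  # penalty applied after the fact

    def move_technological_management(self, x, y):
        """New regime: the GPS system refuses to let the cart enter the zone at all."""
        if in_zone(x, y):
            self.disabled = True  # cart remotely disabled; non-compliance is impossible
            return
        self.x, self.y = x, y


cart = GolfCart("cart-7")
cart.move_detect_and_enforce(12.0, 22.0)        # breach recorded, sanction follows
cart.move_technological_management(12.0, 22.0)  # breach prevented outright
print(cart.violations, cart.disabled)           # [(12.0, 22.0)] True
```

The second method is the software equivalent of hard-coding the preferred normative values into the environment: there is nothing left to enforce because there is nothing left to breach.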

Elizabeth Joh argues that something similar is likely to happen in the ‘smart cities’ of the future. Instead of using technology simply to detect and enforce breaches of the law, it will be used to prevent non-compliance with the rules. The architecture of the city will ‘hardcode’ in the preferred normative values and will work to disable or deactivate anyone who tries to reject those values. Furthermore, constant surveillance and monitoring of the population will enable future police forces to locate those who pose a threat to the system and quickly defuse that threat. This may lead to an expansion of the set of rules that the policing systems try to uphold to include relatively minor infractions that disturb the public peace. Joh thinks that this might lead to a ‘disneyfication’ of future policing. She bases this judgment on a famous study by Shearing and Stenning on security practices in Disney theme parks. She thinks this provides a model for policing in the smart city:

“The company anticipates and prevents possibilities for disorder through constant instructions to visitors, physical barriers that both guide and limit visitors’ movements, and through “omnipresent” employees who detect and correct the smallest errors (Shearing & Stenning 1985: 301). None of the costumed characters nor the many signs, barriers, lanes, and gardens feel coercive to visitors. Yet through constant monitoring, prevention, and correction embedded policing is part of the experience…”

(Joh 2018)

It’s exactly this kind of policing that is enabled by automating technologies.

This is not to say that detection and enforcement will play no part in the future of policing. There will always be some role for that, but it might take a very different form. Instead of the strong arm of the law we might have the soft hand of the administrator. Instead of being sent to jail and physically coerced when we fail to comply with the law, we might be nudged or administratively sanctioned. The Chinese Social Credit system, much reported on and much maligned in the West, provides one possible glimmer of this future. Through mass surveillance and monitoring, compliance with rules can be easily rewarded or punished through a social scoring system. Your score determines your ease of access to social services and opportunities. We already have isolated and technologically enabled versions of this in Western democracies — e.g. penalty points systems for driving licences and credit rating scores in finance — and the Chinese system simply takes these to their logical (and technological) extreme.
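
To illustrate the bare mechanics of such a scoring regime, here is a minimal Python sketch. The rule names, weights, thresholds and function names are entirely invented; they are not drawn from the Chinese system, from any real penalty-points scheme, or from any actual credit-rating model.

```python
# A hypothetical compliance-scoring scheme: recorded rule events adjust a score,
# and the score gates access to services. All names, weights and thresholds are
# invented for illustration only.
RULE_WEIGHTS = {
    "paid_taxes_on_time": +10,
    "minor_speeding_offence": -5,
    "public_disturbance": -20,
}

SERVICE_THRESHOLDS = {
    "standard_travel_booking": 50,
    "fast_track_loan_approval": 80,
}


def update_score(score, events):
    """Apply each recorded rule event to the running compliance score."""
    for event in events:
        score += RULE_WEIGHTS.get(event, 0)
    return max(0, min(100, score))  # clamp to a 0-100 scale


def accessible_services(score):
    """A service is unlocked once the score clears its threshold."""
    return [s for s, threshold in SERVICE_THRESHOLDS.items() if score >= threshold]


if __name__ == "__main__":
    score = update_score(60, ["paid_taxes_on_time", "minor_speeding_offence"])
    print(score)                       # 65
    print(accessible_services(score))  # ['standard_travel_booking']
```

The ethically salient point is how little machinery is needed: once behaviour is logged at scale, 'reward and punish' collapses into a lookup table and a threshold.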

There is one other possible future for policing that is worth mentioning. It could be that the rise of automating technologies will encourage a shift away from public models of policing to privatised models. This is already happening to some extent. Many automated systems used by modern police forces are owned by private companies and hence lead to a public-private model of policing (with many attendant complexities when it comes to the administration of the law). But this trend may continue to push towards wholly private forms of automated policing. I cannot currently afford my own team of private security guards, but I might be able to afford my own team of private security bots (or at least rent them from an Uber-like company).

The end result may be that keeping the peace is no longer a public responsibility discharged by the police, but a private responsibility discharged by each of us.


4. Conclusion: Ethico-Legal Concerns
Let me conclude by briefly commenting on the ethical and legal concerns that could result from the automation of policing. There are five that I think are worth mentioning here that arise in other cases of automation too:

Freedom to Fail: The shift from rule-enforcement to technological management seems to undermine human autonomy and moral agency. Instead of being given the opportunity to exercise their free will and agency, humans are constrained and programmed into compliance. They lose their freedom to fail. Should we be concerned?
Responsibility Gaps: As we insert autonomous machines into the policing system, questions arise as to who is responsible for the misdeeds of these machines. Responsibility gaps open up that must be filled.
Transparency and Accountability: Related to the problem of responsibility gaps, automating technologies are often opaque or unclear in their operations. How can we ensure sufficient transparency? Who will police the automated police?
Biased Data —> Biased Outcomes: Most modern-day automating technologies are trained on large datasets. If the information within these datasets is biased or prejudiced, this often leads to the automating technologies being biased or prejudiced. Concerns about this have already arisen in relation to predictive policing and algorithmic sentencing. How can we stop this from happening?
The Value of Inefficiency: One of the alleged virtues of automating technologies is their efficient and unfailing enforcement/compliance with rules. But is this really a good thing? As Woodrow Hartzog and his colleagues have pointed out, it could be that we don’t want our social rules to be efficiently enforced. Imagine if you were punished every time you broke the speed limit? Would that be a good thing? Given how frequently we all tend to break the speed limit, and how desirable this is on occasion, it may be that efficient enforcement is overkill. In other words, it could be that there is some value to inefficiency that is lost when we shift to automating technologies. How can we preserve valuable forms of inefficiency?


I think each of these issues is worthy of more detailed consideration. I just want to close by noting how similar they are to the issues raised in other debates about automation. This is one of the main insights I derived from preparing this talk. Although we certainly should talk about the consequences of automation in specific domains (finance, driving, policing, military weapons, medicine, law etc.), it is also worth developing more general theoretical models that can both explain and prescribe answers to the questions we have about automation.




Friday, October 5, 2018

Episode #46 - Minerva on the Ethics of Cryonics


In this episode I talk to Francesca Minerva. Francesca is a postdoctoral fellow at the University of Ghent. Her research focuses on applied philosophy, specifically lookism, conscientious objection, abortion, academic freedom, and cryonics. She has published many articles on these topics in some of the leading academic journals in ethics and philosophy, including the Journal of Medical Ethics, Bioethics, Cambridge Quarterly Review of Ethics and the Hastings Centre Report. We talk about life, death and the wisdom and ethics of cryonics.

You can download the episode here or listen below. You can also subscribe on iTunes or Stitcher (the RSS feed is here).


Show Notes:

  • 0:00 - Introduction
  • 1:34 - What is cryonics anyway?
  • 6:54 - The tricky logistics of cryonics: you need to die in the right way
  • 10:30 - Is cryonics too weird/absurd to take seriously? Analogies with IVF and frozen embryos
  • 16:04 - The opportunity cost of cryonics
  • 18:18 - Is death bad? Why?
  • 22:51 - Is life worth living at all? Is it better never to have been born?
  • 24:44 - What happens when life is no longer worth living? The attraction of cryothanasia
  • 30:28 - Should we want to live forever? Existential tiredness and existential boredom
  • 37:20 - Is immortality irrelevant to the debate about cryonics?
  • 41:42 - Even if cryonics is good for me might it be the unethical choice?
  • 45:00 (ish) - Egalitarianism and the distribution of life years
  • 49:39 - Would future generations want to revive us?
  • 52:34 - Would we feel out of place in the distant future?

Relevant Links

 

Thursday, October 4, 2018

The Philosophy of Space Exploration (Index)




I have written quite a few pieces about the philosophy and ethics of space exploration over the past 12 months. I am very interested in the idea that space exploration can represent a long-term utopian project for humanity. I expect this series will continue to grow so I thought it might be useful to collect them together in one place. Enjoy!








I have also recorded some podcasts that touch on the philosophy and ethics of space exploration: