Tuesday, October 24, 2017

Podcast Interview - Singularity Bros #114 on Robot Sex

Logo from the Singularity Bros Podcast

As part of the major publicity drive that I am putting together for the book Robot Sex: Social and Ethical Implications, I just appeared on the Singularity Bros Podcast. We had a very wide-ranging and philosophically rich discussion about the ethics of sexual relationships with robots. You should check it out here.

And remember: if you want to buy the book, it is just a click away.

Sunday, October 22, 2017

Freedom and the Unravelling Problem in Quantified Work

A Machinist at the Tabor Company where Frederick Taylor (founder of 'scientific management') consulted.

[This is a text version of a short talk I delivered at a conference on ’Quantified Work’. It was hosted by Dr Phoebe Moore at Middlesex University on the 13th October 2017 and was based around her book ‘The Quantified Self in Precarity’.]

Surveillance has always been a feature of the industrial workplace. With the rise of industrialism came the rise of scientific management. Managers of manufacturing plants came to view the production process as a machine, not just as something that involved the use of machines. The human workers were simply parts of that machine. Careful study of the organisation and distribution of the machine parts could enable a more efficient production process. To this end, early pioneers in scientific management (such as Frederick Taylor and Lillian and Frank Gilbreth) invented novel methods for surveilling how their workers spent their time.

Nowadays, the scale and specificity of our surveillance techniques have changed. Our digitised workplaces enable far more information to be collected about our movements and behaviour, particularly when wearable smart-tech is factored into the mix. The management philosophy underlying the workplace has also changed. Where Taylor and the Gilbreths saw the goal of scientific management as creating a more consistent and efficient machine, we now embrace a workplace philosophy in which the ability to rapidly adapt to a changing world is paramount (the so-called ‘agile’ workplace). Acceleration and disruption are now the aim of the game. Workers must be equipped with the tools to enable them to navigate an uncertain world. What’s more, work now never ends — it follows us home on our laptops and phones — and we are constantly pressured to be available to work, while maintaining overall health and well-being. Employers are attuned to this and have instituted various corporate wellness programmes aimed at enhancing employee health and well-being, while raising productivity. The temptation to use ‘quantified self’ technology to track and nudge employee behaviour is, thus, increasing.

These are the themes addressed in Phoebe’s book, and I think they prompt the following question, one that I will seek to answer in this talk:

Question: Does the rise of ‘quantified self’ surveillance threaten our freedom in some new or unique way?

In other words, do these new forms of workplace surveillance constitute something genuinely new or unprecedented in the world of work, or are they really just more of the same? I consider two answers to that question.

Answer 1: No, because work always, necessarily, undermines our freedom
The first answer is the sceptical one. The notion that work and freedom are mutually inconsistent is a long-standing one in left-wing circles. Slavery is the epitome of unfreedom. Work, it is sometimes claimed, is a form of ‘waged’ or ‘economic’ slavery. You are not technically owned by your employer (after all you could be self-employed, as many of us now are in the ‘gig’ economy) but you are effectively compelled to work out of economic necessity. Even in countries with a generous social welfare provision, access to this provision is usually tied to the ability and willingness to work. There is, consequently, no way to escape the world of work.

I’ve covered arguments of this sort previously on my blog. My favourite comes from the work of Julia Maskivker. The essence of her argument is this:

(1) A phenomenon undermines our freedom if: (a) it limits our ability to choose how to make use of our time; (b) it limits our ability to be the authors of our own lives; and/or (c) it involves exploitative/coercive offers.
(2) Work, in modern society, (a) limits our ability to choose how to make use of our time; (b) limits our ability to be the authors of our own lives; and (c) involves an exploitative/coercive offer.
(3) Therefore, work undermines our freedom.

Now, I’m not going to defend this argument here. I did that on a previous occasion. Suffice to say, I find the premises in it plausible, with something reasonable to be said in defence of each. I’m not defending it because my present goal is not to consider whether work does, in fact, always undermine our freedom, but rather to consider what the consequences of accepting this view are for the debate about quantified work practices.

You could argue that if you accept it, then there is nothing really interesting to be said about the freedom-affecting potential of quantified work. If work always undermines our freedom, then quantified work practices are just more in a long line of freedom-undermining practices. They do not threaten something new or unique.

I am sympathetic to this claim but I want to resist it. I want to argue that even if you think freedom is necessarily undermined by work, there is the possibility of something new and different being threatened by quantified work practices. This is for three reasons. First, even if the traditional employer-employee relationship undermines freedom, there is usually some possibility of escape from that freedom-undermining characteristic in the shape of down time or leisure time. Quantified work might pose a unique threat if it encourages and facilitates more surveillance in that down time. Second, quantified work might threaten something new if its use is largely self-directed, rather than other-directed. In other words, if it is imposed from the bottom up, by workers themselves, and not from the top down, by employers. Finally, quantified work might threaten something new simply due to the scale and ubiquity of the available surveillance technology.

As it happens, I think there are some reasons to think that each of these three things might be true.

Answer 2: Yes, due to the unravelling problem
The second answer maintains that there is something new and different in the modern world of quantified work. Specifically, it claims that quantified work practices pose a unique threat to our freedom because they hasten the transition to a signalling economy, which in turn leads to the unravelling problem. I take this argument from the work of Scott Peppet.

A ‘signalling’ economy is to be differentiated from a ‘sorting’ economy. The difference has to do with how information is acquired by different economic actors. Information is important when making decisions about what to buy and who to employ. If you are buying a used car, you want to know whether or not it is a ‘lemon’. If you are buying health insurance, the insurer will want to know if you have any pre-existing conditions. If you are looking for a job, your prospective employer will want to know whether you have the capacity to do it well. Accurate, high-quality information enables more rational planning, although it sometimes comes at the expense of those whose informational disclosures rule them out of the market for certain goods and services. In a ‘sorting’ economy, the burden is on the employer to screen potential employees for the information they deem relevant to the job. In a ‘signalling’ economy, the burden is on the employee to signal accurate information to the employer.

With the decline in long-term employment, and the corresponding rise in short-term, contract-based work, there has been a remarkable shift from a sorting economy to a signalling economy. We are now encouraged to voluntarily disclose information to our employers in order to demonstrate our employability. Doing so is attractive because it might yield better working conditions or pay. The problem is that what initially appears to be a voluntary set of disclosures ends up being a forced/compelled disclosure. This is due to the unravelling problem.

The problem is best explained by way of an example. Imagine you have a bunch of people selling crates of oranges on the export market. The crates carry a maximum of 100 oranges, but they are carefully sealed so that a purchaser cannot see how many oranges are inside. What’s more, the purchaser doesn’t want to open the crate prior to transport because doing so would cause the oranges to go bad. But, of course, the purchaser can easily verify the total number of oranges in the crate after transport by simply opening it and counting them. Now suppose you are one of the people selling the crates of oranges. Will you disclose to the purchaser the total number of oranges in the crate? You might think that you shouldn’t because, if you are selling fewer than the others, disclosure would put you at a disadvantage on the market. But a little bit of game theory tells us that we should expect the sellers to disclose the number of oranges in the crates. Why so? Well, if you had 100 oranges in your crate, you would be incentivised to disclose this to any potential purchaser. Doing so makes you an attractive seller. Correspondingly, if you had 99 oranges in your crate, and all the sellers with 100 oranges have disclosed this to the purchasers, you should also disclose. If you don’t, there is a danger that a potential purchaser will lump you in with anyone selling 0-98 oranges. In other words, because those with the maximum number of oranges in their crates are sharing this information, purchasers will tend to assume the worst about anyone not sharing the number of oranges in their crate. But once you have disclosed the fact that you have 99 oranges in your crate, the same logic will apply to the person with 98 oranges, and so on all the way down to the seller with 1 orange in their crate.
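To see how the incentives cascade, here is a minimal sketch of the unravelling logic in Python. It is purely illustrative: the one-seller-per-crate-count population and the purchaser's pricing rule are my assumptions, not anything from Peppet's paper.

```python
# A toy model of informational unravelling. Assumptions (mine, for
# illustration): one seller for each crate count from 1 to 100, and a
# purchaser who prices any undisclosed crate at the average count of the
# sellers still holding out.

def unravel(counts):
    """Return (counts disclosed, in order of disclosure; counts still hidden)."""
    hidden = sorted(counts, reverse=True)
    disclosed = []
    while hidden:
        pool_price = sum(hidden) / len(hidden)  # offer made to non-disclosers
        if hidden[0] <= pool_price:
            break  # the best remaining hold-out gains nothing by disclosing
        disclosed.append(hidden.pop(0))  # top hold-out discloses; pool price falls
    return disclosed, hidden

disclosed, hidden = unravel(range(1, 101))
print(f"{len(disclosed)} sellers disclose; still hidden: {hidden}")
# -> 99 sellers disclose; still hidden: [1]
# The seller with a single orange never strictly gains by disclosing, but
# their count is revealed by elimination anyway.
```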

This is informational unravelling in practice. The seller with only 1 orange in their crate would much rather not disclose this fact to the purchasers, but they are ultimately compelled to do so by the incentives in operation on the market. The claim I am making here — and that Peppet makes in his paper — is that unravelling is also likely to happen on the employment market. The more valuable information we have about ourselves, the more we are incentivised to disclose this to our employers in order to maintain our employability. Those with the best information will do so voluntarily and willingly, but ultimately everybody will be forced to do so in an effort to differentiate themselves from other, potentially ‘inferior’, employees.

This could have a pretty dramatic effect on our freedom. If quantified self technologies enable more and more valuable information to be tracked and disclosed, there will be more and more unravelling, which will in turn lead to more and more forced disclosures. This could result in something quite different from the old world of workplace surveillance, partly because it is being driven from the bottom up, i.e. workers do it themselves in order to secure some perceived advantage. There are laws in place that prevent employers from seeking certain information about their employees (e.g. information about health conditions) but those laws usually only cover cases where the employer demands the information. Where the information is being supplied, seemingly willingly, by masses of gig workers looking to increase their employability, the situation is rather different. This could be compounded by the fact that the types of information that are desirable in the new, agile, workplace will go beyond simple productivity metrics into information about general health and well-being. New and more robust legal protections may be required to redress this problem of seemingly voluntary disclosure.

I’ll close on a more positive note. Even though I think the unravelling problem is worth taking seriously, the argument I have presented is premised on the assumption that the information derived from quantified self technologies is in fact valuable. This may not be the case. It may turn out that accurately signalling something like the number of hours you slept last night, the number of calories you consumed yesterday, or the number of steps you have taken, is not particularly useful to employers. In that case, the scale of the unravelling problem might be mitigated. But we should still be cautious. There is a distinction to be drawn between information that is genuinely valuable (i.e. has some positive link to economic productivity) and information that is simply perceived to be valuable (i.e. thought to be of value by potential employers). Unfortunately, the latter is what really counts, not the former. I see this all the time in my own job. Universities are interested in lots of different metrics for gauging the success of their employees (papers published, number of citations, research funding received, number of social media engagements, number of paper downloads, etc.). Many of these metrics are of dubious value. But that doesn’t matter. They are perceived as having some value and so academic staff are encouraged to disclose more and more of them.

Saturday, October 14, 2017

Some things you wanted to know about robot sex* (but were afraid to ask)


I am pleased to announce that Robot Sex: Social and Ethical Implications (MIT Press, 2017), edited by myself and Neil McArthur, is now available for purchase. You can buy the hardcopy/ebook via Amazon in the US. You can buy the ebook in the UK as well, but the hardcopy might take another few weeks to arrive. I've never sold anything before via this blog. That all changes today. Now that I actually have something to sell, I'm going to turn into the most annoying, desperate, cringeworthy and slightly pathetic salesman you could possibly imagine...

...Hopefully not. But I would really appreciate it if people could either (a) purchase a copy of the book and/or (b) recommend it to others and/or (c) review it and generally spread the word. Academic books are often outrageously expensive, but this one lies at the more reasonable end of the spectrum ($40 in the US and £32 in the UK). I appreciate it is still expensive though. To whet your appetite, here's a short article I put together with Neil McArthur that covers some of the themes in the book.


Sex robots are coming. Basic models exist today and as robotics technologies advance in general, we can expect to see similar advances in sex robotics in particular.

None of this should be surprising. Technology and sex have always gone hand-in-hand. But this latest development in the technology of sex seems to arouse considerable public interest and concern. Many people have questions that they want answered, and as the editors of a new academic book on the topic, we are willing to oblige. We present here, for your delectation, *some* of the things you might have wanted to know about robot sex, but were afraid to ask.

1. What is a sex robot?
A ‘robot’ is an embodied artificial agent. A sex robot is a robot that is designed or used for the purpose of sexual stimulation. One of us (Danaher) has argued that sex robots will have three additional properties: (a) human-like appearance, (b) human-like movement and behaviour, and (c) some artificial intelligence. Each of these properties comes in degrees. The current crop of sex robots, such as the Harmony model developed by Abyss Creations, possesses them to a limited extent. Future sex robots will be more sophisticated. You could dispute this proposed definition, particularly its fixation on human-likeness, but we suggest that it captures the kind of technology that people are interested in when they talk about ‘sex robots’.

2. Can you really have sex with a robot?
In a recent skit, the comedian Richard Herring suggested that the use of sex robots would be nothing more than an elaborate form of masturbation. This is not an uncommon view and it raises the perennial question: what does it mean to ‘have sex’? Historically, humans have adopted anatomically precise definitions of sexual practice: two persons cannot be said to have ‘had sex’ with one another until one of them has inserted his penis into the other’s vagina. Nowadays we have moved away from this heteronormative, anatomically-obsessive definition, not least because it doesn’t capture what same-sex couples mean when they use the expression ‘have sex’. In their contribution to our book, Mark Migotti and Nicole Wyatt favour a definition that centres on ‘shared sexual agency’: two beings can be said to ‘have sex’ with one another when they intentionally coordinate their actions to a sexual end. This means that we can only have sex with robots when they are capable of intentionally coordinating their actions with us. Until then it might really just be an elaborate form of masturbation — emphasis on the ‘elaborate’.

3. Can you love a robot?
Sex and love don’t have to go together, but they often do. Some people might be unsatisfied with a purely sexual relationship with a robot and want to develop a deeper attachment. Indeed, some people have already formed very close attachments to robots. Consider, for example, the elaborate funerals that US soldiers have performed for their fallen robot comrades. Or the marriages that some people claim to have with their sex dolls. But can these close attachments ever amount to ‘love’? Again, the answer to this is not straightforward. There are many different accounts of what it takes to enter into a loving relationship with another being. Romantic love is often assumed to require some degree of reciprocity and mutuality, i.e. it’s not enough for you to love the other person, they have to love you back. Furthermore, romantic love is often held to require free will or autonomy: it’s not enough for the other person to love you back, they have to freely choose you as their romantic partner. The big concern with robots is that they wouldn’t meet these mutuality and autonomy conditions, effectively being pre-programmed, unconscious sex slaves. It may be possible to overcome these barriers, but it would require significant advances in technology.

4. Should we use child sex robots to treat paedophilia?
Robot sex undoubtedly has its darker side. The darkest of all is the prospect of child sex robots that cater to those with paedophiliac tendencies. In July 2014, in a statement that he probably now regrets, the roboticist Ronald Arkin suggested that we could use child sexbots to treat paedophilia in the same way that methadone is used to treat heroin addiction. After all, if the sexbot is just an artificial entity (with no self-consciousness or awareness) then it cannot be harmed by anything that is done to it, and if used in the right clinical setting, this might provide a safe outlet for the expression of paedophiliac tendencies, and thereby reduce the harm done to real children. ‘Might’ does not imply ‘will’, however, and unless we have strong evidence for the therapeutic benefits of this approach, the philosopher Litska Strikwerda suggests that there is more to be said against the idea than in its favour. Allowing for such robots could seriously corrupt our sexual beliefs and practices, with no obvious benefits for children.

5. Will sex robots lead to the collapse of civilisation?
The TV series Futurama has a firm answer to this. In the season 3 episode, ‘I Dated a Robot’, we are told that entering into sexual relationships with robots will lead to the collapse of civilisation because everything we value in society — art, literature, music, science, sports and so on — is made possible by the desire for sex. If robots can give us ‘sex on demand’ this motivation will fade away. The Futurama-fear is definitely overstated. Unlike Freud, we doubt that the motivations for all that is good in the world ultimately reduce to the desire for sex. Nevertheless, there are legitimate concerns one can have about the development of sex robots, in particular the ‘mental model’ of sexual relationships that they represent and reinforce. Others have voiced these concerns, highlighting the inequality inherent in a sexual relationship with a robot and how that may spill over into our interactions with one another. At the same time, there are potential upsides to sex robots that are overlooked. One of us (McArthur) argues in the book that sex robots could distribute sexual experiences more widely and lead to more harmonious relationships by correcting for imbalances in sex drive between human partners. Similarly, our colleague Marina Adshade argues that sex robots could improve the institution of marriage by making it less about sex and more about love.

This is all speculative, of course. The technology is still in its infancy but the benefits and harms need to be thought through right now. We recommend viewing its future development as a social experiment, one that should be monitored and reviewed on an ongoing basis. If you want to learn more about the topic, you should of course buy the book.

~ Full Table of Contents ~

I. Introducing Robot Sex
1. 'Should we be thinking about robot sex?' by John Danaher 
2. 'On the very idea of sex with robots' by Mark Migotti and Nicole Wyatt

II. Defending Robot Sex
3. 'The case for sex robots' by Neil McArthur 
4. 'Should we campaign against sex robots?' by John Danaher, Brian Earp and Anders Sandberg 
5. 'Sexual rights, disability and sex robots' by Ezio di Nucci

III. Challenging Robot Sex
6. 'Religious perspectives on sex with robots' by Noreen Herzfeld
7. 'The Symbolic-Consequences argument in the sex robot debate' by John Danaher 
8. 'Legal and moral implications of child sex robots' by Litska Strikwerda

IV. The Robot's Perspective
9. 'Is it good for them? Ethical concern for the sexbots' by Steve Petersen 
10. 'Was it good for you too? New natural law theory and the paradox of sex robots' by Joshua Goldstein

V. The Possibility of Robot Love
11. 'Automatic sweethearts for transhumanists' by Michael Hauskeller
12. 'From sex robots to love robots: Is mutual love with a robot possible?' by Sven Nyholm and Lily Eva Frank

VI. The Future of Robot Sex
13. 'Intimacy, Bonding, and Sex Robots: Examining Empirical Results and Exploring Ethical Ramifications' by Matthias Scheutz and Thomas Arnold
14. 'Deus sex machina: Loving robot sex workers and the allure of an insincere kiss' by Julie Carpenter
15. 'Sex robot induced social change: An economic perspective' by Marina Adshade

Sunday, October 1, 2017

Episode #30 - Bartholomew on Adcreep and the Case Against Modern Marketing


In this episode I am joined by Mark Bartholomew. Mark is a Professor at the University at Buffalo School of Law. He writes and teaches in the areas of intellectual property and law and technology, with an emphasis on copyright, trademarks, advertising regulation, and online privacy. His book Adcreep: The Case Against Modern Marketing was recently published by Stanford University Press. We talk about the main ideas and arguments from this book.

You can download the episode here or listen below. You can also subscribe on iTunes and Stitcher (RSS is here).

Show Notes

  • 0:00 - Introduction
  • 0:55 - The crisis of attention
  • 2:05 - Two types of Adcreep
  • 3:33 - The history of advertising and its regulation
  • 9:26 - Does the history tell a clear story?
  • 12:16 - Differences between Europe and the US
  • 13:48 - How public and private spaces have been colonised by marketing
  • 16:58 - The internet as an advertising medium
  • 19:30 - Why have we tolerated Adcreep?
  • 25:32 - The corrupting effect of Adcreep on politics
  • 32:10 - Does advertising shape our identity?
  • 36:39 - Is advertising's effect on identity worse than that of other external forces?
  • 40:31 - The modern technology of advertising
  • 45:44 - A digital panopticon that hides in plain sight
  • 48:22 - Neuromarketing: hype or reality?
  • 55:26 - Are we now selling ourselves all the time?
  • 1:04:52 - What can we do to redress adcreep?

Thursday, September 28, 2017

Algorithmic Governance: Developing a Research Agenda Through Collective Intelligence

I have a new paper, just published, on the topic of algorithmic governance. This one is a bit different from my usual fare. It's a report from a 'collective intelligence' workshop that I ran with my colleague Michael Hogan from the psychology department at NUI Galway. It tries to develop a research agenda for the study of algorithmic governance by harnessing the insights from an interdisciplinary group of scholars. It's available in open access format at the journal Big Data and Society. Just click on the paper title below to read the full thing.

Title: Algorithmic Governance: Developing a research agenda through collective intelligence
Journal: Big Data and Society
Authors: John Danaher, Michael J Hogan, Chris Noone, Rónán Kennedy, Anthony Behan, Aisling De Paor, Heike Felzmann, Muki Haklay, Su-Ming Khoo, John Morison, Maria Helen Murphy, Niall O’Brolchain, Burkhard Schafer and Kalpana Shankar
Abstract: We are living in an algorithmic age where mathematics and computer science are coming together in powerful new ways to influence, shape and guide our behaviour and the governance of our societies. As these algorithmic governance structures proliferate, it is vital that we ensure their effectiveness and legitimacy. That is, we need to ensure that they are an effective means for achieving a legitimate policy goal and are also procedurally fair, open and unbiased. But how can we ensure that algorithmic governance structures are both effective and legitimate? This article shares the results of a collective intelligence workshop that addressed exactly this question. The workshop brought together a multidisciplinary group of scholars to consider (a) barriers to legitimate and effective algorithmic governance and (b) the research methods needed to address the nature and impact of specific barriers. An interactive management workshop technique was used to harness the collective intelligence of this multidisciplinary group. This method enabled participants to produce a framework and research agenda for those who are concerned about algorithmic governance. We outline this research agenda below, providing a detailed map of key research themes, questions and methods that our workshop felt ought to be pursued. This builds upon existing work on research agendas for critical algorithm studies in a unique way through the method of collective intelligence.

Tuesday, September 26, 2017

How to Build a Rawlsian Algorithm for Self-Driving Cars

Google's Self-Driving Car - via Becky Stern on Flickr

Swerve or slow down? That is the question. The question that haunts designers of self-driving cars. The dilemma will be familiar to students of moral philosophy. Suppose an autonomous car is driving down an urban street. You are the passenger. Suddenly, from behind a parked car, a group of pedestrians stumbles out into the middle of the road. If the car brakes and continues on its current course, it will not slow down in time to avoid colliding with the group. If it does collide with them, it will almost certainly kill them all. If the car swerves, it will collide with a solid wall, almost certainly killing you. What should it be programmed to do?

Some philosophical thought experiments are completely fanciful — the infamous trolley problems upon which this story is based are good examples of this. This particular philosophical thought experiment is not. It’s likely that self-driving cars will encounter some variant of the swerve or slow down dilemma in the course of their operation. After all, human drivers encounter these dilemmas on a not infrequent basis. And no matter how safe and risk averse the cars are programmed to be, they will have unplanned encounters with reckless pedestrians.

So what should be done? The answer is that the car should be programmed to follow some sort of moral algorithm — a rule (or set of rules) that tells it how to behave in these scenarios. One possibility is that it should be programmed to follow an act-utilitarian algorithm: the probability of death for you and the pedestrians should be calculated (the car could be fed the latest statistics about deaths in these kinds of scenarios and update accordingly), and it should pick the option that maximises the overall survival rate. Alternatively, the car could be programmed to follow a ‘heroic self-sacrifice’ algorithm, i.e. whenever it encounters a scenario like this, it should sacrifice the car and its passenger, not the pedestrians. Either way, the same outcome is likely: the car should swerve, not slow down. More selfish algorithms are possible too. Maybe the car should follow a ‘Randian algorithm’ whereby the interests of the driver trump the interests of the pedestrians?
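For concreteness, here is what an act-utilitarian rule of this kind might look like as code. This is a toy sketch of my own, not anything from the literature; the survival probabilities are the hypothetical figures used later in this post.

```python
def utilitarian_choice(actions):
    """Pick the action that maximises the expected number of survivors."""
    return max(actions, key=lambda a: sum(actions[a]))

# Hypothetical survival probabilities: passenger first, then four pedestrians.
game = {
    "slow down": [0.99, 0.05, 0.05, 0.05, 0.05],  # expected survivors: 1.19
    "swerve":    [0.01, 0.99, 0.99, 0.99, 0.99],  # expected survivors: 3.97
}
print(utilitarian_choice(game))  # -> swerve
```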

In this post, I want to look at yet another possible algorithm that could be adopted in the swerve or slow down dilemma: a Rawlsian one. This is a proposal that has recently been put forward by Derek Leben, a philosopher at the University of Pittsburgh. I find the proposal fascinating because not only is it philosophically interesting, it also sounds eminently feasible. At the same time, it forces us to confront some uncomfortable truths about ethics in the age of autonomous vehicles.

I’ll break the discussion down into three parts. First, I’ll briefly explain Rawlsianism - the philosophy that inspires the algorithm. Second, I’ll outline how a Rawlsian algorithm would work in practice. And third, I’ll address some of the objections one could have to the use of the Rawlsian algorithm. This post is very much a summary of Leben’s article, which I encourage everyone to read. It’s one of the more interesting pieces of applied philosophy that I have read in recent times.

1. What is a Rawlsian Algorithm?
Leben’s proposal is obviously based on the work of John Rawls, who was the most influential political philosopher of the 20th century. Rawls’s most famous work was the 1971 classic A Theory of Justice in which he outlined his vision for a just society. We don’t need to get too mired in the details of the theory for the purposes of understanding Leben’s proposal; a few choice elements are all we need.

First, we need to appreciate that Rawls’s theory is a form of liberal contractarianism. That is to say, Rawls works from the basic liberal assumption that people are moral equals (i.e. no one person has the right to claim moral authority over another without certain legitimacy conditions being met). This moral assumption creates problems because we often need to exercise some coercive control over one another’s behaviour in order to secure mutually beneficial outcomes.

This problem is easily highlighted by thinking about some of the classic ‘games’ that are used to explain the issues that arise when two or more people must cooperate for mutual gain. The Prisoners’ Dilemma is the most famous of these. The set-up will be familiar to many readers (if you know it, skip this paragraph and the next). Two prisoners are arrested for the same crime and put in separate jail cells. The police are convinced that they have enough evidence to charge them with an offence that attracts a two-year sentence; however, they would like to charge at least one of them with a more severe offence that attracts a ten-year sentence. To enable this, the police offer each prisoner the same deal. If one of them ‘squeals’ on their partner and the partner remains silent, they can get off free and the partner will be charged with the ten-year offence. If they both squeal on each other, they both get charged with a five-year offence. If they both remain silent, they will be charged with the two-year offence. If you were one of these prisoners, what would you do? Before answering that, take a look at the payoff matrix for the game, which is illustrated below. The strategies of ’squealing’ and ’staying silent’ have been renamed ‘defect’ and ‘cooperate’ in keeping with the convention in the literature.
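(The payoff matrix appears as an image in the original post. A minimal sketch of it in Python, built from the sentences in the story, together with a check of the dominance argument made in the next paragraph:)

```python
# The Prisoners' Dilemma payoffs described above, as (your sentence,
# partner's sentence) in years; lower is better.
payoffs = {
    ("cooperate", "cooperate"): (2, 2),    # both stay silent
    ("cooperate", "defect"):    (10, 0),   # you stay silent, partner squeals
    ("defect",    "cooperate"): (0, 10),   # you squeal, partner stays silent
    ("defect",    "defect"):    (5, 5),    # both squeal
}

# 'Defect' strictly dominates 'cooperate': whatever your partner does,
# squealing earns you the shorter sentence.
for partner in ("cooperate", "defect"):
    assert payoffs[("defect", partner)][0] < payoffs[("cooperate", partner)][0]
```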

So now that you have looked at the payoff matrix, return to the question: what should you do? If we follow strict principles of rationality, the answer is that you should squeal on your partner. In the language of game theory, doing so ‘strictly dominates’ staying silent: it always yields a higher payoff (or in this instance: a lesser sentence), no matter what your opponent does. The difficulty with this analysis, though, is that it yields an outcome that is clearly worse for both prisoners than remaining silent. They both end up in jail for five years when they could have got away with a two-year sentence. In technical terms, we say that the ‘rational’ choice in the game yields a ‘Pareto inefficient’ outcome. There is another combination of choices in the game that would make every player better off without a loss to anyone else (an outcome that is ‘Pareto optimal’).

The Prisoners’ Dilemma is just a story, but the interaction it describes is supposed to be a common one. Indeed, one of Rawls’s key contentions — and if you don’t believe me, read his lecture notes on political philosophy — is that coming up with a way to solve Prisoners’ Dilemma scenarios is central to liberal political theory. Somehow, we have to move society out of the Pareto inefficient outcomes and into the Pareto optimal ones. The obvious way to do this is to establish a state with a monopoly on the use of violence. The state can then threaten people with worse outcomes if they fail to cooperate. But coercive threats don’t sit easily with the liberal conscience. There has to be some morally defensible reason for allowing the state to exercise its muscle in this way.

That’s where the Rawlsian algorithm comes in. Like other liberal theorists, Rawls argued that the authority we grant the state has to be such that reasonable people would agree to it. Imagine that everyone is getting together to negotiate the ‘contract of state’. What terms and conditions would they agree to? One difficulty we have in answering this question is that we are biased by our existing predicament. Some of us are well-off under the current system and will, no doubt, favour its terms and conditions. Others are less well-off and will favour renegotiation. To figure out what reasonable people would really agree to, we need to rid ourselves of these biases. Rawls recommended that we do this by imagining that we are negotiating the contract of state from behind a ‘veil of ignorance’. This veil hides our current predicament from us. As a result, we don’t know where we will end up after the contract has been agreed. We might be among the better off; but then again we might not.

Rawls’s key claim then is that if we were negotiating from behind the veil of ignorance, we would adopt the following decision rule:

Maximin decision rule: Favour those terms and conditions (policies, rules, procedures, etc.) that maximise the benefits to the worst-off members of society.

Or, to put it another way, favour the distribution of the benefits and burdens of social living that ‘raises the floor’ to its highest possible level.

This maximin decision rule is in effect a ‘Rawlsian’ algorithm. How could it be implemented in a self-driving car?

2. The Rawlsian Algorithm in Practice
To implement a Rawlsian algorithm in practice, you need to define three variables:

Players: First, you need to define the ‘players’ in the game in which you are interested. In the case of the swerve or slow down ‘game’ the players are the passenger(s) (i.e. you - P1) and the pedestrians. For ease of analysis, let’s say there are four of them (P2... P5).

Actions: Second, you need to define the actions that can be taken in the game. In our case, the actions are the decisions that can be made by the car’s program. There are two actions in this game: slow down (A1) and swerve (A2).

Utility Functions: Third, you’ll need to define the utility functions for the players in the game, i.e. the payoffs they receive for each action taken. In our case, the payoffs can be recorded as the probability of survival for each of the players. This will be a number between 0 and 1. Presumably, actual tables of data could be assembled for this purpose based on records of past accidents of this sort, but let’s say for our purposes that if the car slows down and collides with the four pedestrians, it lowers their probability of survival from 0.99 to 0.05. And if it swerves, it lowers your probability of survival from 0.99 to 0.01. (Just note that this means we are assuming that the pedestrians have a slightly higher probability of survival from collision in this scenario than the passenger does.)

This gives us the following payoff matrix for the game:
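(The matrix appears as an image in the original post; the table below is reconstructed from the figures just given. Entries are probabilities of survival.)

                    P1 (passenger)    P2-P5 (each pedestrian)
  A1 (slow down)        0.99                  0.05
  A2 (swerve)           0.01                  0.99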

With this information in place, we can easily program the car to follow a maximin decision procedure. Remember, the goal of this procedure is to ‘raise the floor’ to its highest possible level. This can be done by following three simple steps:

Step One: Identify the worst payoffs for each possible action and compile them into a set. In our case, the worst payoff for A1 is the 0.05 probability of survival and the worst payoff for A2 is the 0.01 probability of survival. This gives us the following set of worst outcomes (0.05, 0.01).

Step Two: From this set of worst outcomes, identify the best possible outcome and the actions that yield it. Call this outcome a. In our case, outcome a is the 0.05 probability of survival and the action that yields it is A1.

Step Three: If there is only one action that yields a, implement this action. If there is more than one action that yields a, then you need to ‘mask’ for a (i.e. eliminate the a’s from the analysis) and repeat steps one and two again (i.e. maximise for the second-worst outcome). You repeat this process until either (i) you identify a unique action that yields an outcome a, or (ii) there is only one outcome left in the game and a tie between two or more actions that yield it, in which case you randomise between those actions (because it doesn’t matter from the maximin perspective).

In the case of the swerve or slow down dilemma, the algorithm is very simply applied. Following step two there will be only one action (A1) that yields the least-bad outcome in the game (the 0.05 probability of survival). This is the action that will be selected by the car. This means the car will slow down rather than swerve. This is in keeping with Rawls’s maximin procedure since it raises the worst possible outcome from a 0.01 probability of survival to a 0.05 probability of survival. This is, admittedly, somewhat counterintuitive because it means that more people are likely to die, but we return to this point below.
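As a minimal sketch (my own rendering, not code from Leben's paper), the whole procedure fits in a few lines of Python. Sorting each action's payoffs in ascending order makes the 'mask and repeat' loop equivalent to comparing the sorted lists lexicographically:

```python
import random

def maximin_choice(actions):
    """Pick the action with the best worst outcome, breaking ties on the
    second-worst outcome, and so on; randomise if actions remain fully tied."""
    # Steps one and two, with masking folded in: comparing ascending-sorted
    # payoff lists lexicographically maximises the floor, then the second
    # floor, and so on.
    ranked = {a: sorted(p) for a, p in actions.items()}
    best = max(ranked.values())
    winners = [a for a, p in ranked.items() if p == best]
    return winners[0] if len(winners) == 1 else random.choice(winners)

# The swerve or slow down game from the table above
# (payoffs: passenger first, then the four pedestrians):
game = {
    "A1 (slow down)": [0.99, 0.05, 0.05, 0.05, 0.05],
    "A2 (swerve)":    [0.01, 0.99, 0.99, 0.99, 0.99],
}
print(maximin_choice(game))  # -> A1 (slow down): a floor of 0.05 beats 0.01
```

Reversing the two collision figures (pedestrians at 0.01, passenger at 0.05) flips the output to A2, which is the sensitivity discussed two paragraphs below.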

The Rawlsian algorithm is illustrated below.

The Rawlsian Algorithm - diagram from Leben 2017

Two points should be noted before we proceed. First, two aspects of this decision-procedure are not found in Rawls’s original writing: (i) the ‘masking’ procedure and (ii) the randomisation option. These are modifications introduced by Leben, but they make a lot of sense and I would not be inclined to challenge them (I’ve long been a fan of randomisation options in the design of moral algorithms). Second, the maximin procedure can yield significantly different outcomes if you modify the probabilities of survival ever so slightly. For example, if you reversed the probabilities of survival so that the pedestrians had the 0.01 probability of survival following collision and the driver had the 0.05, the maximin procedure would favour swerving over slowing down. This is despite the fact that the utilitarian choice in both cases is the same.

3. Objections to the Rawlsian Algorithm
One thing I like about Leben’s proposal is that it is eminently practicable. Sometimes discussions about moral algorithms are fantastical because they demand information that we simply do not have and could not hope to have. That doesn’t seem to be true here. We could assign reasonable figures to the probability of survival in this scenario that could be quickly calculated and updated by the car’s onboard computer. Furthermore, I like how it puts another option on the table when it comes to the design of moral algorithms. To date, much of the discussion has focused on standard act-utilitarian versus deontological algorithms. This is largely due to the fact that the discussion has been framed in terms of trolley problem dilemmas, which were first invented to test our intuitions with respect to those moral theories.

That said, there are some obvious concerns. One could reject Rawls’s views and so reject any algorithm based on them, but as Leben notes, his job is not to defend Rawlsianism as a whole. That’s too large a task. Other concerns can be tied more specifically to the application of Rawlsianism to the swerve or slow down scenario. Leben discusses three in his article.

The first is that the utility functions are incomplete. The survival probabilities are just one factor among many that we should be considering. Some people would claim that not all lives are equal. Some people are young; some are old. The young have more of their lives left to live. Perhaps they should be favoured in any crash algorithm over the old? In other words, perhaps there should be some weighting for ‘life years’ included in the Rawlsian calculation. Leben points out that, if you wanted to, you could include this information. The QALY (quality adjusted life years) measure is in widespread use in healthcare contexts and could inform the car’s decision-making. It might be a little bit more difficult to implement this in practice. The car would have to be given access to everybody’s QALY score and this would have to be communicated to the car prior to its decision. This is not impossible — given ongoing developments in smart tech and mass surveillance, people could be forced to carry devices on their person that communicated this information to the car — but allowing for it would have other significant social costs that should be borne in mind.

The second is that applying the Rawlsian algorithm might create a perverse incentive. Remember, the maximin decision procedure tries to avoid the worst possible outcomes. This means, bizarrely, that it actually pays to be the person with the highest probability of death in a swerve or slow down dilemma. We see this clearly above: the mere fact that the passenger had a higher probability of death from collision with the wall was enough to save his/her skin, despite the fact that doing so would raise the probability of death for more people. This might give people a perverse incentive not to take precautions to protect themselves from harm on the roads. But this incentive is probably overstated. Even though the kinds of dilemmas covered by the algorithm are not implausible, they are still going to be relatively rare. The benefits of taking precautions in all other contexts are likely to outweigh the costs of doing so when you land yourself in a swerve or slow-down type scenario.

The third and final concern is simply the one I noted above: that the maximin procedure yields a very counterintuitive result in the example given. It says the car should collide with the pedestrians even though this means that more people are likely to die. This is pretty close to a typical utilitarianism vs Rawlsianism concern and so brings us into bigger issues in moral philosophy. But Leben says a couple of sensible things about this. One is that how counterintuitive this is will depend on how much Rawlsian Kool-Aid we have imbibed. Rawls argued that we should think about social rules from behind a ‘thick’ veil of ignorance, i.e. a veil that masks us from pretty much everything we know about our current selves, leaving us with just our basic rational and cognitive faculties. If we really didn’t know who we might end up being in the swerve or slow down dilemma, we might be inclined to favour the maximin rule. The other point, which is probably more important, is that every moral rule that is consistently followed yields counterintuitive results. So if we’re after totally intuitive results when it comes to designing self-driving cars, we are probably on a fool’s errand. Still, as I discussed previously when looking at the work of Hin-Yan Liu, the fact that self-driving cars might follow moral rules more consistently than humans ever could might tell against them for other reasons.

Anyway, that's it for this post.

Friday, September 22, 2017

Episode #29 - Moore on the Quantified Worker


In this episode, I talk to Phoebe Moore. Phoebe is a researcher and a Senior Lecturer in International Relations at Middlesex University. She teaches International Relations and International Political Economy and has published several books, articles and reports about labour struggle, industrial relations and the impact of technology on workers' everyday lives. Her current research, funded by a BA/Leverhulme award, focuses on the use of self-tracking devices in companies. She is the author of a book on this topic entitled The Quantified Self in Precarity: Work, Technology and What Counts, which has just been published. We talk about the quantified self movement, the history of workplace surveillance, and a study that Phoebe did on tracking in a Dutch company.

You can download the episode here, or listen below. You can also subscribe on iTunes and Stitcher.

Show Notes

  • 0:00 - Introduction
  • 1:27 - Origins and Ethos of the Quantified Self Movement
  • 7:39 - Does self-tracking promote or alleviate anxiety?
  • 10:10 - The importance of gamification
  • 13:09 - The history of workplace surveillance (Taylor and the Gilbreths)
  • 16:27 - How is workplace quantification different now?
  • 20:26 - The Agility Agenda: Workplace surveillance in an age of precarity
  • 29:09 - Tracking affective/emotional labour
  • 34:08 - Getting the opportunity to study the quantified worker in the field
  • 38:18 - Can such workplace self-tracking exercises ever be truly voluntary?
  • 41:05 - What were the key findings of the study?
  • 46:07 - Why was there such a high drop-out rate?
  • 49:37 - Did workplace tracking lead to increased competitiveness?
  • 53:32 - Should we welcome or resist the quantified worker phenomenon?

Relevant Links