Monday, November 16, 2015

Is Anyone Competent to Regulate Artificial Intelligence?




Artificial intelligence is a classic risk/reward technology. If developed safely and properly, it could be a great boon. If developed recklessly and improperly, it could pose a significant risk. Typically, we try to manage this risk/reward ratio through various regulatory mechanisms. But AI poses significant regulatory challenges. In a previous post, I outlined eight of these challenges. They were arranged into three main groups. The first consisted of definitional problems: what is AI anyway? The second consisted of ex ante problems: how could you safely guide the development of AI technology? And the third consisted of ex post problems: what happens once the technology is unleashed into the world? They are depicted in the diagram above.

In outlining these problems, I was drawing from the work of Matthew Scherer and his soon-to-be-published article “Regulating Artificially Intelligent Systems: Risks, Challenges, Competencies and Strategies”. Today I want to return to that article and consider the next step in the regulatory project. Once we have a handle on the basic problems, we need to consider who might be competent to deal with them. In most Western countries, there are three main regulatory bodies:

Legislatures: The body of elected officials who enact general laws and put in place regulatory structures (e.g. in the US the Houses of Congress; in the UK the Houses of Parliament; in Ireland the Houses of the Oireachtas).

Regulatory Agencies: The body of subject-area specialists, established through legislation, and empowered to regulate a particular industry/social problem, often by creating, investigating and enforcing regulatory standards (there are many examples, e.g. the US Food and Drug Administration; the UK Financial Conduct Authority; the Irish Planning Board (An Bord Pleanala)).

Courts: The judges and other legal officials tasked with arguing, adjudicating and, ultimately, enforcing legal standards (both civil and criminal).

To these three bodies, you could perhaps add “The Market”, which can enforce certain forms of discipline on private commercial entities, and also internal regulatory bodies within those commercial entities (though such bodies are usually forced into existence by law). For the purposes of this discussion, however, I’ll be sticking to the three bodies just outlined. The question is whether any of these three bodies is competent to regulate the field of artificial intelligence. This is something Scherer tries to answer in his article. I’ll follow his analysis in the remainder of this post, but where Scherer focuses entirely on the example of the United States I’ll try to be a little more universal.

Before I get underway, it is worth flagging two things about AI that could affect the competency of anyone to regulate its development and deployment. The first is that AI is (potentially) a rapidly advancing technology: many technological developments made over the past 50 years are now coming together in the form of AI. This makes it difficult for regulatory bodies to ‘keep up’. The second is that advances in AI can draw on many different fields of inquiry, e.g. engineering, statistics, linguistics, computer science, applied mathematics, psychology, economics and so on. This makes it difficult for anyone to have the relevant subject-area expertise.


1. The Competencies of Legislatures
Legislatures typically consist of elected officials, appointed to represent the interests of particular constituencies of voters, with the primary goal of enacting policy via legislation. Legislatures are set up slightly differently around the world. For example, in some countries, there are non-elected legislatures working in tandem with elected legislatures. In some countries, lobbyists have significant influence over legislators; in others this influence is relatively weak. In some countries, the executive branch of government effectively controls the legislature; in others the executive is an entirely distinct branch of government.

Scherer argues that three things must be remembered when it comes to understanding the regulatory role of a legislature:

Democratic Legitimacy: The legislature is generally viewed as the institution with the most democratic legitimacy, i.e. it is the institution that represents the people’s interests and answers directly to them. Obviously, the perceived legitimacy of the legislature can wax and wane (e.g. it may wane when lobbying power is excessive). Nevertheless, it will still tend to have more perceived democratic legitimacy than the other regulatory bodies.

Lack of Expertise: Legislatures are generally made up of career politicians. It is very rare for these career politicians to have subject matter expertise when it comes to a proposed regulatory bill. They will have to rely on judgments from constituents, advisors, lobbyists and experts called to give evidence before a legislative committee.

Delegation and Oversight: Legislatures have the ability to delegate regulatory power to other agencies. Sometimes they do this by creating an entirely new agency through a piece of legislation. Other times they do so by expanding or reorganising the mission of a pre-existing agency. The legislature then has the power to oversee this agency and periodically call it to account for its actions.

What does all this mean when it comes to the AI debate? It means that legislatures are best placed to determine the values and public interests that should go into any proposed regulatory scheme. They are directly accountable to the people and so they can (imperfectly) channel those interests into the formation of a regulatory system. Because they lack subject matter expertise, they will be unable to determine particular standards or rules that should govern the development and deployment of AI. They will need to delegate that power to others. But in doing so, they could set important general constraints that reflect the public interest in AI.

There is nothing too dramatic in this analysis. This is what legislatures are best-placed to do in virtually all regulatory matters. That said, the model here is idealistic. There are many ways in which legislatures can fail to properly represent the interests of the public.


2. The Competencies of Regulatory Agencies
Regulatory agencies are bodies established via legislation and empowered to regulate a particular area. They are quite variable in terms of structure and remit. This is because they are effectively designed from scratch by legislatures. In most legal systems, there are some general constraints imposed on possible regulatory structures by constitutional principles (e.g. a regulatory agency cannot violate or undermine constitutionally protected rights). But this still gives plenty of scope for regulatory innovation.

Scherer argues that there are four things about regulatory agencies that affect their regulatory competence:

Flexibility: This is what I just said. Regulatory agencies can be designed from scratch to deal with particular industries or social problems. They can exercise a variety of powers, including policy-formation, rule-setting, information-collection, investigation, enforcement, and sanction. Flexibility often reduces over time. Most of the flexibility arises during the ‘design phase’. Once an agency comes into existence, it tends to become more rigid for both sociological and legal reasons.

Specialisation and Expertise: Regulatory agencies can appoint subject-matter experts to assist in their regulatory mission. Unlike legislatures, which have to deal with all social problems, the agency can stay focused on one mission. This enhances its expertise. After all, expertise is a product of both: (a) pre-existing qualification/ability and (b) singular dedication to a particular task.

Independence and Alienation: Regulatory agencies are set up so as to be independent from the usual vagaries of politics. Thus, for example, they are not directly answerable to constituents and do not have to stand for election every few years. That said, the independence of agencies is often more illusory than real. Agencies are usually answerable to politicians and so (to some extent) vulnerable to the same forces. Lobbyists often impact upon regulatory agencies (in some countries there is a well-known ‘revolving door’ for staff between lobbying firms, private enterprises, and regulatory agencies). Finally, independence can come at the price of alienation, i.e. a perceived lack of democratic legitimacy.

The Power of Ex Ante Action: Regulatory agencies can establish rules and standards that govern companies and organisations when they are developing products and services. This allows them to have a genuine impact on the ex ante problems in any given field. This makes them very different from the courts, who usually only have ex post powers.


What does this mean for AI regulation? Well, it means that a bespoke regulatory agency would be best placed to develop the detailed, industry-specific rules and standards that should govern the research and development of AI. This agency could appoint relevant experts who could further develop their expertise through their work. This is the only way to really target the ex ante problems highlighted previously.

But there are clearly limitations to what a bespoke regulatory agency can do. For one thing, the fact that regulatory structures become rigid once created is a problem when it comes to a rapidly advancing field like AI. For another, because AI potentially draws on so many diffuse fields, it may be difficult to recruit an appropriate team of experts. Relevant insights that catapult AI development into high gear may come from unexpected sources. Furthermore, people who have the relevant expertise may be hoovered up by the enterprises they are trying to regulate. Once again, we may see a revolving door between the regulatory agency and the AI industry.


3. The Competencies of Courts
Courts are judicial bodies that adjudicate on particular legal disputes. They usually have some residual authority over regulatory agencies. For instance, if you are penalised by a regulatory agency you will often have the right to appeal that decision to the courts. This is a branch of law known as administrative law. Although legal rules vary, most courts adopt a pretty deferential attitude toward regulatory agencies. They do so on the grounds that the agencies are the relevant subject-matter experts. That said, courts can still use traditional legal mechanisms (e.g. criminal law or tort law) to resolve disputes that may arise from the use of a technology or service.

Scherer focuses on the tort law system in his article. So the scenario lurking in the background of his analysis is a case in which someone is injured or harmed by an AI system and tries to sue the manufacturer for damages. He argues that four things must be kept in mind when assessing the regulatory competence of the tort law system in cases like this:

Fact-Finding Powers: Rules of evidence have been established that give courts extensive fact-finding powers in particular disputes. These rules reflect both a desire to get at the truth and to be fair to the parties involved. This means that courts can often acquire good information about how products are designed and safety standards implemented, but that information is tailored to a particular case and not to what happens in the industry more generally.

Reactive and Reactionary: Courts can only intervene and impose legal standards after a problem has arisen. This can have a deterrent effect on future activity within an industry. But the reactive nature of the court also means that it has a tendency to be reactionary in its rulings. In other words, courts can be victims of “hindsight bias” and assume that the risk posed by a technology is greater than it really is.

Incrementalist: Because courts only deal with individual cases, and because the system as a whole moves quite slowly, it can really only make incremental changes.

Misaligned Incentives: In common law systems, the litigation process is adversarial in nature: one side prosecutes a claim; the other defends. Lawyers only take cases to court that they think can be won. They call witnesses that support their side. In this, they are concerned solely with the interests of their clients, not with the interests of the public at large. That said, in some countries class actions are possible, which allow for many people to bring the same type of case against a defendant. This means some cases can represent a broader set of interests.

What does all this mean for AI regulation? Well, it suggests that the court system cannot deal with any of the ex ante problems alluded to earlier on. It can only deal with ex post problems. Furthermore, in dealing with those problems, it may move too slowly to keep up with the rapid advances in the technology, and may tend to overestimate the risks associated with the technology. If you think those risks are great (bordering on the so-called “existential” risk-category proposed by Nick Bostrom), this reactionary nature might be a good thing. But, even still, the slowness of the system will count against it. Scherer thinks this tips the balance decisively in favour of some specially constructed regulatory agency.




4. Conclusion: Is there hope for regulation?
Now that we have a clearer picture of the regulatory ecosystem, we can think more seriously about the potential for regulation in solving the problems of AI. Scherer has a proposal in his article, sketched out in some reasonable detail. It involves leveraging the different competencies of the three bodies. The legislature should enact an Artificial Intelligence Development Act. The Act should set out the values for the regulatory system:

[T]o ensure that AI is safe, secure, susceptible to human control, and aligned with human interests, both by deterring the creation of AI that lack those features and by encouraging the development of beneficial AI that include those features. 
(Scherer 2015)

The Act should, in turn, establish a regulatory agency with responsibility for the safe development of AI. This agency should not create detailed rules and standards for AI, and should not have the power to sanction or punish those who fail to comply with its standards. Instead, it should create a certification system, under which agency members can review and certify an AI system as “safe”. Companies developing AI systems can volunteer for certification.

You may wonder why any company would bother to do this. The answer is that the Act would also create a differential system of tort liability. Companies that undergo certification will have limited liability in the event that something goes wrong. Companies that fail to undergo certification will face strict liability standards in the event of something going wrong. Furthermore, this strict liability system will be joint and several in nature: any entity in the design process could face full liability. This creates an incentive for AI developers to undergo certification, whilst at the same time not overburdening them with compliance rules.

In a way, this is a clever proposal. It tries to balance the risks and rewards of AI. The belief is that we shouldn’t stifle creativity and development within the sector, and that we should encourage safe and beneficial forms of AI. My concern is that this system misses some of the unique properties of AI that make it such a regulatory challenge. In particular, the proposal seems to ignore the difficulty of (a) finding someone to regulate and (b) the control problem.

This is ironic given that Scherer was quite good at outlining those challenges in the first part of his article. There, he noted how AI developers need not be large, well-integrated organisations based in a single jurisdiction. But if they are not, then it may be difficult to ‘reach’ them with the proposed regulatory regime. I am guessing the joint and several liability proposal is designed to address this problem as it creates an incentive for anyone involved in the process to undergo certification, but it assumes that diffuse networks of developers have the end goal of producing a ‘consumer’ type device. This may not be true.

Furthermore, earlier in the article, Scherer noted how AI systems can do things that are beyond the control or anticipation of their original designers. This creates liability problems but these problems can be addressed through the use of strict liability standards. At the same time, however, it also creates problems in the certification process. Surely if AI systems can act in unplanned and unanticipated ways, it follows that members of a putative regulatory agency would not be well-equipped to certify an AI system as “safe”? That could be concerning. The proposed system would probably be better than nothing, and we shouldn’t make the perfect the enemy of the good, but anyone who is convinced of the potential for AI to pose an “existential threat” to humanity is unlikely to think that regulation of this sort can play a valuable role in mitigating that risk.

Scherer is aware of this. He closes by stating that his goal is not to provide the final word but rather to start a conversation on the best legal mechanisms for managing AI risk. That’s certainly a conversation that needs to continue.

Saturday, November 14, 2015

Blockchain Technology, Smart Contracts and Smart Property






Blockchain technology is at the heart of cryptocurrencies like Bitcoin. Most people have heard of Bitcoin and some are excited by the prospect it raises of a decentralised, stateless currency/payment system. But this is not the most interesting thing about Bitcoin. It is the blockchain technology itself that is the real breakthrough. It not only provides the foundation for a currency and payment system; it also provides the foundation for new ways of organising and managing basic social relationships. This includes legal relationships such as those involved in contractual exchange and proprietary ownership. The most prominent expression of this potential comes in the shape of Ethereum, an open source platform that allows developers to use blockchains for whatever purpose they see fit.

This might sound a little abstract and confusing. Blockchain technology is exciting, but many people are put off by the technical and abstruse concepts underpinning it. Proponents of the technology talk about strange things like cryptographic hash functions and public key encryption. They also refer to obscure mathematical puzzles like the Byzantine Generals problem in order to explain how it works. This is daunting. Many wonder whether they have to master this obscure conceptual vocabulary in order to understand what all the fuss is about.

If they want to engage with the technology at the deepest levels, they do. But to gain a high level understanding of how it works, and to share some of the excitement of its proponents, they don’t. My goal in this post is to provide that high-level understanding, and to explain how the technology could provide an underpinning for things like smart contracts and smart property. With luck, this will enable people to see the potential for this technology and will pique their interest in its political, legal and ethical implications.

I appreciate that there are many other articles out there that try to do the same thing. I am merely adding one more to the pile. I do so in the hope that it may prove useful to some, but also in the hope that it helps me to better understand the phenomenon. After all, most writing is an exercise in self-explanation. It is through communication that we truly begin to understand.

The remainder of this post is divided into three main sections. The first talks about the ‘Trust Problem’ that motivates the creation of the blockchain. The second tries to provide a detailed but non-mathematical description of how the blockchain works to solve the trust problem. The third explains how the technology could support a system of smart contracts and smart property.


1. The Trust Problem and the Motivation for the Blockchain
All human societies have a trust problem. In order to survive and make a living, we must coordinate and cooperate with others. In doing so, there is potential for these others to mislead, deceive and disappoint. To ensure successful ongoing cooperation, we need to be able to trust each other. Many societies have invented elaborate rituals, laws and governance systems to address this trust problem. At its most fundamental level, blockchain technology tries to do the same.

To illustrate, let’s use the example of a currency and payment system. This seems appropriate given the origins of blockchain technology in the development of such systems. I’m going to use the example of a real-world currency system: the currency used (historically) on the Island of Yap. Some people will be familiar with this example as it is beloved by economists. The only problem is that the example has become heavily mythologised and abstracted from the actual historical reality. I’m not an expert on that history, so what I am about to describe is also likely to be highly mythologised and simplified. I hope that’s okay: the goal here is to explain the rationale behind blockchain technology, not to write an accurate monetary history of the Island of Yap.

Anyway, with that caveat in mind, the Islanders of Yap had an unusual monetary system. They did not use coins as money. Instead, they used stone discs of varying sizes. These discs were mined from another island, several hundred miles away. This ensured that the discs that had been mined and brought back to the island retained their value over time. The picture below provides an example and illustrates just how large these discs could get. People would exchange these large discs in important transactions. But obviously the islanders could not just hand the discs to one another to finalise the transaction. The discs remained fixed in place. In order to know who owned what, the islanders needed to keep some kind of ledger, which recorded transactional data and allowed them to figure out which stone disc belonged to which islander.




One way to do this would have been to use a trusted third party ledger. In other words, to find some respected tribal elder or chief and make it a requirement that all transactions be logged with him/her. That way, whenever a dispute arose, the islanders could go to the elder and he/she could resolve the dispute. The elder could confirm that Islander A really does own the disc and is entitled to exchange it with Islander B, or vice versa. This is illustrated in the diagram below.



We make use of such trusted third party systems every day. Indeed, modern political, legal and monetary systems are almost entirely founded upon them. When you make a payment via credit or debit card, that transaction must first be logged with a bank or credit card company, who will verify that you have the necessary funds and that the payment came from you, before the payment is finally confirmed. The same goes for disputes over legal rights. Courts function as trusted third parties who resolve disputes (ultimately via the threat of violence) about contractual rights and property rights (to give just two examples).

But that is not the only way to solve the trust problem. Another way would be to use a distributed consensus ledger. In other words, instead of logging transactional data with a trusted third party, you could require all the islanders to keep an ongoing, updated, record of transactions. Then, when a dispute arises, you go with either the majority or unanimous view of this network of ledger-keepers. As far as I am aware (and this is where my caveat about historical accuracy needs to be borne in mind) this is what the Islanders of Yap seem to have done. Each islander kept a mental record of who owned what, and this distributed mental record could be used to resolve transactional disputes.
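The majority-rule idea behind a distributed consensus ledger can be sketched in a few lines of Python. This is a hypothetical illustration of the Yap example, not how any real system is implemented: each "islander" holds their own copy of the ledger, and a dispute over ownership is settled by tallying the votes.

```python
from collections import Counter

# Each islander keeps their own mental ledger: a mapping from disc to owner.
ledgers = [
    {"disc_1": "A", "disc_2": "B"},
    {"disc_1": "A", "disc_2": "B"},
    {"disc_1": "C", "disc_2": "B"},  # one islander misremembers
]

def resolve_owner(ledgers, disc):
    # A dispute is settled by taking the majority view across all ledgers.
    votes = Counter(ledger[disc] for ledger in ledgers if disc in ledger)
    owner, _ = votes.most_common(1)[0]
    return owner

print(resolve_owner(ledgers, "disc_1"))  # "A" -- the majority outvotes the outlier
```

The key design point is that no single ledger-keeper is authoritative: a faulty (or dishonest) record is simply outvoted.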




Blockchain technology follows this distributed consensus method. It tries to create a computer-based protocol for resolving the trust problem through a distributed and publicly verifiable ledger. This is known as the blockchain. We can define it in the following way (from Wright and De Filippi, 2015):

Blockchain = A distributed, shared, encrypted database which serves as an irreversible and incorruptible public repository of information.
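The “irreversible and incorruptible” part of this definition comes from the way blocks are chained together by cryptographic hashes. Here is a minimal, hypothetical sketch in Python (real blockchains add digital signatures, consensus rules, and much more): each block stores the hash of its predecessor, so tampering with any past transaction breaks every later link in the chain.

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents deterministically.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    # Each new block commits to the hash of the block before it.
    prev_hash = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev_hash, "transactions": transactions})
    return chain

def verify_chain(chain):
    # Altering any earlier block changes its hash, invalidating every later link.
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != block_hash(prev):
            return False
    return True

chain = []
add_block(chain, [{"from": "A", "to": "B", "amount": 100}])
add_block(chain, [{"from": "B", "to": "C", "amount": 40}])
print(verify_chain(chain))  # True
chain[0]["transactions"][0]["amount"] = 9999  # try to rewrite history
print(verify_chain(chain))  # False
```

This is why the ledger is described as irreversible: rewriting an old transaction is immediately detectable by anyone holding a copy of the chain.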


2. How the Blockchain is Built
But how exactly does the technology build the ledger? This is where things can get quite technical. In essence, the blockchain works by leveraging the networking capabilities of modern computers and by using a variety of cryptographic tools for verifying transactional data.

A network is established consisting of many different computers located in many different places. Each computer is a node in the network. You could have one node in South Africa, one in England, one in France, one in the USA, one in Yemen, one in Australia and so on. The network can, in theory, be distributed across the entire world. This network is then used for logging, recording and verifying transactional information. Every computer on the network keeps a record of all transactions taking place on the network. This record is known as the blockchain. It is comprehensive, permanent, public and distributed across all nodes in the network. The network can thus function as a decentralised authority for managing and maintaining records of transactions.

It is easy enough to see how this works in the case of two people exchanging money. Suppose Person A wants to transfer 100 bitcoin (or whatever) to Person B. Person A has a digital ‘wallet’ which contains a record of how much bitcoin they currently own. They sign into this and agree to transfer a certain sum to Person B. They do this by broadcasting to the network that they wish to transfer the money to Person B’s digital wallet. Details of this proposed transaction are then added to a ‘block’ of transactional data that is stored across the network. The ‘block’ is like a temporary record that is in the process of being added to the permanent record (the blockchain). The ‘block’ represents all the transactions that took place on the network during a particular interval of time. In the case of bitcoin, the block includes information about all the transactions taking place in a ten minute interval.

At this stage, the transaction between A and B has not been verified and does not form part of the permanent distributed ledger. What happens next is that once all the data has been collected for a given interval of time, the network works on verifying the details in those transactions (i.e. does A really have that amount of money to send to B? Did A really initiate the transaction? etc). Each computer on the network participates in a competition to verify the transactional data. The winner of this competition gets to add the ‘block’ to the ‘blockchain’ (i.e. they get to update the ledger). When they do so, they broadcast their ‘proof of work’ to the rest of the network. This shows the network how the winning computer verified the transactional data. The other computers on the network then check that proof of work and confirm that the record is correct. This is where the ‘distributed consensus’ comes in. It is only if the winning ‘solution’ is confirmed by the majority that it becomes a permanent part of the blockchain.
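The “competition” described above is, in Bitcoin’s case, a proof-of-work search: nodes race to find a nonce that, combined with the block data, hashes to a value meeting a difficulty target. A toy Python sketch (vastly simplified relative to the real protocol) shows the essential asymmetry: the proof is costly to find but cheap for every other node to verify.

```python
import hashlib

def proof_of_work(block_data, difficulty=4):
    # Search for a nonce whose hash has `difficulty` leading zeros.
    # The only way to find one is brute-force trial and error.
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

def verify_proof(block_data, nonce, difficulty=4):
    # Any node can check the winner's proof with a single hash.
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce, digest = proof_of_work("A pays B 100", difficulty=4)
print(verify_proof("A pays B 100", nonce))  # True
```

Raising the difficulty makes the search exponentially harder while verification stays a one-hash check; this is what lets the rest of the network cheaply confirm the winning solution.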

This verification process is technically tricky. I have given a simple descriptive account. For the full picture, you would need to engage with the cryptographic concepts underpinning it.

There are a couple of interesting things about this, over and above its ‘distributed consensus’ nature. The first has to do with the role of trust. Some people refer to the blockchain as a ‘trustless’ system. I think people say this because it is the computer protocol and its combination of cryptographic verification methods that underpin the ledger. Thus, when you are using the system, you do not have to trust or place faith in another human being. This makes it seem very different from, say, the situation faced by the islanders of Yap, who really do have to trust one another when using their distributed ledger. But clearly there is trust of a kind involved in the process. You have to trust the technology, and the theory underpinning it. Maybe that trust is justified, but it still seems to be there. Also, since most people lack the technical know-how to fully understand the system, there is a stronger sense of trust involved for most users: they have to trust the technical experts who establish and maintain the network.

The other interesting thing has to do with the incentive to maintain the network. You may wonder why people would be willing to give up their computing resources to maintain such an elaborate system. The technologically-inclined might do so initially out of curiosity, or maybe some sense of idealism, but to have a widespread network you probably need something more enticing. The solution used by most blockchain systems is to reward members of the network with some digital token that can be used to conduct exchanges on the network. In the case of bitcoin, the winner of the verification competition receives newly minted bitcoin for their troubles. This makes it attractive for people to join and maintain the network. Bitcoin adopts a particular economic philosophy in its reward system: the winner takes all the newly-minted bitcoin. This doesn’t have to be the case. You could adopt a more egalitarian or socialist system in which all members of the network share whatever token of value is being used.


3. Smart Contracts and Smart Property
To this point, I have stuck with the example of bitcoin and illustrated how it uses blockchain technology. But as I noted at the outset, this is merely one use-case. The really interesting thing about blockchain technology is how it can be used to manage and maintain other kinds of transactional data. In essence, the blockchain is a decentralised database that can maintain a record of any and all machine-to-machine communications. And since smart devices, involving machine-to-machine communication, are now everywhere, this makes the blockchain a potentially pervasive technology. Smart contracts and smart property are two illustrations of this potential. I’ll try to explain both.

A contract is an agreement between two or more people involving conditional commitments, i.e. “If you do X for me, I will do Y for you”. A legal contract makes those conditional commitments legally enforceable. If you fail to do X for me, I can take you to court and have you ordered to do X, or ordered to pay me compensation for failing to do X. A smart contract is effectively the same, only you use some technological infrastructure to ensure that conditions have been met and/or to automatically enforce commitments. This can be done using blockchain technology because the distributed ledger system can be used to confirm whether contractual conditions have been met.

Suppose I am selling drugs illegally via the (now-defunct) Silk Road. We agree that you will pay me X bitcoin if you receive the drugs by a particular date. That condition could be built into the initial transaction that is logged on the blockchain platform. In this case, the system will only release the bitcoin to me if the relevant condition is met. How will it know? Well, suppose the drugs are of a certain weight and have to be delivered to a certain locker that you use for these purposes. The locker is equipped with ‘smart’ weighing scales. Once a package of the right weight is delivered to the locker, the scales will broadcast the fact to the network, which then confirms that the relevant contractual condition has been met. This results in the money being released to me.

Notice how the contract here is enforced automatically. I do not have to wait for you to release the bitcoin to me and you do not have to worry about losing your bitcoin and never receiving the drugs. The relevant conditions are coded into the original smart contract and once they are met the contract is automatically executed. There is no need for recourse to the courts (though you could build in conditional recourse to courts if you liked). The increasing number of ‘smart’ devices makes smart contracts enticing. Why? Because these devices allow for more ways in which to record, implement, and confirm the performance of relevant contractual conditions. The advantage of the blockchain is that it provides a way to manage and coordinate these devices without relying on trusted third parties.
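The escrow logic just described can be sketched in a few lines of Python. This is a toy illustration only: all of the class and function names are my own invention, and a real smart contract would run on a blockchain platform (with digital signatures and distributed verification) rather than as ordinary objects in memory.

```python
# Toy sketch of the escrow logic described above. All names here are
# hypothetical illustrations, not any real platform's API.

class Party:
    def __init__(self, name, balance=0):
        self.name = name
        self.balance = balance

class SmartContract:
    """Locks the buyer's funds until a delivery condition is confirmed."""

    def __init__(self, buyer, seller, amount, condition):
        self.seller = seller
        self.amount = amount          # e.g. the agreed bitcoin payment
        self.condition = condition    # predicate over broadcast events
        self.executed = False
        buyer.balance -= amount       # funds locked in escrow at creation

    def on_event(self, event):
        # Called whenever a device broadcasts an event to the network.
        # If the contractual condition is met, release the funds.
        if not self.executed and self.condition(event):
            self.seller.balance += self.amount
            self.executed = True

# The delivery condition: a package of (roughly) the agreed weight
# arrives in the agreed locker.
def delivered(event):
    return (event["locker"] == "locker-42"
            and abs(event["weight_grams"] - 500) < 5)

buyer = Party("buyer", balance=10)
seller = Party("seller")
contract = SmartContract(buyer, seller, amount=10, condition=delivered)

# The smart weighing scale broadcasts a delivery event...
contract.on_event({"locker": "locker-42", "weight_grams": 501})
print(seller.balance)  # 10: the payment is released automatically
```

Notice that neither party has to trust the other here: the release of funds turns entirely on whether the broadcast event satisfies the coded condition.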

Smart property is really just a variation on this. Tangible, physical objects in the real world (e.g. cars, houses, cookers, fridges) can have smart technology embedded in them. Indeed, this is already true of many cars. Information about these physical objects can then be registered on the blockchain, along with details of who stands in what type of ownership relationship to them. Smart keys could then be used to exercise ownership rights. So, for example, you might only be able to access and use a car if you had the right smart key stored on your phone. The same could be true of a smart house. These keys can then be exchanged, and the exchanges verified, using the blockchain. The blockchain thus becomes a system for recording and managing property rights.

Hopefully, these two examples give some sense of the excitement surrounding blockchain technology.


4. Conclusion
To sum up, the blockchain is a distributed, publicly verifiable and cryptographically secured ledger used for recording and updating transactional data. It helps to solve the trust problem associated with most forms of social cooperation and coordination by obviating the need for trusted third parties. The technology is exciting because it can be used to manage and maintain networks of smart devices. As such devices become more and more widespread, there is the potential for blockchain technology to become pervasive. I’ll try to explore some of the more philosophically and legally interesting questions this throws up in future posts.

Wednesday, November 11, 2015

The Campaign Against Sex Robots: A Critical Analysis

Logo from the Campaign's Website


The Campaign Against Sex Robots launched to much media fanfare back in September. The brainchild of Dr. Kathleen Richardson from De Montfort University in Leicester, UK, and Dr. Erik Billing from the University of Skövde in Sweden, the campaign aims to highlight the ways in which the development of sex robots could be ‘potentially harmful and will contribute to inequalities in society’. What’s more, despite being a relative newcomer, the campaign may have already achieved its first significant ‘scalp’. The 2nd International Conference on Love and Sex with Robots, organised by sex robot pioneer David Levy, was due to be held in Malaysia this month (November 2015) but was cancelled by Malaysian authorities shortly after the campaign was launched.

Now, to be sure, it’s difficult to claim a direct causal relationship between the campaign and the cancellation of the conference, but there is no doubting the media success of the campaign: it has been featured in major newspapers, weblogs and TV shows around the world. Most recently, Dr Richardson participated in a panel at the Web Summit conference in Dublin, and this was discussed in the national media here in Ireland. Furthermore, the actions of the Malaysian authorities suggest that there is the potential for the campaign to gain some traction.

And yet, I find the Campaign Against Sex Robots somewhat bizarre. I’m puzzled by the media attention being given to it, especially since the ethics and psychology of human-robot relationships (including sexual relationships) have been topics of serious inquiry for many years. And I’m also puzzled by the position of the campaign and the arguments its proponents proffer. I say this as someone with a bit of form in this area. I have written previously about the potential impact of sex robots on the traditional (human) sex work industry; I have also written about the case for legal bans on certain types of sex robot; and, with my friend and colleague Neil McArthur, I am currently co-editing a collection of essays on the legal, ethical and social implications of sex robots for MIT Press. So I am not unsympathetic to the kinds of issues being raised. But I cannot see what the campaign is driving at.

In this post, I want to provide some support for my puzzlement by analysing the goals of the campaign and the ‘position paper’ it has published in support of these goals. I want to make two main arguments: (i) the goals of the campaign are insufficiently clear and much of its media success may be trading on this lack of clarity; and (ii) the reasons proffered in support of the campaign are either unpersuasive or insufficiently strong to merit a ‘campaign’ against sex robots. I appreciate that others have done some of this critical work before. My goal is to do so in a more thorough way.

(Note: this post is long -- far longer than I originally envisaged. If you want to just get the gist of my criticisms, I suggest reading section one and the conclusion, and then having a look at the argument diagrams.)


1. What are the goals of the campaign against sex robots?
Let me start with a prediction: sex robots will become a reality. I say this with some confidence. I am not usually prone to making predictions about the future development of technology. I think people who make such predictions are routinely proved wrong, and hence forced into some awkward backtracking and self-amendment. Nevertheless, I feel pretty sure about this one. My confidence stems from two main sources: (i) history suggests that sex and technology have always gone together, hence if there is to be a revolution in robotics it is likely to include the development of sex robots; and (ii) sex robots already exist (in primitive and unsophisticated forms) and there are several companies actively trying to develop more sophisticated versions (perhaps most notably Real Doll). In making this prediction, I won't make specific claims about the likely form or degree of intelligence that will be associated with these sex robots. But I’m still sure they will exist.

Granting this, it seems to me that there are three stances one can take towards the existence of such robots:

Liberation: i.e. adopt a libertarian attitude towards the creation and deployment of such robots. Allow manufacturers to make them however they see fit, and sell or share them with whoever wants them.

Regulation: i.e. adopt a middle-of-the-road attitude towards the creation and deployment of such robots. Perhaps regulate and restrict the manufacture and/or sale of some types; insist upon certain standards for consumer/social protection for others; but do not implement an outright ban.

Criminalisation: i.e. adopt a restrictive attitude towards the creation and deployment of such robots. Ban their use and manufacture, and possibly seek criminal sanctions for those who breach the terms of those bans (such sanctions need not include incarceration or other forms of harsh treatment).

These three stances define a spectrum. At one end, you have extreme forms of liberation, which would enthusiastically welcome any and all sex robots; and at the other end you would have extreme forms of criminalisation, which would ban any and all sex robots. The great grey middle of ‘regulation’ lies in between.





For what it is worth, I favour a middle-of-the-road attitude. I think there could be some benefits to sex robots, and some problems. On balance, I would lean in favour of liberation for most types of sex robots, but might favour strict regulation or, indeed, restrictions, for other types. For instance, I previously wrote an article suggesting that sex robots used for rape fantasies and shaped like children could be plausibly criminalised. I did not strongly endorse that argument (it rested on a certain moralistic view of the criminal law that I dislike); I did not favour harsh punishment for potential offenders; and I would never claim that this policy would be successful in actually preventing the development or use of such technologies. But that’s not the point: we often criminalise things we never expect to prevent. I was also clear that the argument I made was weak and vulnerable to several potential defeaters. My goal in presenting it was not to defend a particular stance, but rather to map out the terrain for future ethical debate.

Anyway, leaving my own views to the side, the question arises: where on this spectrum do the proponents of the Campaign Against Sex Robots fall?

The answer is unclear. Obviously, they are not in favour of liberation, but are they in favour of regulation or criminalisation? The naming of the campaign suggests something closer to the latter: they are against sex robots. And some of their pronouncements seem to reinforce this more extreme position. For instance, on their ‘About’ page, they say that “an organized approach against the development of sex robots is necessary”. On the same page, they also list a number of relatively unqualified objections to the development of sex robots. These include:

We believe the development of sex robots further sexually objectifies women. 
We propose that the development of sex robots will further reduce human empathy that can only be developed by an experience of mutual relationship. 
We challenge the view that the development of adult and child sex robots will have a positive benefit to society, but instead further reinforce power relations of inequality and violence.

On top of this, in her ‘position paper’, Richardson notes how she is modeling her campaign on the ‘Stop Killer Robots’ campaign. That campaign works to completely ban autonomous robots with lethal capabilities. If Richardson means for that model to be taken seriously, it suggests a similarly restrictive attitude motivates the Campaign Against Sex Robots.

But despite all this, there is some noticeable equivocation and hedging in what the campaign and its spokespeople have to say. Elsewhere on their “About” page they state that:

We propose to campaign to support the development of ethical technologies that reflect human principles of dignity, mutuality and freedom.

And that they wish:

To encourage computer scientists and roboticists to examine their own conscience when asked to provide code, hardware or ideas to develop this field.

Throughout the position paper, Richardson also makes clear that it is the fact that current sex robot proposals are modeled on a ‘prostitute-john’ relationship that bothers her. This suggests that if sex robots could embody an alternative and more egalitarian relationship she might not be so opposed.

On top of all this, Richardson appears to have disowned the more restrictive attitude in her recent statements. In an article about her appearance at the Web Summit, she is reported to have said we should “think about what it means” to create sex robots, not that we shouldn’t make them at all. That said, in the very same article she is reported to have called for a “ban” on sex robots. Maybe the journalist is being inaccurate in the summary (I wasn’t at the event) or maybe this reflects some genuine ambiguity on Richardson’s part. Either way, it seems problematic to me.

Why? Because I think the Campaign Against Sex Robots is currently trading on an equivocation about its core policy aims. Its branding as a general campaign “against” sex robots, along with the more unqualified objections to their development, seems to suggest that the core aim is to completely ban sex robots of all kinds. This provides juicy fodder for the media, but would require a very strong set of arguments in defence. As I hope to make clear below, I don’t think that the proponents of the campaign have met that high standard. On the other hand, the more reserved and implicitly qualified claims seem to suggest a more modest aim: to encourage creators of sex robots to think more clearly about the ethical risks associated with their development, in particular the impact it could have on gender inequality and objectification. This strikes me as a reasonably unobjectionable aim, one that would not require such strong arguments in defence, but would not be anywhere near as interesting. There are many people who already share this modest aim, and I think most people would not need much to be persuaded of its wisdom. But then the campaign would need to be more honest in its branding. It would need to be renamed something like “The Campaign for Ethical Sex Robots”.

In any event, until the Campaign provides more clarity about its core policy aims, it will be difficult to know what to make of it.


2. Why Campaign Against Sex Robots in the First Place?
Granting this difficulty, I nevertheless propose to evaluate the main arguments in favour of the campaign, as presented by its proponents. For this, I turn to the “Position Paper” on the Campaign’s website, which was written by Richardson. With the exception of its conclusion (which as I just noted is somewhat obscure) this paper does present a reasonably clear argument “against” sex robots. The argument is built around an analogy with human sex worker-client relationships (or, as Richardson prefers, ‘prostitute-john’ relationships). It is not set out explicitly anywhere in the text of the article. Here is my attempt to make its structure more explicit:


  • (1) Prostitution is bad (e.g. because it reinforces gender inequality, contributes to the objectification of women, denies the subjectivity of the sex worker etc.)
  • (2) Sex robots will be like prostitution in all these relevant bad-making ways (perhaps worse).
  • (3) Therefore, sex robots will be bad.
  • (4) Therefore, we ought to campaign against them.



This is an analogical argument, so it is not formally valid. I have tried to be reasonably generous in this reconstruction. My generosity comes in the vagueness of the premises and conclusions. The idea is that this vagueness allows the argument to work for either the strong or weak versions of the Campaign that I outlined above. So the first premise merely claims that there are several bad or negative features of prostitution; the second premise claims that these features will be shared by the development of sex robots; the first conclusion confirms the “badness” of sex robots; and the second conclusion is tacked on (minus a relevant supporting principle) in order to link the argument to the goals of the Campaign itself. It is left unclear what these goals actually are.

Vagueness of this sort is usually a vice, but in this context I’m hoping it will allow me to be somewhat flexible in my analysis. So in what follows I will evaluate each premise of the argument and see what kind of support they lend the conclusion(s). It will be impossible to divorce this analysis from the practical policy questions (i.e. should we campaign for regulation or criminalisation?). So I will try to evaluate the argument in relation to both strong and weak versions of the policy aims. To remove any sense of mystery from this analysis, I will state upfront that my conclusion will be that the argument is too weak to support a strong version of the campaign. It may suffice to support a weaker version, but this would have to be very modest in its aims, and even then it wouldn’t be particularly persuasive because it ignores reasons to favour the creation of sex robots and reasons to doubt the wisdom of interventionist policies.


3. Is Prostitution Bad?
Let’s start with premise (1) and the claim that prostitution is bad. I have written several pieces about the ethics of sex work. Those pieces evaluate most of the leading objections to the legalisation/normalisation of sex work. Richardson’s article recapitulates many of these objections. It initially expresses some disapproval for the “sex work” discourse, viewing the use of terms like ‘sex work’ and ‘sex worker’ as part of an attempt to legitimate an oppressive form of labour. (I should qualify that because Richardson doesn’t write with the normative clarity of an ethicist; she is an anthropologist and the detached stance of the anthropologist is apparent at times in her paper, despite the fact that the paper and the Campaign clearly have normative aims). She then starts to identify various bad-making properties of prostitution. These include things like the prevalence of violence and human trafficking in the industry, along with reference to statistics about the relative youth of its workers (75% are between 13 and 25, according to one source that she cites).

Her main objection to prostitution, however, focuses on the asymmetrical relationship between the prostitute and the client, the highly gendered nature of the employment (predominantly women and some men providing the service for men), and the denial of subjectivity (and corresponding objectification) the commercialisation entails. To support this view, Richardson quotes from a study of consumers of prostitution, who said things like:

‘Prostitution is like masturbating without having to use your hand’, 
‘It’s like renting a girlfriend or wife. You get to choose like a catalogue’, 
‘I feel sorry for these girls but this is what I want’ 
(Farley et al 2009)

Each of these views seems to reinforce the notion that the sex worker is being treated as little more than an object and that their subjectivity is being denied. The client and his needs are all that matters. What’s happening here, according to Richardson, is that the client is elevating his status and failing to empathise with the prostitute: substituting his fantasies for her real feelings. This is a big problem. The failure or inability to empathise is often associated with higher rates of crime and violence. She cites Baron-Cohen’s work on empathy and evil in support of this view.

To sum up, we seem to have two main criticisms of prostitution in Richardson’s article:


  • (5) Prostitution is bad because the (predominantly) female workers suffer from violence at the hands of their clients, can be victims of trafficking and are, often, quite young.

  • (6) Prostitution is bad because it thrives on an asymmetrical relationship between the client and prostitute, denies the subjectivity of the prostitute, compromises the ability of the client to empathise, and reinforces gender inequalities.


Are these criticisms any good? I have my doubts. Two points jump out at me. First, I think Richardson is being extremely selective and biased in her treatment of the evidence in relation to prostitutes and their clients. Second, even if she is right about these bad-making properties, there is no direct line from these properties to the appropriate policy response. In particular, there is no direct line from these properties to the criminalisation or restriction of prostitution. Let me briefly expand on these points.

On the first point, Richardson does cite evidence supporting the view that violence and trafficking are common in the sex work industry, and that clients deny the subjectivity of sex workers. But she ignores countervailing evidence. I don’t want to get too embroiled in weighing the empirical evidence. This is a complex debate, and there are certainly many negative features of the sex work industry. All I would say is that things are not as unremittingly awful as Richardson seems to suggest. Sanders, O’Neill and Pitcher, in their book Prostitution: Sex Work, Policy and Politics offer a more nuanced summary of the empirical literature. For instance, in relation to violence within the industry, they note that while the incidence is “high” and probably under-reported, it tends to be more prevalent for street-based sex workers, and that violence is usually associated with a minority of clients:

While clients are the most commonly reported perpetrators of violence against female sex workers, Kinnell (2006a) suggests that a minority of clients commit violence against sex workers and that often men who attack or murder sex workers frequently have a past history of violence against sex workers and other women….It must be remembered that the majority of commercial transactions take place without violence or incidence. 
(Sanders et al 2009, 44)

On the lack of empathy and the denial of subjectivity, they offer a similarly nuanced view. First, they note how a highly conservative view of sexuality is often embedded in critiques of sex work:

There is generally a taboo about the types of sex involved in a commercial contact. The idea of time-limited, unemotional sex between strangers is what is often conjured up when commercial sex is imagined… The ‘seedy’ idea of commercial sex preserves the notion that only emotional, intimate sex can be found in long-term conventional relationships, and that other forms of sex (casual, group, masturbatory, BDSM, etc.) are unsatisfying, abnormal and also immoral. 
(Sanders et al 2009, 83)

They then go on to paint a complex picture of the attitude of clients toward sex workers:

[T]he argument is that general understandings of sex work and prostitution are based on false dichotomies that distinguish commercial sexual relationships as dissonant from non-commercial ones. Sanders (2008b) shows that there is mutual respect and understanding between regular clients and sex workers, dispelling the myth that all interactions between sex workers and clients are emotionless. There is ample counter-evidence (such as Bernstein 2001, 2007) that indicates that clients are ‘average’ men without any particular or peculiar characteristics and increasingly seeking ‘authenticity’, intimacy and mutuality rather than trying to fulfil any mythology of violent, non-consensual sex. 
(Sanders et al 2009, 84).

I cite this not to paint a rosy and pollyannish view of sex work. Far from it. I merely cite it to highlight the need for greater nuance than Richardson seems willing to provide. It is simply not true that all forms of prostitution involve the troubling features she identifies. Furthermore, in relation to an issue like trafficking, while I would agree that certain forms of trafficking are unremittingly awful, there is still a need for nuance. Trafficking-related statistics sometimes conflate general illegal labour migration (i.e. workers moving for better opportunities) with the stereotypical view of trafficking as a modern form of slavery.

This brings me to the second criticism. Even if Richardson is right about the bad-making properties of prostitution, there is no reason to think that those properties are sufficient to warrant criminalisation or any other highly restrictive policy. For instance, denials of subjectivity and asymmetries of power are rife throughout the capitalistic workplace. Many of the consumer products we buy are made possible by, arguably, exploitative international trade networks. And many service workers in our economies have their subjectivity denied by their clients. I often fail to care about the feelings of the barista making my morning coffee. But in these cases we typically do not favour criminalisation or restriction. At most, we favour a change in regulation and behaviour. Likewise, many of the negative features of prostitution could be caused (or worsened) by its criminalisation. This is arguably true of violence and trafficking. It is because sex workers are criminalised that they fail to obtain the protections afforded to most workers and fail to report what happens to them. This is why many sex worker activists — who are in no way unrealistic about the negative features of the job — favour legalisation and regulation. So Richardson will need to do more than single out some negative features of prostitution to support her analogical argument. I have tried to summarise these lines of criticism in the diagram below.




In the end, however, it is not worth dwelling too much on the bad-making properties of prostitution. The analogy is important to Richardson’s argument, but it is not the badness of prostitution that matters. What matters is the claim that these properties will be shared by the development of sex robots. This is where premise (2) comes in.


4. Would the development of sex robots be bad in the same way?
Premise (2) claims that the development of sex robots will replicate and reinforce the bad-making properties of prostitution. There are two things we need to figure out in relation to this claim. The first is how it should be interpreted; the second is how it is supported.

In relation to the interpretive issue, we must ask: Is the claim that, just as the treatment and attitude toward prostitutes is bad, so too will be the treatment and attitude toward sex robots? Or is it that the development of sex robots will increase the demand for human prostitution and/or encourage users of sex robots to treat real humans (particularly women) as objects? Richardson’s paper supports the latter interpretation. At the outset, she states that her concern about sex robots is that they:

[legitimate] a dangerous mode of existence where humans can move about in relations with other humans but not recognise them as human subjects in their own right. 
(Richardson 2015)

The key phrase here seems to be “in relations with other humans”, suggesting that the worry is about how we end up treating one another, not how we treat the robots themselves. This is supported in the conclusion where she states:

In this paper I have tried to show the explicit connections between prostitution and the development and imagination of human-sex robot relations. I propose that extending relations of prostitution into machines is neither ethical, nor is it safe. If anything the development of sex robots will further reinforce relations of power that do not recognise both parties as human subjects. 
(Richardson 2015)

Again, the emphasis in this quote seems to be on how the development of sex robots will affect inter-human relationships. Let’s reflect this in a modified version of premise (2):


  • (2*) Sex robots will add to and reinforce the bad-making properties of prostitution (i.e. they will encourage us to treat one another with a lack of empathy and exacerbate existing gender/power inequalities).


How exactly is this supported? As best I can tell, Richardson supports it by referring to the work of David Levy and then responding to a number of counter-arguments. In his book Love and Sex with Robots, David Levy drew explicit parallels between the development of sex robots and prostitution. The idea being that the relationship between a user and his/her sex robot would be akin to the relationship between a client and a prostitute. Levy was quite explicit about this and spent a good part of his book looking at the motivations of those who purchase sex and how those motivations might transfer onto sex robots. He was reasonably nuanced in his discussion of this literature, though you wouldn’t be able to tell this from Richardson’s article (for those who are interested, I’ve analysed Levy’s work previously). In any event, the inference Richardson draws from this is that the development of sex robots is proceeding along the lines that Levy imagines and hence we should be concerned about its potential to reinforce the bad-making properties of prostitution.


  • (9) Levy models the development of sex robots on the relationship between clients and prostitutes; therefore, it is likely that the development of such robots will add to and reinforce the bad-making properties of prostitution.


I have to say I find this to be a weak argument, but I’ll get back to that later because Richardson isn’t quite finished with the defence of her view. She recognises that there are at least two major criticisms of her claim. The first holds that if robots are not persons (and for now we will assume that they are not) then there is nothing wrong with treating them as objects/things which we can use for our own pleasure. In other words, the technology is a morally neutral domain in which we can act out our fantasies. The second criticism points to the potentially cathartic effect of these technologies. If people act out negative or violent sexual fantasies on a robot, they might be less inclined to do so to a real human being. Sex robots may consequently help to prevent the bad things that Richardson worries about.


  • (10) Sex robots are not persons; they are things: it is appropriate for us to treat them as things (i.e. the technology is a morally neutral domain for acting out our sexual fantasies)
  • (11) Use of sex robots could be cathartic, e.g. using the technology to act out negative or violent sexual fantasies might stop people from doing the same thing to a real human being.


Richardson has responses to both of these criticisms. In the first instance, she believes that technology is not a value-neutral domain. Our culture and our norms are reflected in our technology. So we should be worried about how cultural meaning gets incorporated into our technology. Furthermore, she has serious doubts about the catharsis argument. She points to the historical relationship between pornography and prostitution. Pornography has become widely available, but this has not led to a corresponding decline in prostitution nor, in the case of child pornography, in the abuse of real children. On the contrary, prostitution appears to have increased alongside the availability of pornography. The same appears to be true of the relationship between sex toys/dolls and prostitution:

The arguments that sex robots will provide artificial sexual substitutes and reduce the purchase of sex by buyers is not borne out by evidence. There are numerous sexual artificial substitutes already available, RealDolls, vibrators, blow-up dolls etc., If an artificial substitute reduced the need to buy sex, there would be a reduction in prostitution but no such correlation is found. 
(Richardson 2015)

In other words:


  • (12) Technology is not a morally neutral domain: societal values and ethics are inflected in our technologies.
  • (13) There is no evidence to suggest that the cathartic argument is correct: prostitution has not decreased in response to the increased availability of pornography and/or sex toys.



Is this a robust defence of premise (2)? Does it support the overall argument Richardson wishes to make? Once again, I have my doubts. Some of the evidence she adduces is weak, and even if it is correct, it in no way supports a strongly restrictive approach to the development of sex robots. At best, it supports a regulative approach. Furthermore, in adopting that more regulative approach, we need to be sensitive to both the merits and demerits of this technology and the costs of any proposed regulative strategy. This is something that Richardson neglects because she focuses almost entirely on the negative. In this vein, let me offer five responses to her argument, some of which target her support of premise (2*), others of which target the relationship between any putative bad-making properties of sex robots and the need for a ‘campaign’ against them.

First, I think Richardson’s primary support for premise (2*), viz. that the prostitute-john model is reflected in David Levy’s vision of sex robot development, is weak. True, Levy is a pioneer in this field and may have a degree of influence (I cannot say for sure). But that doesn’t mean that all sex robot developers have to adopt his model. If we are worried about the relationship between the sex robot user and the robot, we can try to introduce standards and regulations that reflect a more positive set of sexual norms. For instance, the makers of Roxxxy (billed as the world’s first sex robot) claim to include a personality setting called ‘Frigid Farah’ with their robot. Frigid Farah will demonstrate some reluctance to the user’s sexual advances. You could argue that this reflects a troubling view of sexual consent: that resistance is not taken seriously (i.e. that ‘no’ doesn’t really mean ‘no’). But you could try to regulate against this and insist that every sex robot be required to give positive, affirmative signals of consent. This might reflect and reinforce a more desirable attitude toward sexual consent. And this is just an illustration of the broader point: that sex robots need not reflect negative social attitudes toward sex. We could demand and enforce a more positive set of attitudes. Maybe this is all Richardson really wants her campaign to achieve, i.e. to change the models adopted in the development of sex robots. But in that case, she is not really campaigning against them, she is campaigning for a better version of them.

Second, I think it is difficult to make good claims about the likely link between the use of a future technology like sex robots and actions toward real human beings. In this light, I find her point about the correlation between pornography and an increase in prostitution relatively unpersuasive. Unlike her, I don’t believe sex work is unremittingly bad and so I am not immediately worried about this correlation. What would be more persuasive to me is evidence of some correlation (and ultimately some causal link) between the increase in pornography/prostitution and the mistreatment of sex workers. I don’t know what the evidence is on that, but I think there is some reason to doubt it. Again, Sanders et al discuss ways in which the mainstreaming and legalisation of prostitution is sometimes associated with a decrease in mistreatment, particularly violence. This might give some reason for optimism.

A better case study for Richardson’s argument would probably be the debate about the link between pornography (adult hardcore or child) and real-world sexual violence/assault (toward adults or children). If it can be shown that exposure to pornography increases real-world sexual assault, then maybe we do have reason to worry about sex robots. But what does that evidence currently say? I reviewed the empirical literature in my article on robotic rape and robotic child sexual abuse. I concluded that the evidence at the moment is relatively ambiguous. Some studies show an increase; some show a decrease; and some are neutral. I speculated that we may be landed in a similarly ambiguous position when it comes to evidence concerning a link between sex robot usage and real-world sexual assault. That said, I also speculated that sex robots may be quite different to pornography: there may be a more robust real-world effect from using a sex robot. It is simply too early and too difficult to tell. Either way, I don’t see anything in this to support Richardson’s moral panic.

Third, if the evidence in relation to sex robot usage does end up being ambiguous, then I suspect the best way to argue against the development of sex robots is to focus on the symbolic meaning that attaches to their use. Richardson doesn’t seem to make this argument (though there are hints). I explored it in my paper on robotic rape and robotic child sexual abuse, and others have explored it in relation to video games and fiction. The idea would be that a person who derives pleasure from having sex with a robot displays a disturbing moral insensitivity to the symbolic meaning of their act, and this may reflect negatively on their moral character. I suggested that this might be true for people who derive sexual pleasure from robots that are shaped like children or that cater to rape fantasies. The problem here is not to do with the possible downstream, real-world consequences of this insensitivity. The problem has to do with the act itself. In other words, the argument is about the intrinsic properties of the act; not its extrinsic, consequential properties. This is a better argument because it doesn’t force one to speculate about the likely effects of a technology on future behaviour. But this argument is quite limited. I think it would, at best, apply to a limited subset of sex robot usages, and probably would not warrant a ban or, indeed, campaign against any and all sex robots.

Fourth, when thinking about the appropriate policy toward sex robots, it is important that we weigh the good against the bad. Richardson seems to ignore this point. Apart from her references to the catharsis argument, she nowhere mentions the possible good that could be done by sex robots. My colleague Neil McArthur has looked into some of these possibilities. There are several arguments that could be made. There is the simple hedonistic argument: sex robots provide people with a way of achieving pleasurable states of consciousness. There is the distributive argument: for whatever reason, there are people in the world today who lack access to certain types of sexual experience; sex robots could make those experiences (or, at least, close approximations of them) available to such people. This type of argument has been made in relation to the value of sex workers for persons with disabilities. Indeed, there are charities set up that try to link persons with disabilities to sex workers for this very reason. There is also the argument that sex robots could ameliorate imbalances in sex drive between the partners in existing relationships; or could add some diversity to the sex lives of such couples, without involving third parties (and the potential interpersonal strife to which they could give rise). It could also be the case that sex robots allow for particular forms of sexual self-expression to flourish, and so, in the interests of basic sexual freedom, we should permit this. Finally, unlike Richardson, we shouldn’t completely discount the possibility of sex robots reducing other forms of sexual harm. This is by no means an exhaustive list of positive attributes. It simply highlights the fact that there is some potential good to the technology and this must be weighed against any putative negative features when determining the appropriate policy.

Fifth, and finally, when thinking about the appropriate policy you also need to think about the potential costs of that policy. We might agree that there are bad-making properties to sex robots, but it could be that any proposed regulatory intervention would do more harm than good. I can see plausible ways in which this could be true for regulatory interventions into sex robots. Regulation of pornography, for instance, has historically involved greater restrictions toward pornography from sexual minorities (e.g. gay and lesbian porn). Regulatory intervention into sex robots may end up doing the same. I think it is particularly important to bear this in mind in light of Sanders et al’s comments about stereotypical views of unemotional commercialised sex feeding into prohibitive policies. It may also be the case that policing the development and use of sex robots requires significant resources and significant intrusions into our private lives. I’m not sure that we should want to bear those costs. Less intrusive regulatory policies — e.g. ones that merely encourage manufacturers to avoid problematic stereotypes or norms in the construction of sex robots — might be more tolerable. Again, maybe that’s all Richardson wants. But she needs to make that clear and to avoid simply emphasising the negative.




5. Conclusion
This post has been long. To sum up, I find the Campaign Against Sex Robots puzzling and problematic. I do so for three main reasons:

A. I think the current fanfare associated with the Campaign stems from its own equivocation regarding its core policy aims. Some of the statements by its members, as well as the name of the campaign itself, suggest a generalised campaign against all forms of sex robots. This is interesting from a media perspective, but difficult to defend. Some other statements suggest a desire for more ethical awareness in the creation of sex robots. This seems unobjectionable, but a lot less interesting and in need of far more nuance. It would also necessitate some re-branding of the Campaign (e.g. to ‘The Campaign for Ethical Sex Robots’).

B. The first premise of the argument in favour of the campaign focuses on the bad-making properties of prostitution. But this premise is flawed because it fails to factor in countervailing evidence about the experiences of sex workers and the attitudes of their clients, and because, even if it were true, it would not support a generalised campaign against sex work. Indeed, sex worker activists often argue the reverse: that the bad-making properties of prostitution are partly a result of its criminalisation and restriction, and not intrinsic to the practice itself.

C. The second premise of the argument focuses on how the bad-making properties of prostitution might carry over to the development of sex robots. But this premise is flawed for several reasons: (i) it is supported by reference to the work of one sex robot theorist and there is no reason why his view must dominate the development process; (ii) it relies on dubious claims about the likely causal link between the use of sex robots and the treatment of human beings; (iii) it fails to make the strongest argument in support of a restrictive attitude toward sex robots (the symbolic meaning argument), but even if it did, that argument would be limited and would not lend support to a general campaign; (iv) it fails to consider the possible good-making properties of sex robots; and (v) it fails to consider the possible costs of regulatory intervention.

None of this is to suggest that we shouldn’t think carefully about the ethics of sex robots. We should. But the Campaign Against Sex Robots does not seem to be contributing much to the current discussion.

Thursday, November 5, 2015

Is there Trouble with Algorithmic Decision-making? Fairness and Efficiency-based Objections



Tal Zarsky’s work has featured on this blog before. He is an expert in the legal aspects of big data and algorithmic decision-making. He recently published a paper entitled “The Trouble with Algorithmic Decision-Making” in which he tries to identify, categorise and respond to some of the leading objections to the use of algorithmic decision-making processes. This is a topic that interests me too, so I was eager to see what he had to say.

This post is my attempt to summarise and comment on some of the key themes from Zarsky’s paper. Its primary aim is to construct a diagram which will categorise the main objections found within Zarsky’s paper. Its secondary aim is to consider Zarsky’s responses to each of these objections. This will not be an exhaustive treatment of the core issues; it will be a high-level summary only. In this respect, it might be useful to people who are new to this debate.


1. What is interesting about algorithmic decision-making?
In one sense, algorithms are a mundane phenomenon: they are simply sets of instructions for taking an input and producing an output. There is probably some trivial sense in which all decision-making is algorithmic. After all, whenever you make a decision — say a decision about what food to buy — you are taking some set of inputs — e.g. information about your level of hunger, financial resources, food preferences and so on — and using them to produce an output — i.e. a decision about what you will actually buy. In most cases, the ruleset that you use to produce the output is implicit, but you could probably reconstruct it if you put enough thought into it. (Note: some people in the philosophy of mind might dispute the claim that all decision-making is algorithmic, but I won’t engage with that point of view in this post).
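To make the point concrete, here is a toy reconstruction of such an implicit ruleset as an explicit algorithm. Everything here — the inputs, the thresholds, the options — is invented purely for illustration; the point is only that an everyday decision can be written down as a rule taking inputs to an output:

```python
def decide_food_purchase(hunger, budget, preferences):
    """Toy reconstruction of an everyday 'algorithmic' decision.

    hunger: 0-10 scale; budget: money available; preferences:
    list of (food, price, liking) tuples, liking on a 0-10 scale.
    All inputs and thresholds are invented for illustration.
    """
    if hunger < 3:
        return None  # not hungry enough to buy anything
    # Keep only affordable options, then pick the best-liked one
    affordable = [(food, price, liking)
                  for food, price, liking in preferences
                  if price <= budget]
    if not affordable:
        return None
    best = max(affordable, key=lambda item: item[2])
    return best[0]

choice = decide_food_purchase(
    hunger=7, budget=5.0,
    preferences=[("sandwich", 4.0, 8), ("sushi", 9.0, 9), ("apple", 1.0, 5)])
print(choice)  # → sandwich (sushi is preferred but unaffordable)
```

The sushi is the most-liked option, but the budget constraint rules it out — exactly the kind of implicit trade-off we make without ever writing the rule down.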

Given this mundanity and triviality, one may wonder why anyone at all is interested in algorithmic decision-making. The answer, of course, lies in the technology used in the more explicit forms of algorithmic decision-making that now govern our lives. With the rise of surveillance and big data, there are increasing opportunities for computer-coded algorithms to take advantage of large datasets to produce (potentially) socially useful outputs. Recognition of this fact has led companies and governments to incorporate algorithmic decision-making into their pre-existing decision-making processes. There are so many examples of this nowadays that it is hard to pick just one.

The one Zarsky settles upon in his article is the use of credit-scoring algorithms by banks and other financial services providers. These algorithms use financial (and other) data to construct credit-scores. These scores are supposed to tell the banks the likely credit-risk of any particular customer. The most popular of these systems in the US is the FICO rating system, which relies on a proprietary (i.e. legally protected) algorithm and can be decisive in determining whether or not a person can access credit. Similar scoring systems are used in other countries, many of them also relying on the FICO system (at least in part).
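By way of illustration, a scoring rule of this general type might look something like the sketch below. To be clear, the actual FICO algorithm is proprietary and its weights are secret; the inputs, weights and caps here are invented, and serve only to show how heterogeneous financial data can be collapsed into a single number:

```python
def toy_credit_score(payment_history, utilization, history_years, recent_inquiries):
    """A deliberately simplified, invented scoring rule -- NOT the
    proprietary FICO algorithm, whose weights and inputs are secret.

    payment_history: fraction of on-time payments, in [0, 1]
    utilization: fraction of available credit in use, in [0, 1]
    history_years: length of credit history in years
    recent_inquiries: number of recent credit applications
    """
    score = 300.0                           # floor of the conventional 300-850 range
    score += 275 * payment_history          # payment record dominates
    score += 165 * (1 - utilization)        # lower utilization is better
    score += min(history_years, 10) * 8     # longer history helps, capped at 10 years
    score -= min(recent_inquiries, 5) * 10  # many recent inquiries penalised
    return round(max(300, min(850, score)))
```

A long, clean history with low utilization (`toy_credit_score(1.0, 0.1, 10, 0)`) lands near the top of the range, while a poor record is clamped at the 300 floor — and, as the efficiency-based objections below note, every weight in such a rule embodies a contestable inference from behaviour to risk.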

One can make a good case for the use of such algorithms: they are quick, cost-effective ways to take advantage of large swathes of information. There is limited scope for humans to knit together this information in a useful way. Nevertheless, many people are disturbed and think these systems are deeply problematic. Zarsky suggests that these objections fall into two main categories (he admits that these are not exhaustive, but thinks they address the main areas of concern):


Efficiency-Based Objections: These objections target the claims often made on behalf of these systems by their creators, namely that they are more effective and accurate than human decision-makers would be.

Fairness-Based Objections: These objections argue that algorithmic decision-making processes are unfair in one or more respects. The unfairness here can be substantive (i.e. concerned with the differential impact of the process on different groups of people) or procedural (i.e. concerned with the way in which the process engages with the people who are ultimately affected).


Of course, these kinds of objections can be levelled against any decision-making system. This raises the question: what is so special about algorithmic decision-making? The answer to that might be “nothing”, but there are two properties of algorithmic decision-making that are alleged to make it unique:

Automation: Algorithmic decisions can sometimes be made with no or limited human input and oversight.

Opacity: Algorithmic decisions can lack the transparency we desire, either because the algorithms are protected by secrecy laws or because of their inherent complexity.

One of Zarsky’s goals is to see whether automation and opacity increase the potency of the efficiency and fairness-based objections, and whether transparency can help to address some of the concerns.
Acknowledging all this allows us to construct a diagram of the potential objections to algorithmic decision-making. As you can see below, there are two main branches (efficiency and fairness) which then sub-divide into a number of more specific objections. We’ll work our way through the various branches over the remainder of this post.





2. Efficiency-Based Objections
We start with efficiency-based objections. These are both the easiest to understand and the easiest to analyse. An efficiency-based objection holds that an algorithmic decision-making process is problematic due to inaccuracy. In the case of credit-scoring, the argument would be that the credit-scoring system does not provide an accurate representation of the likely credit-risk of the particular customer. There is some evidence that this is true. The bond ratings issued by agencies like Fitch, Moody’s and Standard & Poor’s prior to the 2008 financial crisis were infamously inaccurate. There is also evidence that some credit-scoring systems draw faulty inferences from certain types of behaviour. I commented on one example — seeking more information about your mortgage being an indicator of credit risk as opposed to prudence — in a previous post.

The particular examples do not matter so much here. What matters is the arguments people adduce in support of the efficiency-based objection. Zarsky suggests that there are two main arguments:

Defective Dataset: The actual dataset upon which the algorithms rely is defective in some respect, i.e. contains inaccurate or misleading information.  

Predictive Problems: The systems try to predict future human behaviour but there are often serious practical hurdles to accurate predictions. This can manifest as a tendency to draw misleading conclusions from the data.

Are these criticisms plausible? And how are they linked to the automated and opaque nature of the decision-making systems?

Zarsky suggests that these criticisms are relatively weak. There are three reasons for this. First, the problems with inaccurate data may be corrected over time or at an aggregate level. In other words, misleading information from one source could be cancelled out or swamped by accurate information from other sources. The accuracy of the overall prediction could still be (probabilistically) valid. That said, Zarsky acknowledges the need for ongoing research into this matter. Theoretical possibilities and anecdotal evidence will not be sufficient to either prove or disprove the accuracy of an algorithm.
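The first of these points — that independent errors can wash out in the aggregate — can be illustrated with a quick simulation. This is a stylised model with invented numbers, not a claim about any real scoring system: each "source" reports the true risk plus independent, zero-mean noise, and averaging many such reports drives the error down:

```python
import random

random.seed(42)  # fixed seed so the simulation is reproducible

def noisy_reports(true_value, n_sources, noise):
    """Each data source reports the true value plus independent,
    zero-mean noise drawn uniformly from [-noise, +noise]."""
    return [true_value + random.uniform(-noise, noise) for _ in range(n_sources)]

true_risk = 0.30  # the 'real' default risk we are trying to estimate
few = noisy_reports(true_risk, 3, 0.2)
many = noisy_reports(true_risk, 3000, 0.2)

# With many independent sources, the error of the average shrinks markedly
print("error from 3 sources:   ", abs(sum(few) / len(few) - true_risk))
print("error from 3000 sources:", abs(sum(many) / len(many) - true_risk))
```

The caveat Zarsky flags applies with full force here: this only works if the errors really are independent and unbiased. A systematic error — every source mis-recording the same event — does not cancel out, no matter how much data is aggregated.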

Second, even if these systems are inaccurate in certain respects, you need to compare their inaccuracy with the accuracy of alternative decision-making systems. For example, it could be that systems which assess credit risk based entirely on the subjective assessment of an individual bank employee are much more inaccurate. In that case, the inaccuracies of the algorithm might be acceptable. There is a good methodological point here: Whenever you assess policy changes you should do so comparatively, i.e. by comparing the policy with the status quo and some reasonable alternatives. When you do so, you might find that it is less objectionable than it first seems.

Third, transparency could be leveraged to improve the accuracy of such systems. For instance, people could be given the legal right to investigate and challenge the information used by the algorithm and, potentially, the source code of the algorithm itself. But Zarsky is not entirely convinced about the success of such transparency initiatives. One reason for this is that many people already have the right to scrutinise the information on their credit scores but don’t exercise those rights. Another is that making these systems more transparent may enable people to ‘game the system’. This is something I discussed in much greater detail in a previous post about Zarsky’s work.


3. Unfair Wealth Transfer Objections
Let’s move on now to fairness-related objections. These are more complex. They break down into three main subgroups. The first of these subgroups is concerned with the impact of algorithmic decision-making on the distribution of wealth (where ‘wealth’ is defined broadly to include social goods and opportunities of all kinds). The objection is based on the belief that algorithmic decision-making systems could result in wealth being unfairly distributed away from those who deserve it to those who really don’t. Zarsky notes three distinct ways in which this could happen:

From Consumers to Firms: Corporate enterprises could take unfair advantage of consumers, resulting in a wealth transfer from the consumers to the enterprises. For instance, a bank could use a credit score as the basis for manufacturing a sophisticated financial product that seems attractive to an at-risk customer but actually favours the bank in the long run. This could result in undeserved hardship to the customer.

Between Consumers: Certain consumers could take unfair advantage of these systems, resulting in a wealth transfer in their favour, to the detriment of others. So, for instance, in the case of credit-scoring and other financial algorithms, wealthy people, with teams of advisers, might be in a better position to game these systems to their advantage. This could result in further inequalities of income and wealth.

Away from Protected Groups: The algorithms could work in such a way that they have a disparate impact on groups with certain characteristics (e.g. gender, race, ethnicity, religion, sexual orientation). In most countries, these groups are explicitly protected from discrimination by law. The concern is that algorithmic decision-making could unfairly target them due to implicit or explicit biases affecting the coding process, or due to some other unknown factor.

How serious are these concerns and what role do automation and opacity have to play? Let’s take them one by one.

In relation to transfers from consumers to firms, there is no doubt that businesses may be incentivised to take advantage of less fortunate customers. The whole sub-prime mortgage crisis is a classic example. The temptation is there irrespective of automation, but there may be ways in which the complexity and opacity of algorithmic systems make it more alluring. Again, the sub-prime mortgage crisis provides some powerful lessons. The complex methods used for weighting and calculating the risk attached to mortgage bonds fueled the speculation that led to the eventual crash. Transparency may reduce the risk, but is probably insufficient by itself. Regulation and strict scrutiny of the systems used by private (and public) bodies may be needed.

In relation to transfers between consumers, this could also certainly happen. We are witnessing a significant recrudescence in wealth inequality. If people like Thomas Piketty and Anthony Atkinson are to be believed — and I believe they are — then we are now returning to levels of inequality not seen since the late 19th century. It seems plausible that wealthy elites will be well-positioned to take advantage of complex and opaque algorithmic decision-making systems, if for no other reason than that they can expend considerable resources trying to get to grips with them.

Transparency could help by levelling the playing field to some extent. But Zarsky is not convinced. Transparency could heighten the advantage of the wealthy elites. One reason why people think budgetary decision-making should be conducted in secret, and all decisions simply announced at one time, is that they worry about elite lobbying groups taking advantage of transparency to push their agendas. Furthermore, Zarsky thinks that the automated and inhuman nature of algorithmic decision-making could actually help to resolve these inequities. Current elites are propped up by a system of implicit and explicit biases among human decision-makers. Removing the human element could remove these implicit and explicit biases and result in greater equality.

Finally, when it comes to the impact on protected groups, we need to bear in mind the three different ways in which this could happen: (i) because protected characteristics (like race) are explicitly used by the algorithms when making unfair allocations; (ii) because the implicit biases of the designers result in a system that goes against the interests of the protected group; and (iii) because, for some unknown reason, the algorithm has a disparate impact on the protected group when put into practice. If (i) is happening, it should simply be banned: the whole ethos of anti-discrimination law is that you cannot use such characteristics when making allocative decisions. If (ii) is happening, then greater transparency and scrutiny of the coding process is required. And if (iii) is happening, transparency is still necessary but needs to be combined with careful empirical studies of how the systems work. Furthermore, all of this must be balanced against the possibility of using algorithmic decision-making as a way to avoid human biases that disfavour the protected groups.
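One common statistical check for possibility (iii) is to compare selection rates across groups. The sketch below applies the US EEOC's 'four-fifths' rule of thumb, under which a protected group's selection rate falling below 80% of the most-favoured group's rate is treated as prima facie evidence of adverse impact; the group names and loan figures are invented for illustration:

```python
def selection_rates(outcomes):
    """outcomes: dict mapping group -> (approved, total applicants)."""
    return {g: approved / total for g, (approved, total) in outcomes.items()}

def disparate_impact_ratio(outcomes, protected, reference):
    """Ratio of the protected group's selection rate to the reference
    group's. Under the EEOC 'four-fifths' rule of thumb, a ratio
    below 0.8 is treated as prima facie evidence of adverse impact."""
    rates = selection_rates(outcomes)
    return rates[protected] / rates[reference]

# Hypothetical loan-approval counts: (approved, total) per group
loan_decisions = {"group_a": (480, 600),   # 80% approval rate
                  "group_b": (270, 450)}   # 60% approval rate

ratio = disparate_impact_ratio(loan_decisions, "group_b", "group_a")
print(round(ratio, 2), "flagged" if ratio < 0.8 else "ok")
```

Note that a check like this only detects the disparate outcome; it says nothing about which of mechanisms (i), (ii) or (iii) produced it, which is why the empirical studies mentioned above remain necessary.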


4. Arbitrariness and Autonomy-Based Objections
The second fairness-based objection has to do with arbitrariness. The concern is that an algorithmic decision could affect a person (negatively) for a seemingly arbitrary reason, i.e. a reason that is unconnected to any factor that should lead to them being legitimately singled out by the algorithm. Take two seemingly identical people, one of whom receives a positive credit score and the other who receives a negative one. As best we can tell, there is nothing in their behaviour or personal data to explain why one should be favoured over the other, but this is what the algorithm does. In such a scenario, the decision would be arbitrary and hence unfair.

You might think this is really an efficiency-based objection, but there is a subtle difference. In the scenario being imagined, the algorithmic decision-making process as a whole could be quite efficient. In other words, in the aggregate, it might be that the process works well and is effective in distinguishing high risk from low risk customers. It is just that in this particular case it seems to have singled someone out for an arbitrary reason.

In such a scenario, it seems pretty clear that the automated and opaque nature of the decision-making process would be partly to blame. It is true that human decision-making systems could also single people out for arbitrary reasons, but in those cases it will usually be easier to figure out where the system broke down. In the case of an automated and opaque algorithmic process, it will be more difficult to conduct the investigation into what went wrong. Faith in the algorithm, despite its flaws, could be tempting. Transparency may help to alleviate this concern, but again its effectiveness may be limited since it may be impossible to deconstruct the algorithm and figure out why the error arose. All that said, the negative impact in one individual case would need to be balanced against the aggregate gains. It could be that the individual is negatively affected on one occasion, but benefits on nearly all others. As a result, the arbitrariness in the one case may be offset.

This brings us to the final fairness-based objection. This one focuses on autonomy-based harms. Here, we switch focus from the fairness of the outcome to the fairness of the procedure itself. The concern is that algorithmic decision-making processes might fail to respect the dignity and autonomy of the individuals affected by their outputs. There are several ways in which this could happen. The system could rely on data that is collected without informed consent, or it may fail to allow for meaningful human participation and scrutiny due to its intrinsic complexity.

Interestingly, Zarsky finds this type of objection to be the most intractable. Transparency could help to mitigate some of the autonomy-based harms, but not all. Procedural due process rights for algorithmic decision-making systems could also help. But, to some extent, “these concerns are inescapable when opting for an (often automated) algorithmic analysis with inherent complexities” (Zarsky 2015, 13). This is something I have spoken about at length in my various ‘threat of algocracy’ posts and talks.

Okay, that’s it. As I said, this was merely intended to provide a high level summary of some of the key debates and issues surrounding algorithmic decision-making systems. For more detailed analyses, as well as potential solutions, you should read the other posts in my series on Algocracy and the Problems of Big Data (LINK).

Wednesday, November 4, 2015

Understanding the Threat of Algocracy




On 2nd November, I gave a talk entitled "The Threat of Algocracy: Reality, Resistance and Accommodation" to the Programmable City Project at Maynooth University. You can watch the video of my presentation (minus the Q&A) above.

The talk defended one central thesis: That the increase in algorithm-based decision making poses a threat to the legitimacy of our political and legal system. The threat in question is relatively unique (due to its technological basis) and difficult to resist and accommodate.

In order to defend this thesis, I tried to ask and answer four questions:

1. What is 'algocracy'? Broadly speaking, to me 'algocracy' is the phenomenon whereby algorithms take over public decision-making systems. More precisely, the term 'algocracy' can be used to describe decision-making systems in which computer-coded algorithms structure and constrain the way in which human beings interact with these decision-making processes (see, generally, Aneesh 2009). There are many different possible algocratic systems. I focus on algocratic systems made possible by the rise of big data, the internet of things, surveillance, data-mining and predictive analytics.
2. What is the 'threat of algocracy'?  Public decision-making processes ought to be legitimate. Most people take this to mean that the processes should satisfy a number of proceduralist and instrumentalist conditions. In other words, the processes should be fair and transparent whilst at the same time achieving good outcomes. The problem with algocratic systems is that they tend to favour good outcomes over transparency and fairness. This is the threat they pose to political legitimacy.
3. Can we (or should we) resist the threat? I argue that it is difficult to resist the threat of algocracy (i.e. to dismantle or block the creation of algocratic systems) due to the ubiquity of the technology and the strength of the political and economic forces favouring the creation of algocratic systems. I also argue that, in many cases, it may not be morally desirable to dismantle or block the creation of such systems.
4. Can we accommodate the threat? I argue that it is difficult to accommodate the threat of algocracy (i.e. to allow for meaningful participation in and comprehension of these systems). I examine three possible accommodationist solutions and find them lacking in several respects.

The talk provides more detail on these four questions. I find it difficult to watch and listen to myself give presentations of this sort, but other people may find it more tolerable. And if you can't get enough of this topic, I did an interview on the Review the Future podcast about it last year and I also wrote a short post describing the nature of the threat a couple of years back.