Monday, November 16, 2015

Is Anyone Competent to Regulate Artificial Intelligence?




Artificial intelligence is a classic risk/reward technology. If developed safely and properly, it could be a great boon. If developed recklessly and improperly, it could pose a significant risk. Typically, we try to manage this risk/reward ratio through various regulatory mechanisms. But AI poses significant regulatory challenges. In a previous post, I outlined eight of these challenges. They were arranged into three main groups. The first consisted of definitional problems: what is AI anyway? The second consisted of ex ante problems: how could you safely guide the development of AI technology? And the third consisted of ex post problems: what happens once the technology is unleashed into the world? They are depicted in the diagram above.

In outlining these problems, I was drawing from the work of Matthew Scherer and his soon-to-be-published article “Regulating Artificially Intelligent Systems: Risks, Challenges, Competencies and Strategies”. Today I want to return to that article and consider the next step in the regulatory project. Once we have a handle on the basic problems, we need to consider who might be competent to deal with them. In most Western countries, there are three main regulatory bodies:

Legislatures: The body of elected officials who enact general laws and put in place regulatory structures (e.g. in the US the Houses of Congress; in the UK the Houses of Parliament; in Ireland the Houses of the Oireachtas).

Regulatory Agencies: The body of subject-area specialists, established through legislation, and empowered to regulate a particular industry/social problem, often by creating, investigating and enforcing regulatory standards (there are many examples, e.g. the US Food and Drug Administration; the UK Financial Conduct Authority; the Irish Planning Board (An Bord Pleanála)).

Courts: The judges and other legal officials tasked with arguing, adjudicating and, ultimately, enforcing legal standards (both civil and criminal).

To these three bodies, you could perhaps add “The Market”, which can enforce certain forms of discipline on private commercial entities, and also internal regulatory bodies within those commercial entities (though such bodies are usually forced into existence by law). For the purposes of this discussion, however, I’ll be sticking to the three bodies just outlined. The question is whether any of these three bodies is competent to regulate the field of artificial intelligence. This is something Scherer tries to answer in his article. I’ll follow his analysis in the remainder of this post, but where Scherer focuses entirely on the example of the United States, I’ll try to be a little more universal.

Before I get underway, it is worth flagging two things about AI that could affect the competency of anyone to regulate its development and deployment. The first is that AI is (potentially) a rapidly advancing technology: many technological developments made over the past 50 years are now coming together in the form of AI. This makes it difficult for regulatory bodies to ‘keep up’. The second is that advances in AI can draw on many different fields of inquiry, e.g. engineering, statistics, linguistics, computer science, applied mathematics, psychology, economics and so on. This makes it difficult for anyone to have the relevant subject-area expertise.


1. The Competencies of Legislatures
Legislatures typically consist of elected officials, chosen to represent the interests of particular constituencies of voters, with the primary goal of enacting policy via legislation. Legislatures are set up slightly differently around the world. For example, in some countries, there are non-elected legislatures working in tandem with elected legislatures. In some countries, lobbyists have significant influence over legislators; in others this influence is relatively weak. In some countries, the executive branch of government effectively controls the legislature; in others the executive is an entirely distinct branch of government.

Scherer argues that three things must be remembered when it comes to understanding the regulatory role of a legislature:

Democratic Legitimacy: The legislature is generally viewed as the institution with the most democratic legitimacy, i.e. it is the institution that represents the people’s interests and answers directly to them. Obviously, the perceived legitimacy of the legislature can wax and wane (e.g. it may wane when lobbying power is excessive). Nevertheless, it will still tend to have more perceived democratic legitimacy than the other regulatory bodies.

Lack of Expertise: Legislatures are generally made up of career politicians. It is very rare for these career politicians to have subject matter expertise when it comes to a proposed regulatory bill. They will have to rely on judgments from constituents, advisors, lobbyists and experts called to give evidence before a legislative committee.

Delegation and Oversight: Legislatures have the ability to delegate regulatory power to other agencies. Sometimes they do this by creating an entirely new agency through a piece of legislation. Other times they do so by expanding or reorganising the mission of a pre-existing agency. The legislature then has the power to oversee this agency and periodically call it to account for its actions.

What does all this mean when it comes to the AI debate? It means that legislatures are best placed to determine the values and public interests that should go into any proposed regulatory scheme. They are directly accountable to the people and so they can (imperfectly) channel those interests into the formation of a regulatory system. Because they lack subject matter expertise, they will be unable to determine particular standards or rules that should govern the development and deployment of AI. They will need to delegate that power to others. But in doing so, they could set important general constraints that reflect the public interest in AI.

There is nothing too dramatic in this analysis. This is what legislatures are best-placed to do in virtually all regulatory matters. That said, the model here is idealistic. There are many ways in which legislatures can fail to properly represent the interests of the public.


2. The Competencies of Regulatory Agencies
Regulatory agencies are bodies established via legislation and empowered to regulate a particular area. They are quite variable in terms of structure and remit. This is because they are effectively designed from scratch by legislatures. In most legal systems, there are some general constraints imposed on possible regulatory structures by constitutional principles (e.g. a regulatory agency cannot violate or undermine constitutionally protected rights). But this still gives plenty of scope for regulatory innovation.

Scherer argues that there are four things about regulatory agencies that affect their regulatory competence:

Flexibility: This is what I just said. Regulatory agencies can be designed from scratch to deal with particular industries or social problems. They can exercise a variety of powers, including policy-formation, rule-setting, information-collection, investigation, enforcement, and sanction. That said, flexibility tends to diminish over time. Most of the flexibility arises during the ‘design phase’. Once an agency comes into existence, it tends to become more rigid for both sociological and legal reasons.

Specialisation and Expertise: Regulatory agencies can appoint subject-matter experts to assist in their regulatory mission. Unlike legislatures, which have to deal with all social problems, an agency can stay focused on a single mission. This enhances its expertise. After all, expertise is a product of both (a) pre-existing qualification/ability and (b) singular dedication to a particular task.

Independence and Alienation: Regulatory agencies are set up so as to be independent from the usual vagaries of politics. Thus, for example, they are not directly answerable to constituents and do not have to stand for election every few years. That said, the independence of agencies is often more illusory than real. Agencies are usually answerable to politicians and so (to some extent) vulnerable to the same forces. Lobbyists often exert influence over regulatory agencies (in some countries there is a well-known ‘revolving door’ for staff between lobbying firms, private enterprises, and regulatory agencies). Finally, independence can come at the price of alienation, i.e. a perceived lack of democratic legitimacy.

The Power of Ex Ante Action: Regulatory agencies can establish rules and standards that govern companies and organisations when they are developing products and services. This allows them to have a genuine impact on the ex ante problems in any given field. This makes them very different from the courts, who usually only have ex post powers.


What does this mean for AI regulation? Well, it means that a bespoke regulatory agency would be best placed to develop the detailed, industry-specific rules and standards that should govern the research and development of AI. This agency could appoint relevant experts who could further develop their expertise through their work. This is the only way to really target the ex ante problems highlighted previously.

But there are clearly limitations to what a bespoke regulatory agency can do. For one thing, the fact that regulatory structures become rigid once created is a problem when it comes to a rapidly advancing field like AI. For another, because AI potentially draws on so many diffuse fields, it may be difficult to recruit an appropriate team of experts. Relevant insights that catapult AI development into high gear may come from unexpected sources. Furthermore, people who have the relevant expertise may be hoovered up by the enterprises they are trying to regulate. Once again, we may see a revolving door between the regulatory agency and the AI industry.


3. The Competencies of Courts
Courts are judicial bodies that adjudicate on particular legal disputes. They usually have some residual authority over regulatory agencies. For instance, if you are penalised by a regulatory agency you will often have the right to appeal that decision to the courts. This is a branch of law known as administrative law. Although legal rules vary, most courts adopt a pretty deferential attitude toward regulatory agencies. They do so on the grounds that the agencies are the relevant subject-matter experts. That said, courts can still use traditional legal mechanisms (e.g. criminal law or tort law) to resolve disputes that may arise from the use of a technology or service.

Scherer focuses on the tort law system in his article. So the scenario lurking in the background of his analysis is a case in which someone is injured or harmed by an AI system and tries to sue the manufacturer for damages. He argues that four things must be kept in mind when assessing the regulatory competence of the tort law system in cases like this:

Fact-Finding Powers: Rules of evidence have been established that give courts extensive fact-finding powers in particular disputes. These rules reflect both a desire to get at the truth and to be fair to the parties involved. This means that courts can often acquire good information about how products are designed and safety standards implemented, but that information is tailored to a particular case and not to what happens in the industry more generally.

Reactive and Reactionary: Courts can only intervene and impose legal standards after a problem has arisen. This can have a deterrent effect on future activity within an industry. But the reactive nature of the court also means that it has a tendency to be reactionary in its rulings. In other words, courts can be victims of “hindsight bias” and assume that the risk posed by a technology is greater than it really is.

Incrementalist: Because courts only deal with individual cases, and because the system as a whole moves quite slowly, the court system can really only make incremental changes.

Misaligned Incentives: In common law systems, the litigation process is adversarial in nature: one side prosecutes a claim; the other defends. Lawyers only take cases to court that they think can be won. They call witnesses that support their side. In this, they are concerned solely with the interests of their clients, not with the interests of the public at large. That said, in some countries class actions are possible, which allow many people to bring the same type of case against a defendant. This means some cases can represent a broader set of interests.

What does all this mean for AI regulation? Well, it suggests that the court system cannot deal with any of the ex ante problems alluded to earlier on. It can only deal with ex post problems. Furthermore, in dealing with those problems, it may move too slowly to keep up with the rapid advances in the technology, and may tend to overestimate the risks associated with the technology. If you think those risks are great (bordering on the so-called “existential” risk-category proposed by Nick Bostrom), this reactionary nature might be a good thing. But, even still, the slowness of the system will count against it. Scherer thinks this tips the balance decisively in favour of some specially constructed regulatory agency.




4. Conclusion: Is there hope for regulation?
Now that we have a clearer picture of the regulatory ecosystem, we can think more seriously about the potential for regulation in solving the problems of AI. Scherer has a proposal in his article, sketched out in some reasonable detail. It involves leveraging the different competencies of the three bodies. The legislature should enact an Artificial Intelligence Development Act. The Act should set out the values for the regulatory system:

[T]o ensure that AI is safe, secure, susceptible to human control, and aligned with human interests, both by deterring the creation of AI that lack those features and by encouraging the development of beneficial AI that include those features. 
(Scherer 2015)

The Act should, in turn, establish a regulatory agency with responsibility for the safe development of AI. This agency should not create detailed rules and standards for AI, and should not have the power to sanction or punish those who fail to comply with its standards. Instead, it should create a certification system, under which agency members can review and certify an AI system as “safe”. Companies developing AI systems can volunteer for certification.

You may wonder why any company would bother to do this. The answer is that the Act would also create a differential system of tort liability. Companies that undergo certification will have limited liability in the event that something goes wrong. Companies that fail to undergo certification will face strict liability standards if something goes wrong. Furthermore, this strict liability system will be joint and several in nature: any entity in the design process could face full liability. This creates an incentive for AI developers to undergo certification, whilst at the same time not overburdening them with compliance rules.
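To make that incentive structure a little more concrete, here is a minimal back-of-the-envelope sketch in Python. All of the figures (probability of harm, damages, liability cap, certification cost) are invented assumptions for the purposes of illustration; they do not come from Scherer’s article, which does not specify any numbers.

```python
# Purely illustrative sketch of the incentive created by a two-track
# (certified vs. uncertified) liability regime. All numbers are hypothetical.

def expected_liability(certified: bool,
                       p_harm: float = 0.01,          # assumed probability of a harmful incident
                       damages: float = 50_000_000,   # assumed damages if harm occurs
                       liability_cap: float = 5_000_000) -> float:
    """Expected liability cost for a developer under the two-track regime."""
    if certified:
        # Certified developers enjoy limited liability: exposure is capped.
        return p_harm * min(damages, liability_cap)
    # Uncertified developers face strict, joint and several liability:
    # any entity in the design chain can be held for the full damages.
    return p_harm * damages

certification_cost = 250_000  # assumed one-off cost of seeking certification

if expected_liability(True) + certification_cost < expected_liability(False):
    print("Certification is the cheaper option for this (hypothetical) developer.")
else:
    print("Certification does not pay off under these assumptions.")
```

Under these made-up numbers the capped exposure plus the cost of certification comes in well below the uncapped exposure, so a rational developer would opt in. The point is simply that the differential liability regime, rather than direct sanctions, is what does the regulatory work.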

In a way, this is a clever proposal. It tries to balance the risks and rewards of AI. The belief is that we shouldn’t stifle creativity and development within the sector, and that we should encourage safe and beneficial forms of AI. My concern is that this system misses some of the unique properties of AI that make it such a regulatory challenge. In particular, the proposal seems to ignore (a) the difficulty of finding someone to regulate and (b) the control problem.

This is ironic given that Scherer was quite good at outlining those challenges in the first part of his article. There, he noted how AI developers need not be large, well-integrated organisations based in a single jurisdiction. But if they are not, then it may be difficult to ‘reach’ them with the proposed regulatory regime. I am guessing the joint and several liability proposal is designed to address this problem, since it creates an incentive for anyone involved in the process to undergo certification, but it assumes that diffuse networks of developers have the end goal of producing a ‘consumer’-type device. This may not be true.

Furthermore, earlier in the article, Scherer noted how AI systems can do things that are beyond the control or anticipation of their original designers. This creates liability problems but these problems can be addressed through the use of strict liability standards. At the same time, however, it also creates problems in the certification process. Surely if AI systems can act in unplanned and unanticipated ways, it follows that members of a putative regulatory agency would not be well-equipped to certify an AI system as “safe”? That could be concerning. The proposed system would probably be better than nothing, and we shouldn’t make the perfect the enemy of the good, but anyone who is convinced of the potential for AI to pose an “existential threat” to humanity is unlikely to think that regulation of this sort can play a valuable role in mitigating that risk.

Scherer is aware of this. He closes by stating that his goal is not to provide the final word but rather to start a conversation on the best legal mechanisms for managing AI risk. That’s certainly a conversation that needs to continue.
