Wednesday, December 4, 2019

Will we ever have fully autonomous vehicles? Some reasons for pessimism




What is the future of the automotive industry? If you’ve been paying attention over the past decade, you’ll know the answer: self-driving (a.k.a. autonomous) vehicles. Instead of relying on imperfect, biased, lazy and reckless human beings to get us from A to B, we will rely on sophisticated and efficient computer programs. This future may not be that far away. We already rely on computers to fly planes and drive trains; all we will be doing is extending that reliance to the roads and public highways.

There are, of course, some technical hurdles to overcome. The public highways are more unpredictable than the skies and the railways. But impressive strides have been made with driverless technology in recent years, and it doesn’t seem implausible that it will become widespread within the next 10-15 years. Once it does, the benefits will be great, at least if you believe the hype: there will be fewer accidents, and we will all have more time during our daily commutes to focus on the things we love to do, whether that is catching up on work or TV, posting to social media and so on. There will also be beneficial side effects. Less space will need to be allocated to car parks in our cities and towns, allowing us to create more pleasant urban living spaces; the traffic system might become more efficient and less crowded; there may even be a drop in light pollution.

Will any of this come to pass? In this article, I want to argue for a slightly unusual form of scepticism about the future of self-driving vehicles. This scepticism has two elements to it. First, I will argue that a combination of ethical, legal and strategic factors will encourage us not to make and market fully autonomous vehicles. Second, I will argue that despite this disincentive, many of us will, in fact, treat vehicles as effectively fully autonomous. This could be very bad for those of us expected to use such vehicles.

I develop this argument in three stages. I start with a quick overview of the six ‘levels’ of automated driving (0 to 5) that have been proposed by the Society of Automotive Engineers. Second, I argue that concerns about responsibility and liability ‘gaps’ may cause us to get stuck at the middle levels of automated driving. Third, and finally, I consider some of the consequences of this.


1. Getting Stuck: The Levels of Autonomous Driving
If you have spent any time reading up on autonomous vehicles you will be familiar with the ‘levels of autonomy’ framework. First proposed and endorsed by the Society of Automotive Engineers, the framework tries to distinguish between different degrees of vehicle autonomy. The diagram below illustrates the framework.

[Diagram: the SAE levels of driving automation, running from Level 0 (no automation) to Level 5 (full automation)]

This framework has been explained to me in several different ways over the years. I think it is fair to say that nobody thinks the different levels are obvious and discrete categories. The assumption is that there is probably a continuum of possible vehicles ranging from the completely non-autonomous at one end of the spectrum* to the fully autonomous at the other. But it is hard for the human mind to grasp a smooth continuum of possibility and so it helps if we divide it up into discrete categories or, in this case, levels.

What of the levels themselves? The first level, so-called ‘Level 0’, covers all traditional vehicles: the ones where the human driver performs all the critical driving functions like steering, braking, accelerating, lane changing and so on. The second level (Level 1) covers vehicles with some driver-assist technologies, e.g. enhanced or assisted braking and parking. Many of the cars we buy nowadays have such assistive features. Level 2 covers vehicles with some automated functions, e.g. automated steering, acceleration and lane changing, but in which the human driver is still expected to play an active supervisory and interventionist role. Tesla’s Enhanced Autopilot is often said to be an example of Level 2 automation: the agreement Tesla users accept when they enable the Autopilot software stipulates that they must remain alert and willing to take control at all times. Level 3 covers vehicles with more automated functionality than Level 2. It is sometimes said to involve ‘conditional autonomy’, which means the vehicle can do most things by itself, but a human is still expected to be an alert supervisor and has to intervene when requested to do so by the vehicle (usually when the vehicle encounters some situation involving uncertainty). Waymo’s vehicles are sometimes claimed to be Level 3 vehicles (though there is some dispute about this). Level 4 covers vehicles with the capacity for full automation in defined conditions, but with a residual role for human supervisors. Finally, Level 5 covers vehicles that are fully automated, with no role for human intervention.
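For readers who prefer a compact summary, here is a minimal sketch of the taxonomy as a simple data structure. The descriptions are my own paraphrase of the levels just described, not the SAE’s official wording:

```python
# Rough paraphrase of the SAE levels discussed above (not the official SAE J3016 wording).
SAE_LEVELS = {
    0: ("No automation", "human performs all driving tasks"),
    1: ("Driver assistance", "human drives; car assists with e.g. braking or parking"),
    2: ("Partial automation", "car steers/accelerates; human must supervise and be ready to intervene"),
    3: ("Conditional automation", "car handles most tasks; human must take over when requested"),
    4: ("High automation", "car drives itself in defined conditions; residual human supervisory role"),
    5: ("Full automation", "no human intervention required"),
}

def human_in_the_loop(level: int) -> bool:
    """Below Level 5, some form of handover to a human remains part of the design."""
    return level < 5

if __name__ == "__main__":
    for level, (name, role) in SAE_LEVELS.items():
        print(f"Level {level} ({name}): {role}; human in the loop: {human_in_the_loop(level)}")
```

As the sketch makes plain, Level 5 is the only level at which the human drops out of the design entirely.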

The critical point is that all the levels of automation between 1 and 4 (and especially between 2 and 4) assume that there is an important role for human ‘drivers’ in the operation of autonomous vehicles. That is to say, until we arrive at Level 5 automation, humans are never ‘off the hook’ or ‘out of the loop’ when it comes to controlling the vehicle. They can never simply sit back and relax. They have to be alert to the possibility of taking control. This, in turn, means that any autonomous vehicle that falls short of Level 5 will have to include some facility or protocol for handing over control from the vehicle to the human user, at least in some cases.

While this ‘levels of automation’ model has been critiqued, it is useful for present purposes. It helps me to clarify my central thesis, which is that there are important ethical, legal and strategic reasons why we may never get to Level 5 automation. This means we are most likely to get stuck somewhere around Levels 3 and 4 (most likely Level 3), at least officially. Some people will say this is a good thing because they think humans should exercise ‘meaningful control’ over autonomous driving systems. But I think it might be a bad thing, because people will tend to treat these vehicles as if they were effectively fully autonomous.

Let me now explain why I think this is the case.


2. Why we might get stuck at Level 3 or 4
The argument for thinking that we might get stuck at Level 3 or 4 is pretty straightforward, and I am not the first to make it. In the debate about autonomous vehicles, one of the major ethical and legal concerns arising from their widespread deployment is that they might create responsibility or liability gaps. The existence, or even the perceived existence, of these gaps creates an incentive not to create fully autonomous vehicles.

Our current legal and ethical approach to driving assumes that, in almost all cases, the driver is responsible if something goes wrong. He or she can be held criminally liable for reckless or dangerous driving, and can be required to pay compensation to the victims of any crashes that result. The latter is, of course, usually facilitated through a system of insurance, but, except in countries like New Zealand (with its no-fault accident compensation scheme), the system of insurance still defaults to the assumption of individual driver responsibility. There are some exceptions to this. If there was a design defect in the car then liability may shift to the manufacturer, but this can be quite difficult to prove in practice.

The widespread deployment of autonomous vehicles throws this existing system into disarray because it raises questions as to who or what is responsible in the event of an accident. Is the person sitting in the vehicle responsible if the autonomous driving program does something wrong? Presumably not, if they were not the ones driving the car at the time. This implies that the designers and manufacturers should be held responsible. But what if the defect in the driving program was not reasonably foreseeable or if it was acquired as a result of the learning algorithm used by the system? Would it be fair, just and reasonable to impose liability on the manufacturers in this case? Confusion as to where responsibility lies in such cases gives rise to worries about responsibility ‘gaps’.

There are all sorts of proposals to plug the gap. Some people think it is easy enough to ‘impute’ driverhood to the manufacturers or designers of the autonomous vehicle program. Jeffrey Gurney, for example, has made this argument. He points out that if a piece of software is driving the vehicle, it makes sense to treat it as the driver of the car. And since it is under the ultimate control of the manufacturer, it makes sense to impute driverhood to them, by proxy. What it doesn’t make sense to do, according to Gurney, is to treat the person sitting in the vehicle as the driver. They are really just a passenger. This proposal has the advantage of leaving much of the existing legal framework in place. Responsibility is still applied to the ‘driver’ of the vehicle; the driver just happens to no longer be sitting in the car.

There are other proposals too, of course. Some people argue that we should modify existing product liability laws to cover defects in the driving software. Some favour applying a social insurance model to cover compensation costs arising from accidents. Some like the idea of extending ‘strict liability’ rules to prevent manufacturers from absolving themselves of responsibility simply because something wasn’t reasonably foreseeable.

All these proposals have some merit, but what is interesting about them is that (a) they assume the responsibility ‘gap’ problem arises when the car is operating in autonomous mode (i.e. when the computer program is driving the car) and (b) they assume that, in such a case, the most fair, just and reasonable thing to do is to apply liability to the manufacturers or designers of the vehicle. This, however, ignores the fact that most autonomous vehicles are not fully autonomous (i.e. not Level 5 vehicles) and that manufacturers have a strong incentive to push liability onto the user of the vehicle, if they can get away with it.

This is exactly what the existence of Levels 2 to 4 allows them to exploit. By designing vehicles in such a way that there is always some allowance for handover of control to a human driver, manufacturers can create systems that ‘push’ responsibility onto humans at critical junctures. To repeat the example already given, this is what Tesla did when it initially rolled out its Autopilot program: it required users to sign an agreement stating that they would remain alert and ready to take control at all times.

Furthermore, it’s not just the financial and legal incentives of the manufacturers that might favour this set-up. There are also practical reasons to favour this arrangement in the long run. It is a very difficult engineering challenge to create a fully autonomous road vehicle: the road environment is too unpredictable and messy. It’s much easier to create a system that can do some (perhaps even most) driving tasks but leave others to humans. Why go to the trouble of creating a fully autonomous Level 5 vehicle when it would be such a practical challenge and when there is little financial incentive for doing so? Similarly, it might even be the case that policy-makers and legal officials favour sticking with Levels 2 to 4. Allowing for handover to humans will enable much of the existing legal framework to remain in place, perhaps with some adjustments to product liability law to cover software defects. Drivers might also like this because it allows them to maintain some semblance of control over their vehicles.

That said, there are clearly better and worse ways to manage the handover from computer to human. One of the problems with the Tesla system was that it required constant vigilance and supervision, and potentially split-second handover to a human. This is tricky since humans struggle to maintain concentration when using automated systems and may not be able to do anything with a split-second handover.

Some engineers refer to this as the ‘unsafe valley’ problem in the design of autonomous vehicles. In a recent paper on the topic, Frank Flemisch and his colleagues have proposed a way to get out of this unsafe valley by having a much slower and safer system of handover to a human. Roughly, they call for autonomous vehicles that handle the more predictable driving tasks (e.g. driving on a motorway), have a long lead-in time for warning humans when they need to take control of the vehicle, and go to a ‘safe state’ (e.g. slow down and pull in to the hard shoulder or lay-by) if the human does not heed these warnings.
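To make the idea concrete, here is a minimal sketch of that kind of slow, graceful handover logic. The state names, lead-in time and transitions are hypothetical illustrations of the general approach, not Flemisch and colleagues’ actual specification:

```python
from enum import Enum, auto

class Mode(Enum):
    AUTONOMOUS = auto()        # car handles the predictable tasks (e.g. motorway driving)
    HANDOVER_WARNING = auto()  # long lead-in: car repeatedly warns the human to take over
    HUMAN_CONTROL = auto()     # human has confirmed they are now driving
    SAFE_STATE = auto()        # car slows down and pulls in to the hard shoulder / lay-by

WARNING_LEAD_IN_SECONDS = 60.0  # hypothetical lead-in time before the car gives up on the human

def next_mode(mode: Mode, takeover_needed: bool, human_has_taken_over: bool,
              seconds_since_warning: float) -> Mode:
    """One step of a simplified handover state machine."""
    if mode is Mode.AUTONOMOUS:
        # A situation the car cannot handle is detected well in advance.
        return Mode.HANDOVER_WARNING if takeover_needed else Mode.AUTONOMOUS
    if mode is Mode.HANDOVER_WARNING:
        if human_has_taken_over:
            return Mode.HUMAN_CONTROL
        if seconds_since_warning >= WARNING_LEAD_IN_SECONDS:
            # No split-second demand: if the human never responds, degrade gracefully.
            return Mode.SAFE_STATE
        return Mode.HANDOVER_WARNING
    return mode  # HUMAN_CONTROL and SAFE_STATE are terminal in this sketch
```

The important design choice is the final transition: instead of assuming the human will always respond in time, the vehicle has a planned fallback for when they do not.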

This model of autonomous driving is interesting. If it works, it could make Level 3-type systems much safer. But either way, the momentum seems to be building toward a world in which we never get to fully autonomous vehicles. Instead, we get stuck somewhere in between.


3. The Consequences of Getting Stuck
Lots of people will be happy if we get stuck at Level 3 or 4. Getting stuck means that we retain some illusion of meaningful human control over these systems. Even if the motives for getting stuck are not entirely benevolent, it still means that we get some of the benefits of the technology, while at the same time respecting the dignity and agency of the human beings who use these systems. Furthermore, even if we might prefer it if manufacturers took more responsibility for what happens with these systems, getting stuck at Level 3 or 4 means we still get to live in a world where some human is in charge. That sounds like a win-win.

But I’m a little more sceptical. I think getting stuck might turn out to be a bad thing. To make the case for this I will use the legal distinction between de jure and de facto realities. The de jure reality is what the law says should be the case; the de facto reality is what actually happens on the ground. For example, it might say in a statute somewhere that people who possess small quantities of recreational drugs are doing something illegal and ought to be sentenced to jail as a result. That’s the de jure reality. In practice, it might turn out that the legal authorities turn a blind eye to anyone that possesses a small quantity of such drugs. They don’t care because they have limited resources and bigger fish to fry. So the de facto reality is very different from the de jure reality.

I think a similar divergence between the official, legal reality and what happens on the ground might arise if we get stuck at Level 3 or 4. The official position of manufacturers might be that their vehicles are not fully autonomous and require human control in certain circumstances. And the official legal and policy position might be that fully autonomous vehicles cannot exist and that manufacturers have to create ‘safe’ handover systems to allow humans to take control of the vehicles when need be. But what will the reality be on the ground? We already know that drivers using Level 2 systems flout the official rules. They sit in the back seat or watch movies on their phones when they should be paying attention to what is happening (they do similar things in non-autonomous vehicles). Is this behaviour likely to stop in a world with safer handover systems? It’s hard to see why it would. So we might end up with a de facto reality in which users treat their vehicles as almost fully autonomous, and a de jure world in which this is not supposed to happen.

Here’s the crucial point: the users might be happy with this divergence between de facto and de jure reality. They might be happy to treat the systems as if they are fully autonomous because this gives them most of the benefits of the technology: their time and attention can be taken up by something else. And they might be happy to accept the official legal position because they don’t think they are likely to get into an accident that makes the official legal rules apply to them in a negative way. Many human drivers already do this. How many people reading this article have broken the speed limit whilst driving, or driven while hovering around the legal limit for alcohol, or driven when excessively tired? Officially, most drivers know that they shouldn’t do these things; in practice they do them because they doubt they will suffer the consequences. The same might be true in the case of autonomous vehicles. Drivers might treat them as close to fully autonomous because the systems are safe enough to let them get away with this most of the time. They discount the possibility that something will go wrong. What we end up with, then, is a world in which we have an official illusion of ‘meaningful control’ that disadvantages the primary users of autonomous vehicles, but only when something goes wrong.

Of course, there is nothing inevitable about the scenario I am sketching. It might be possible to design autonomous driving systems so that it is practically impossible for humans to flout the official rules (e.g. perhaps facial recognition technology could be used to ensure humans are paying attention and some electric shock system could be used to wake them up if they are falling asleep). It might also be possible to enforce the official position in a punitive way that makes it very costly for human users to flout the official rules (though we have been down this path before with speeding and drink-driving laws). The problem with doing this, however, is that we have to walk a very fine line. If we go too far, we might make using an autonomous vehicle effectively the same as using a traditionally human-driven vehicle and thus prevent us from realising the alleged benefits of these systems. If we don’t go far enough, we don’t resolve the problem.
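To illustrate what that enforcement might involve, here is a crude, purely hypothetical sketch of an attention-monitoring escalation policy. The attention score, thresholds and responses are all invented for illustration; I am not describing any existing system:

```python
def enforcement_response(attention_score: float, seconds_inattentive: float) -> str:
    """Map an (assumed) in-cabin attention estimate to an escalating response.

    'attention_score' is assumed to come from some driver-facing monitoring
    system; the thresholds below are arbitrary illustrations.
    """
    ATTENTIVE_THRESHOLD = 0.6  # hypothetical cut-off for "paying attention"
    if attention_score >= ATTENTIVE_THRESHOLD:
        return "no action"
    if seconds_inattentive < 5:
        return "visual warning"
    if seconds_inattentive < 15:
        return "audible alarm"
    # Beyond this point the vehicle treats the human as unavailable
    # and falls back to a safe state (slow down, pull over).
    return "slow down and pull over"
```

The fine line mentioned above shows up here as a tuning problem: make the thresholds too strict and the ‘autonomous’ vehicle demands as much attention as a traditional one; make them too lax and the de facto reality of near-full autonomy reasserts itself.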

Alternatively, we could embrace the idea of autonomous driving and try not to create an incentive to get stuck at Level 3 or 4. I’m not sure what the best outcome is but there are tradeoffs inherent in both.

* Although I do have some qualms about referring to any car or automobile as non-autonomous since, presumably, at least some functions within the vehicle are autonomous. For example, many of the things that happen in the engine of my car happen without my direct supervision or control. Indeed, if you asked me, I wouldn’t even know how to supervise and control the engine.



3 comments:

  1. We need to decide whether the concept of "responsibility" is still viable or whether the improvements in safety given by full automation warrant going to a concept where accidents are treated more like natural occurrences. Personal insurance could cover the expenses associated with a single accident in this scenario, similar to fire or flood insurance.

  2. Interesting discussion but I think it misses a central line of development. There will be very strong economic incentives for fully autonomous delivery vehicles, to avoid paying salaries to delivery drivers.

    The technology and law for these vehicles will then transfer to the passenger sector.

  3. I have serious doubts whether Level 3/4 is practical. This level relies on a human being available to take control of the vehicle in an instant. When a driver is operating a vehicle at Level 0, all capabilities of the driver, particularly action capabilities, are continuously involved. At Level 3/4, the driver is asked to act as a passive monitor, yet be ready to take full control at any instant. The "de jure" position is that the driver is capable of switching modes in milliseconds when needed. The "de facto" position is that this may not be reliably possible: it may take several seconds to switch between modes. This position also assumes that the driver is capable of constantly maintaining concentration and awareness of the total environment and is mentally ready to make the switch in mode. Some people may be capable of doing this, but I suspect that most will not be able to maintain a high enough level of concentration on the driving situation over a period potentially many hours long. Such people may need several minutes to make the necessary mental transition. I feel that this type of person will not be capable of driving a Level 3/4 vehicle safely. I consider myself one of those people and there is no way that I would attempt to drive a Level 3/4 vehicle. A good Level 5 implementation, however, would be very attractive to me.
