Lots of people are interested in the ethics of autonomous vehicles. Indeed, the philosophical literature on this topic has grown unwieldy in the past few years. Whereas once upon a time it was possible for one person to read and understand everything that had been published on this issue, I suspect that there is now so much written, and being written, that it has become impossible to keep up.
This is, in some ways, unfortunate. While there is a lot of good work being done, there is a tendency for popular discussions of the ethical issues to fixate on simplistic thought experiments such as the infamous ‘trolley’ dilemmas. This creates the impression that figuring out what an autonomous vehicle should do in such a case is the be-all and end-all of the ethical debate. This isn’t true. While there is some value to considering such hypothetical cases, they are edge cases that do not provide the best guide to thinking about how autonomous vehicles should react in all dilemmatic cases. Furthermore, there are other ethical issues arising from the use of such vehicles that need to be considered and are often overlooked.
I say all this by way of apology for what you are about to read. Although I agree with the conclusion reached at the end of the preceding paragraph, I have to confess that I enjoy thinking about hypothetical edge cases. They bring into sharp relief some of the most fascinating ethical concepts and questions with which we must contend. I am going to discuss one such hypothetical edge case in the remainder of this article. The edge case concerns whether we should design a system of autonomous vehicles in such a way that it allows for individuals to voluntarily sacrifice themselves in the case of unavoidable crashes.
Let me first explain what I mean by this and then consider the arguments for and against it.
1. The Self-Sacrifice Device
To explain the idea, I have to say something about the nature of unavoidable crash scenarios. This may be familiar to some readers; they should feel free to skip ahead to the next paragraph. An unavoidable crash scenario is one in which a car is going to collide with someone or something and must choose between potential sites of collision. The typical set-up is a modified version of the trolley dilemma. A car is driving down a road when it is suddenly confronted with two sets of pedestrians occupying both sides of the road. On one side is an elderly couple; on the other is a group of children (or any other set of pedestrians). It is impossible for the car to avoid colliding with one of the two sets, and so a split-second decision must be made as to which set of pedestrians should be saved and which sacrificed. Many variations of this basic set-up are possible. For example, instead of choosing between sets of pedestrians, perhaps the car has to choose between colliding with a crash barrier (thereby injuring or killing the driver and passengers) and colliding with a group of pedestrians. Either way, the important point is that in these cases a harmful outcome is unavoidable (they are genuine dilemmas); the key ethical issue is not to prevent harm but to select between harmful outcomes. Sometimes it will be possible to minimise the amount of harm; other times the harmful outcomes may be equally weighted. If a human is driving the car, then the human must make the split-second decision. If a computer program is in control, then its programming must instruct it what to do in such a case.
Truly unavoidable crash scenarios of this sort are probably quite rare. I am not familiar with any studies that have been done on the matter, but my guess is that many real-world crash scenarios don’t involve such stark and equally weighted choices. There is much more uncertainty and imbalance in practice. This is one reason why some people think it is a mistake for the ethical debate about autonomous vehicles to become dominated by the discussion of such scenarios. Nevertheless, I persist.
I do not persist in the hope of discussing all possible resolutions of such cases. Instead, I persist in the hope of discussing the role that self-sacrifice might play in addressing them. In a previous article, I looked at a thought experiment from Hin-Yan Liu concerning the creation of “immunity devices” that could be used in unavoidable crash scenarios. Liu’s idea was that it would probably be possible to create a device (just a small RFID chip, perhaps) that would emit a signal telling a self-driving vehicle that the person wearing the device should not be sacrificed in the event of an unavoidable crash scenario. The effect of such a device might not be dissimilar to other forms of immunity that are granted to people by law (e.g. diplomatic immunity) or to a kind of extra health/safety insurance that people purchase at will.
To be clear, Liu didn’t think that the creation of immunity devices was a good idea. He just argued that their creation did not seem implausible and so it was important to think about the ethical and social ramifications. Here, I want to suggest a simple variation on Liu’s thought experiment. What if, instead of immunity devices, we allow people to create self-sacrifice devices? These devices would also send a signal to a self-driving vehicle, but the meaning of the signal would be very different. It would inform the vehicle that the wearer of the device is willing to be sacrificed in the event of an unavoidable crash. This might be analogised to carrying an organ donor card, albeit with the not inconsiderable difference that instead of signalling your willingness to give up your organs after death you are signalling your willingness to sacrifice your life for the lives of others.
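To make the thought experiment concrete, here is a minimal sketch of the decision logic such a device might enable. Everything in it (the class names, the representation of the signal as a count of opted-in group members, and the fall-back rule of harm minimisation) is my own illustrative assumption, not a proposal drawn from Liu or anyone else in the literature:

```python
from dataclasses import dataclass

@dataclass
class PedestrianGroup:
    """A potential collision site in an unavoidable crash scenario."""
    label: str
    size: int        # number of people in the group
    volunteers: int  # members broadcasting a (hypothetical) self-sacrifice signal

def choose_collision_target(groups):
    """Pick the group to collide with, preferring groups composed entirely
    of people who have opted in via a self-sacrifice device. If no group
    has fully opted in, fall back to plain harm minimisation (smallest
    group). Both rules are stipulated for illustration only."""
    fully_willing = [g for g in groups if g.volunteers == g.size]
    if fully_willing:
        # Among fully willing groups, still harm the fewest people.
        return min(fully_willing, key=lambda g: g.size)
    # No usable opt-in signal: minimise the number of people harmed.
    return min(groups, key=lambda g: g.size)

# Example: an elderly couple who both wear the device vs. a group of children.
couple = PedestrianGroup("elderly couple", size=2, volunteers=2)
children = PedestrianGroup("children", size=4, volunteers=0)
print(choose_collision_target([couple, children]).label)  # prints "elderly couple"
```

Even this toy version exposes a design choice with ethical weight: what should the vehicle do when a group is only *partially* opted in? The sketch treats such a group as not having consented at all, but that stipulation is exactly the kind of thing the arguments below put pressure on.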
What should we think about the creation of such a device?
2. The Arguments for and against a Self-Sacrifice Device
You might think that the idea of a self-sacrifice device is absurd or abhorrent. But let’s just consider for a moment whether there are any good reasons to endorse the creation of such a device.
I can think of two. First, as you may know, there is a rich experimental literature on people’s attitudes to trolley dilemmas. In these experiments, the dilemmas are usually structured in such a way that the experimental subject has to choose between harming two or more people other than themselves. But in some experimental studies people have indicated that if they had the option, they would prefer to sacrifice themselves instead of sacrificing some other party (e.g. Sachdeva et al 2015; Di Nucci 2013). In other words, if someone has to be harmed in such a case, people would prefer it if they could bear the brunt of the harm themselves (though there are some inconsistencies in this). For what it is worth, whenever I discuss trolley-type dilemmas with students, I find that a significant proportion of students agree that self-sacrifice, if possible, would be the ‘right’ thing to do in such a case. One advantage of the self-sacrifice device is that it allows people to exercise this preference in unavoidable crash scenarios. So you could argue that the creation of such a device is a good thing because it gives people an option that they want to be able to exercise.
Second, and perhaps more importantly, there is a rich moral tradition suggesting that self-sacrifice is a noble deed. Think of the soldier who saves his/her comrades by diving on a grenade; think of the medical worker who cares for Ebola sufferers only to be struck down by the disease themselves. These people are celebrated in our culture. They went above and beyond the call of moral duty. They are moral heroes and heroines. We might argue that it would be a good thing to give people the option of noble self-sacrifice because it would allow them to exercise this extreme form of moral virtue. We might argue that this would be a particularly good thing in light of the fact that other suggested solutions to unavoidable crash scenarios are not hugely compelling (e.g. forcing some moral theory such as consequentialism on everyone; deciding by majority preference; or selecting outcomes at random).
But, but, but… There is also, clearly, a dark side to the idea of a self-sacrifice device. Indeed, there are several dark sides: reasons to think that the creation of such devices would not be a good thing. Let’s review some of them.
First, we might worry that the creation of a self-sacrifice device undermines the goodness of noble self-sacrifice. A noble self-sacrifice is a supererogatory act. Its goodness lies, to some extent, in the fact that it is an unforced, often spontaneous, decision. A self-sacrifice device might undermine this unforced spontaneity. People using the device would have to pre-commit to sacrificing themselves at some unknown (perhaps never-to-be-realised) future moment. Their capacity for spontaneous virtue might thus be compromised. More importantly, in some societies, the existence of such a device might pressure or force some people into sacrificing themselves against their will. For example, the historical norm in (Western) societies is that adult men ought to sacrifice themselves in order to protect women and children. If this norm continues to apply, we might expect adult men to face strong social pressure to use self-sacrifice devices. Thus we might worry that in wearing such devices they are not authentically expressing their moral agency but, rather, conforming to social stereotyping.
Second, in addition to social pressures, there may be a strong temptation to create legal pressures that force some people into wearing self-sacrifice devices. This is particularly true if such devices become commonplace and it is necessary to create a ranking system to differentiate between different wearers (i.e. to decide who gets sacrificed first in the event of an unavoidable crash). This would presumably require a points-based ranking, and it would be tempting for some governments to tie this into a system of social punishment. This might work like the Chinese social credit system(s). People might get docked points if they do something wrong, thus making it marginally more likely that they will be sacrificed in the event of an unavoidable crash. Of course, in this case we have moved beyond the world of self-sacrifice into the world of authoritarian social control: everyone might end up being required to wear a device that signals their social worth to machines that may use this information to distribute risks away from high-value individuals and onto low-value individuals. The point is that there is, arguably, a slippery slope from creating a self-sacrifice device to enabling such a system of social control. This might be one compelling reason not to create such a device.
Third, there would, presumably, be some formidable practical difficulties with the implementation of self-sacrifice devices. How do we guarantee that the signal sent from the device to the car is reliable and fast? Would the car have enough time to use the information in the crash scenario? Could the person wearing the device be singled out from other potential crash victims? What if they are embedded in a group of pedestrians? What if they are with their children? Practical engineering solutions would need to be found for each of these issues, and each involves important ethical choices.
Fourth, there would, presumably, be significant cybersecurity challenges raised by the existence of such devices. They could be hacked. A malicious agent could play around with the signals being sent back and forth between the cars and the devices, perhaps directing the car to collide with wearers even when there is no unavoidable crash. In other words, the mere existence of the device makes possible a whole range of malicious interferences. (Cybersecurity issues of a similar nature plague the entire field of autonomous vehicles).
Fifth, and finally, even if we grant that self-sacrifice is a good thing (and I grant that it is in certain cases), it’s not obvious that you need a self-sacrifice device to enable it. It would presumably still be open to some pedestrians (or drivers/passengers) to exercise a preference for self-sacrifice through other means. A pedestrian could jump in front of a car, for example, or a driver/passenger could take control of the steering wheel and crash the car into a wall (assuming the autonomous vehicle allows for such driver-takeover). The opportunities for self-sacrifice might be more limited in these cases, but that might not be a bad thing given the other risks discussed above.
So where does that leave us? There are probably more arguments that could be mustered on both sides, but based on this quick review I think, on balance, that the arguments against self-sacrifice devices are more compelling than the arguments in their favour. There is a prima facie case to be made for the creation of such devices, but this is negated by the many risks posed by their creation and by the fact that opportunities for self-sacrifice can be accessed in other ways.