
Tuesday, April 6, 2021

From Mind-as-Computer to Robot-as-Human: Can metaphors change morality?




Over the past three years, I have returned to one question over and over again: how does technology reshape our moral beliefs and practices? In his classic study of medieval technology, Lynn White Jr argues that simple technological changes can have a profound effect on social moral systems. Consider the stirrup. Before this device was created, mounted warriors had to rely largely on their own strength (the “pressure of their knees” to use White’s phrase) to launch an attack while riding horseback. The warrior’s position on top of the horse was precarious and he was limited to firing a bow and arrow or hurling a javelin.

The stirrup changed all that:


The stirrup, by giving lateral support in addition to the front and back support offered by pommel and cantle, effectively welded horse and rider into a single fighting unit capable of violence without precedent. The fighter’s hand no longer delivered the blow: it merely guided it. The stirrup thus replaced human energy with animal power, and immensely increased the warrior’s ability to damage his enemy. Immediately, without preparatory steps, it made possible mounted shock combat, a revolutionary new way of doing battle. 
(White 1962, p 2)

 

This had major ripple effects. It turned mounted knights into the centrepiece of the medieval army. And since the survival and growth of medieval society was highly dependent on military prowess, these knights needed to be trained and maintained. This required a lot of resources. According to White, the feudal manor system, with its associated legal and moral norms relating to property, social hierarchy, honour and chivalry, was established in order to provide knights with those resources.

This is an interesting example of technologically induced social moral change. The creation of a new technology afforded a new type of action (mounted shock combat) which had significant moral consequences for society. The technology needed to be supported and sustained, but it also took on new cultural meanings. Mounted knights became symbols of strength, valour, honour, duty and so forth. They were celebrated and rewarded. The entire system of social production was reoriented to meet their needs. There is a direct line that can be traced from the technology through to this new ideological moral superstructure.

Can something similar happen with contemporary technologies? Is it already happening? In the remainder of this article I want to consider a case study: social robots and the changes they may induce in our moral practices. I will argue that there is a particular mechanism through which they may change those practices that is subtle but significant. Unlike the case of the stirrup — in which the tool changed the social moral order through the new possibilities for action it enabled — social robots might change the social moral order by changing the metaphors that humans use to understand themselves. In particular, the more humans come to view themselves as robot-like (as opposed to robots being seen as human-like), the more likely it is that we will adopt a utilitarian mode of moral reasoning. I base this argument on the theory of hermeneutic moral mediation and some recent findings in human-robot interaction research. The argument is highly speculative but, I believe, worth considering.

Terminological note: By ‘robot’ I mean any embodied artificial agent with the capacity to interpret information from its environment and act in response to that information. By ‘social robot’ I mean any robot that is integrated into human social practices and responds to human social cues and behaviours, e.g. care robots, service robots. Social robots may be very human-like in appearance or behaviour, but they need not be. For example, a robot chef or waiter in a restaurant might not look human at all but may still respond dynamically and adaptively to human social cues and behaviours.


1. Hermeneutic Moral Mediation

In arguing that technology might alter human moral beliefs and practices, it is important to distinguish between two different understandings of morality. On the one hand, there is ‘ideal morality’. This is the type of morality studied by moral philosophers and ethicists. It consists of claims about what humans really ought to value and ought to do. On the other hand, there is ‘social morality’. This is the type of morality practiced by ordinary people. It consists in people’s beliefs about what they ought to value and what they ought to do. Social morality and ideal morality may not align with each other. Indeed, moral philosophers often lament the fact that they don’t. In considering how technology might change human morality, I am primarily interested in how it might change social morality, not ideal morality.

That said, there can be connections between ideal morality and social morality. Obviously, moral philosophers often use claims about ideal morality to criticise social morality. The history of moral reform is replete with examples of this, including anti-slavery arguments in the Enlightenment era, pro-suffragette arguments in the late 1800s, and pro-same-sex marriage arguments in the late 1990s and early 2000s. But changes in social morality may also affect ideal morality, or at least our understanding of ideal morality. If people adopt a certain moral practice in reality, this can encourage moral philosophers to reconsider their claims about ideal morality. There is often a (suspicious?) correlation between changes in social morality and changes in theories of ideal morality.

How can technology induce changes in social morality? There are several theories out there. Peter-Paul Verbeek’s theory of technological moral mediation is the one I will rely on in this article. Verbeek argues that technologies change how humans relate to the world and to themselves. To use the academic jargon: they mediate our relationships with reality. This can have moral effects.

Verbeek singles out two forms of mediation, in particular, for their moral impact: (i) pragmatic mediation and (ii) hermeneutic mediation. Pragmatic mediation arises when technology adds to, or subtracts from, the morally salient choices in human life. This forces us to consider new moral dilemmas and new moral questions. The impact of the stirrup on medieval warfare is an example of this. It made mounted knights more effective in battle and military commanders were thus forced to decide whether to use these more effective units. Given the overwhelming value attached to military success in that era, their use became a moral necessity: to not use them would be morally reckless and a dereliction of duty. Hermeneutic mediation is different. It arises when technology changes how we interpret the world, adding a new moral perspective to our choices. Verbeek argues that obstetric ultrasound is a classic example of hermeneutic moral mediation in action because the technology presents the foetus-in-utero to us as an independent being, situated inside but still distinct from its mother, and capable of being treated or intervened upon by medical practitioners. This alters our moral understanding of pre-natal care.

[If you are interested, I wrote a longer explanation of Verbeek’s theory here]

The widespread diffusion of social robots will undoubtedly pragmatically mediate our relationship with the world. We will face choices as to whether to deploy care robots in medical settings, whether to outsource tasks to robots that might otherwise have been performed by humans, and so on. But it is the hermeneutic effects of social robots that I want to dwell on. I think the diffusion of social robots could have a profound impact on how we understand ourselves and our own moral choices.

To make this point, I want to consider the history of another technology.


2. The Mind-as-Computer Metaphor

The computer was the defining technology of the 20th century. It completely reshaped the modern workplace, from the world of high finance, to scientific research, to graphic design. It also enabled communication and coordination at a global scale. In this way, the computer has pragmatically mediated our relationship with the world. We now think and act through the abilities that computers provide. Should I send that email or not? Should I use this bit of software to work on this problem?

Not only has the computer pragmatically mediated our relationship to the world, it has also hermeneutically mediated it. We now think of many processes in the natural world as essentially computational. Nowhere is this more true than in the world of cognitive science. Cognitive scientists try to figure out how the human mind works: how is it that we perceive the world, learn from it and act in it? Cognitive scientists have long used computers to help model and understand human cognition. But they go further than this too. Many of them have come to see the human mind as a kind of computer — to see thinking as a type of computation.

Gerd Gigerenzer and Daniel Goldstein explore this metaphorical turn in detail in their article ‘The Mind as Computer: The Birth of a Metaphor’. They note that it is not such an unusual turn of events. Scientists have always used tools to make sense of the world. Indeed, they argue that the history of science can be understood, at least in part, as the emergence of theories from the applied use of tools. They call this the ‘tools-to-theories’ heuristic. A classic example would be the development of the mechanical clock. Not long after this device was invented, scientists (or natural philosophers as they were then called) started thinking about physical processes (motion, gravitation, etc.) in mechanical terms. Similarly, when statistical tools were adopted for use in psychological experimentation in the twentieth century, it didn’t take long before psychologists started to see human psychology as a kind of statistical analysis:


One of the most widely used tools for statistical inference is analysis of variance (ANOVA). By the late 1960s, about 70% of all experimental articles in psychological journals already used ANOVA (Edgington 1974). The tool became a theory of mind. In his causal attribution theory, Kelley (1967) postulated that the mind attributed a cause to an effect in the same way that a psychologist does — namely, by performing an ANOVA. Psychologists were quick to accept the new analogy between mind and their laboratory tool. 
(Gigerenzer and Goldstein 1996, 132).

 

The computational metaphor followed a similar path. The story is a fascinating one. As Gigerenzer and Goldstein note, the early developers of the computer, such as von Neumann and Turing, did work on the assumption that the devices they were building could be modelled on human thought processes (at either a biological or behavioural level). But they saw this as a one-way metaphor: the goal was to build a machine that was something like a human mind. In the 1960s and 70s, the metaphor turned around on itself: cognitive scientists started to see the human mind as a computational machine.

One of the watershed moments in this shift was the publication of Allen Newell and Herbert Simon’s Human Problem Solving in 1972. In this book, Newell and Simon outlined an essentially computational model of how the human mind works. In an interview, Simon documented how, through his use of computers, he started to think of the human mind as a machine that ran programs:


The metaphor I’d been using, of a mind as something that took some premises and ground them up and processed them into conclusions, began to transform itself into a notion that a mind was something that took some program inputs and data and had some processes which operated on the data and produced some output. 
(quoted in Gigerenzer and Goldstein 1996, 136)

 

This theory of the mind was initially resisted but, as Gigerenzer and Goldstein document, when the use of computational tools to simulate human cognition became more widespread, it was eventually accepted by the mainstream of cognitive scientists. So much so that some cognitive scientists find it hard to see the mind as anything other than a computer.


3. The Human-as-Robot Metaphor

What significance does this have for robots and the moral transformations they might initiate? Well, in a sense, the robot is a continuation of the mind-as-computer metaphor. Robots are, after all, essentially just embodied computational devices, capable of receiving data inputs and processing them into actions. If the mind is seen as a computer, is it not then natural to see the whole embodied human as something like a robot?

We can imagine a similar metaphorical turn to the one outlined by Gigerenzer and Goldstein taking root, albeit over a much shorter timeframe, since the computational metaphor is already firmly embedded in popular consciousness. We begin by trying to model robots on humans (already the established practice in social robotics); then, as robots become common tools for understanding human social interactions, the metaphor flips around: we start to view humans as robot-like themselves. This is already happening to some extent, and some people (myself included) are comfortable with the metaphor; others much less so.

This thought is not original to me. Henrik Skaug Sætra, in a series of papers, has remarked on the possible emergence of ‘robotomorphy’ in how we think about ourselves. Many people have noted how humans tend to anthropomorphise robots (see e.g. Kate Darling’s work), but as robots become common, Sætra argues, we might also tend to ‘robotomorphise’ ourselves. In a paper delivered to the Love and Sex with Robots Conference in December 2020, he remarks:


Roboticists and robot ethicists may similarly lead us to a situation in which all human phenomena are understood according to a computational, mechanistic and behaviourist logic, as this easily allows for the inclusion of robots in such phenomena. By doing so, however, they are changing the concepts. In what follows, our understanding of the concept, and of ourself, changes accordingly. 
(Sætra 2020, 10)*

 

But how does it change? Sætra has some interesting thoughts on how robot ethicists might use our interactions with robots to encourage a behaviourist understanding of human social interactions. This could lead to an impoverished (he says ‘deficient’) conception of certain human relationships, including loving relationships. Humans might favour efficient and psychologically simple robot partners over their more complex human alternatives. Since I am a defender of ‘ethical behaviourism’, I am, no doubt, one of the people guilty of encouraging this reconceptualisation of human relations (for what it’s worth, I don’t think this necessarily entails an impoverished conception of love; what I do think is that it is practically unavoidable when it comes to understanding our relationships with others).

Fascinating though that may be, I want to consider another potential transformation here. This transformation concerns the general moral norms to which we are beholden. As moral psychologists have long noted, the majority of humans tend to follow a somewhat confusing, perhaps even contradictory, moral code. When asked to decide what the correct course of action is in moral dilemmas, humans typically eschew a simple utilitarian calculus (avoid the most suffering; do the most good) in favour of a more complex, non-consequentialist moral code. For example, in Joshua Greene’s various explorations of human reasoning in trolley-like scenarios (scenarios that challenge humans to sacrifice one person for the greater good), he finds that humans care about intentions, physical proximity to the victim and other variables that are not linked to the outcome of our actions. In short, it seems that most people think they have act-related duties — not to intentionally harm another, not to intentionally violate trust, not to intentionally violate an oath or duty of loyalty to another, and so on — that hold firm even when following these duties leads to a worse outcome for all. This isn’t always true. There are some contexts in which outcomes are morally salient and override the act-related duties, but these are relatively rare.

Recent investigations into human moral judgment of robots paint a different picture. Studies by Bertram Malle and his colleagues, for example, suggest that we hold robots to different moral standards. In particular, we expect them to adopt a more utilitarian logic in their moral decision-making. They should aim for the greater good and they are more likely to be negatively evaluated if they do not. We do not (as readily) expect them to abide by duties of loyalty or community. Malle et al’s findings have been broadly confirmed by other studies into the asymmetrical moral norms that humans apply to robots. We think robots should focus on harm minimisation; we don’t judge them based on their perceived intentions or biases. For example, Hidalgo et al’s recent book-length discussion of a series of experiments done on over 6000 US subjects, How Humans Judge Machines, seems to be broadly consistent with the moral asymmetry thesis.

Now, I would be the first to admit that these findings are far from watertight. They are, for the most part, based on vignette studies in which people are asked to imagine that robots are making decisions and not on interactions with real-world robots. There are also many nuances to the studies that I cannot do justice to here. For instance, there are some tentative findings suggesting that the more human-like a robot’s actions, and/or the more harm it causes, the more inclined we are to judge it in a human-like way. This might indicate that the asymmetry holds, in part, because we currently dissociate ourselves from robots. 

Nevertheless, I think these findings are suggestive and they do point the way toward a hermeneutic moral effect that the widespread deployment of robots might have. If it becomes common wisdom for us to interpret and understand our own behaviour in a robot-like manner, then we may start to hold ourselves to the same moral standards as machines. In other words, we may start to adopt a more outcome-oriented utilitarian moral framework and start to abandon our obsession with intentions and act-related duties.

Three factors convince me that this is a plausible potential future. First, there is a ready-made community of consequentialist and utilitarian activists that would welcome such a moral shift. Utilitarianism has been a popular moral framework since the 1800s and has resonance in corporate and governmental sectors. The effective altruist movement, with its obsessive focus on doing the most good through evidence-based personal decision-making, may also welcome such a shift. Second, there is some initial evidence to suggest that humans adapt their moral behaviours in response to machines. Studies by Ryan Jackson and colleagues on natural language interfaces, for instance, suggest that if a machine asks a clarificatory question implying a willingness to violate a moral norm, humans become more willing to violate the same norm. So we can imagine that if machines both express and act in ways that violate non-consequentialist norms, we may, in turn, be more willing to do the same. Finally, there are now some robot ethicists who encourage us to embrace the metaphorical flip, i.e. to make human moral behaviour more robot-like rather than making robot moral behaviour more human-like. One interesting example of this comes from Sven Nyholm and Jilles Smids’ article on the ethics of autonomous vehicles in ‘mixed traffic’ scenarios, i.e. where the machines must interact with human-driven vehicles. A common approach to the design of mixed traffic scenarios is to assume that the machines must adapt to human driving behaviour, but Nyholm and Smids argue that sometimes it might be preferable for the adaptation to go the other way. Why? Because machine driving, with its emphasis on harm-minimisation and strict adherence to the rules of the road, might be morally preferable. More precisely, they argue that if automated driving is provably safer than human driving, then humans face a moral choice. Either they switch to automated driving, or they adapt to automated driving standards in their own behaviour:


If highly automated driving is indeed safer than non-automated conventional driving, the introduction of automated driving thereby constitutes the introduction of a safer alternative within the context of mixed traffic. So if a driver does not go for this safer option, this should create some moral pressure to take extra safety-precautions when using the older, less safe option even as a new, safer option is introduced. As we see things, then, it can plausibly be claimed that with the introduction of the safer option (viz. switching to automated driving), a new moral imperative is created within this domain [for human drivers]. 
(Nyholm and Smids 2018)

 

If we get more arguments like this, in more domains of human and robot interaction, then the net effect may be to encourage a shift to a robot-like moral standard. This would complete the hermeneutic moral mediation that I am envisaging.


4. Conclusion

None of this is guaranteed to happen, nor is it necessarily a good or bad thing. Once we know about a potential moral transformation, we can do something about it: we can either speed it up (if we welcome it) or try to shut it down (if we do not). Nevertheless, speculative though it may be, I do think that the mechanism I have discussed in this article is a plausible one and worth taking seriously: by adopting the human-as-robot metaphor, we may be inclined to favour a more consequentialist, utilitarian set of moral norms.


* I’m not sure if this paper is publicly accessible. Sætra shared a copy with me prior to the conference.

 
