Friday, August 2, 2019

The Robotic Disruption of Morality




We increasingly collaborate and interact with robots and AIs. We use them to perform tasks and we also find that our choices and opportunities are affected by their operations. The increasing prevalence of such interactions has led to an explosion of interest in AI ethics and robo-ethics. Squads of academics, technologists and policy-makers are frantically asking how we should use ethical principles to guide and constrain the operation of robots and AIs. The prevailing belief amongst most of these actors is that long-standing human moral beliefs and practices should constrain the operation of these new technologies.

There is, however, another kind of inquiry we can conduct into the impact of robotics and AI on morality. Instead of asking how our moral beliefs and practices should constrain the operation of the technology, we can ask whether and to what extent the technology is changing our moral beliefs and practices. Admittedly, there are plenty of people interested in asking this question, but it seems to me to be the road that is currently less travelled. That’s why, in the remainder of this article, I want to share some thoughts that contribute to this second inquiry.

To be more precise, I want to outline one naturalistic theory of how human morality came into being (Michael Tomasello’s theory). I then want to consider how this could be disrupted or undermined by the growing prevalence of robotics and AI. I’m trying to be tentative, not dogmatic. I’m very interested in feedback. If you think this is an interesting line of inquiry, and have thoughts on how it could be developed further, please leave a comment at the end.


1. Tomasello’s Theory of Human Morality
I’ll start by setting out Tomasello’s theory. The theory comes from the book A Natural History of Human Morality. It is an attempt to explain how human morality came into being over the course of our evolutionary and cultural history. The theory is interesting and probably represents the best current attempt to come up with a naturalistic account of the origins of human morality. One of the most impressive things about it is the range of empirical evidence that Tomasello draws upon to support his theory, much of it coming from his own lab.

Unfortunately, I am not going to discuss any of that empirical evidence (read the book! It’s good). Instead, I’m going to focus on the general structure of the theory. What exactly does Tomasello think happened in order for humans to develop their contemporary moral beliefs and practices?

To answer that, you first need to know something about what Tomasello understands by the phrase ‘human morality’. Tomasello’s main focus is on moral norms and the practices associated with them. Humans believe that they have duties and obligations; that they ought to fulfil their duties; that people deserve to be blamed if they don’t live up to their duties; and that people deserve to be treated fairly if they do. These beliefs, and their associated practices, are what Tomasello is interested in when it comes to explaining human morality. How did they come into being?

Tomasello argues that they came into being as the result of two key transitions. The first key transition was the rise of cooperative hunting and foraging. One of the distinctive features of humans is our willingness to cooperate with one another to achieve joint goals. This sets us apart from our ape cousins. For example, chimpanzees will sometimes form hunting parties that appear to work together towards a common end, but these alliances are usually feeble and easily broken; humans form more sustained cooperative partnerships (Tomasello has performed several experiments showing that our ape cousins are not ‘natural born’ cooperators in the same way that we are).

But how do human cooperative partnerships work? Take the case of two hunters working together to track and kill a deer. Tomasello argues that their partnership is an exercise in joint agency. They imagine that they are both part of a joint ‘mind’ that is working together toward a common goal. They each have their own distinctive roles in relation to that common goal, but these roles are interchangeable and conceived as being equally important. This gives rise to a distinctive ‘second personal’ psychology. Each hunter sympathises with the position of the other hunter and treats them as they would treat themselves. In other words, they each think that the other deserves a fair share of the spoils of the hunt; they don’t just grab all they can for themselves. In addition to this, each of them understands that they have duties with respect to the common goal (‘role responsibilities’), and if they fail to live up to those duties the other hunter can hold them to account. I’ve tried to illustrate this model below.




A lot of what we need to sustain normative beliefs and practices is present as a result of this first transition. Nevertheless, Tomasello argues that there is another important transition responsible for modern moral norms. After the transition to cooperative partnerships, humans also started to form cooperative groups. These groups also worked together through joint agency but, crucially, they sometimes competed with other cooperative groups. To survive this competition, the groups had to form institutional superstructures that promulgated, policed and enforced a common set of normative beliefs and practices.

This, in turn, gave rise to the complex moral psychology that most of us now share. This psychology consisted in a range of moral emotions that reinforced the institutional superstructure. Some of these moral emotions were self-directed, e.g. feelings of guilt and shame when norms were broken and feelings of self-respect when norms were upheld. Some were other-directed, e.g. feelings of trust and respect when others upheld the norms, and feelings of resentment and blame when they did not.

In short, Tomasello argues that modern human morality emerged from two important developments in human psychology (a) our capacity to take the second personal stance, i.e. to sympathise with the other and view them as an equivalent agent and (b) the complex suite of moral emotions that goes with this. Suffice to say there is a lot more detail in the book about how these things work and how they came into being. Hopefully, this overview is enough to give you the gist of the theory.


2. The Robotic Disruption of Human Morality
From my perspective, the most interesting aspect of Tomasello’s theory is the importance he places on the second personal psychology (an idea he takes from the philosopher Stephen Darwall). In essence, what he is arguing is that all of human morality — particularly the institutional superstructure that reinforces it — is premised on how we understand those with whom we interact. It is because we see them as intentional agents, who experience and understand the world in much the same way as we do, that we start to sympathise with them and develop complex beliefs about what we owe each other. This, in turn, was made possible by the fact that humans rely so much on each other to get things done.

This raises the intriguing question: what happens if we no longer rely on each other to get things done? What if our primary collaborative and cooperative partners are machines and not our fellow human beings? Will this have some disruptive impact on our moral systems?

The answer to this depends on what these machines are or, more accurately, what we perceive them to be. Do we perceive them to be intentional agents just like other human beings, or are they perceived as something else — something different from what we are used to? There are several possibilities worth considering. I like to think of these possibilities as being arranged along a spectrum that classifies robots/AIs according to how autonomous or tool-like they are perceived to be.

At one extreme end of the spectrum we have the perception of robots/AIs as tools, i.e. as essentially equivalent to hammers and wheelbarrows. If we perceive them to be tools, then the disruption to human morality is minimal, perhaps non-existent. After all, if they are tools then they are not really our collaborative partners; they are just things we use. Human actors remain in control and they are still our primary collaborative partners. We can sustain our second personal morality by focusing on the tool users and not the tools.

At the other extreme end of the spectrum we have the perception of robots/AIs as fully autonomous agents, independent of their human creators and users (if, indeed, they even have readily identifiable creators and users). This could be quite disruptive to our second personal morality since it means we cannot look directly to those human creators and users to sustain our moral norms. But this all depends on how we understand the autonomous agency of robots/AIs. If we understand it to be essentially the same as human agency — in other words, if we assume that robots/AIs have the same kinds of intentional states (beliefs, desires etc.) underlying their agency — then the disruption may be quite minimal. We will not be forced to deal with ontologically distinct collaborative partners. Robots/AIs will be just like the human collaborative partners we are used to: we can continue to apply our familiar second personal morality to them.

Many people, however, are uncomfortable with this idea. They do not think that robots can (perhaps ever) share our intentional psychology. This means robots should never be perceived as being equivalent to human collaborative partners. So if robots/AIs do attain autonomous agency, it must be a wholly different and unfamiliar kind of autonomous agency. This is the form of perceived autonomous agency that could be most disruptive to our second personal morality. It would mean that we end up collaborating and interacting with robots/AIs on a regular basis, but we cannot apply our familiar moral frameworks to those interactions. We cannot respect or trust robots to uphold their duties, nor can we resent or blame them when they do wrong. The traditional moral norms find no purchase. This might be seen as a good thing by those who dislike our traditional moral frameworks (particularly people who dislike the psychology of blame and retribution that seems to go with it), but others will be more disconcerted.

In between these two extremes there is, of course, a range of intermediate states. These are states in which robots/AIs are perceived as being partly tool-like and partly autonomous (and, perhaps, as sharing some of our intentional psychology but not all of it). For what it is worth, I believe we are currently somewhere in this intermediate range. I cannot pinpoint our exact location, and it probably varies depending on the specific form of robot/AI in which we are interested, but I can see some tensions emerging for our traditional second personal morality. You can see this most clearly in the debate about the ‘responsibility gap’ in relation to robotic weapons and cars. Some people cling to the traditional model and urge us to see these technologies as essentially tool-like in nature. Thus we can continue to focus our moral energies on the humans who control and shape these technologies. No disruption to worry about. Others, admittedly a minority, urge us to accept robots/AIs as potentially autonomous agents and then differ on the disruptive consequences of this, depending on how they understand machine autonomy.




3. Conclusion
That’s the gist of the idea. Does it make sense? I’m not sure. Clearly more work would need to be done on the exact mechanisms underlying our second personal morality and how exactly they might be disrupted by robots/AI. Furthermore, it would be worth addressing the longer-term consequences of this disruption. Is it really a deep problem or is it an opportunity? I would welcome further exploration of this idea.

Before I wrap up, though, I want to make two interpretive points. First, in case it wasn’t clear from the foregoing, I don’t think the analysis I have offered hinges on the actual ontological status of robots/AIs. In other words, I don’t think it really matters whether robots/AI actually are fully autonomous agents or have an intentional psychology. What I think matters is what they are perceived to be. Obviously, there is some relationship between perception and reality, but it is not tight and its looseness could create problems for our moral frameworks even if the actual reality does not.

Second, I want to be clear that I don’t think developments in robotics and AI are the only things that threaten our second personal morality. Philosophical theories of human behaviour that are naturalistic, deterministic and reductionistic also pose challenges for the legitimacy of second personal morality. These challenges have been widely debated and discussed, but they remain reasonably esoteric and divorced from people’s everyday lives. What interests me about the disruptive impact of robots/AIs is that it is more immediate and practically salient. People now have to interact with and collaborate with these technologies. This means questions about the ontological status of those technologies need to be resolved on a day-to-day basis, and so the disruptive impact of those technologies on our moral frameworks could be far more real than that of abstract philosophical concepts and debates.




3 comments:

  1. No deep thoughts, I'm afraid. Merely that we already interact with other animals on this exact dimension, from tool to dignified, and we already know reasonably well that they don't take a "second personal stance", but we still give them a certain amount of standing wrt autonomy and intentionality. I quite like Karl Schroeder's idea that AIs can borrow intentionality from animals they are designed to identify with (I was considering the analogy of those sheepdogs who take up the life of being a sheep) or inanimate natural objects, say a landform or an environment, where they will become a preserving ecological "force" (he now uses the term "deodand" for such an AI).

    1. It's a good point and I am embarrassed not to have mentioned it in the post.

      I suspect animals exist largely in the intermediate range (at least nowadays, since we take a less and less instrumentalist approach to them). Given this, and given that we have coexisted with animals for such a long time, it might put paid to the notion that significant disruption occurs as a result of this mode of interaction. That said, I'd be interested in reading more about how people who rely heavily on animals for day-to-day activities (e.g. blind people with guide dogs) understand their relationships with them. I'm sure work must have been done on this.

  2. There are rich pickings for an empirical research program here. I'd be particularly interested in how the introduction of smart speakers in many homes affects the development of moral intuitions in children. Children can give orders to smart speakers in ways that would be frowned upon when talking to the adults whose voices those speakers imitate. Children refer to the speaker as "the robot" and expect it to obey without question. This is quite different to the way they understand animals. There are well-established experimental paradigms for examining moral development that could be applied here.
