Wednesday, June 24, 2020

Technological Change and Human Obsolescence: An Axiological Analysis




I have a new paper coming out. This one is about how rapid changes in technology might induce human obsolescence. Is this a good thing or a bad thing? I try to argue that, contrary to first impressions, it might be a good thing.

Details, including links to the pre-print version, are below.

Title: Technological Change and Human Obsolescence: an Axiological Analysis
Journal: Techné: Research in Philosophy and Technology
Links: Official; Philpapers; Researchgate; Academia
Abstract: Can human life have value in a world in which humans are rendered obsolete by technological advances? This article answers this question by developing an extended analysis of the axiological impact of human obsolescence. In doing so, it makes four main arguments. First, it argues that human obsolescence is a complex phenomenon that can take on at least four distinct forms. Second, it argues that one of these forms of obsolescence (‘actual-general’ obsolescence) is not a coherent concept and hence not a plausible threat to human well-being. Third, it argues that existing fears of technologically-induced human obsolescence are less compelling than they first appear. Fourth, it argues that there are two reasons for embracing a world of widespread, technologically-induced human obsolescence. 





Saturday, June 20, 2020

Robots, AI and the Moral Culture of Patiency

[This is the text version of a talk I delivered to the Swedish AI Society Conference, via Zoom, on the 17th of June 2020]

Will the increased use of robotics and AI change our moral culture? In this talk I want to suggest that it will. Specifically, I want to argue that the diffusion of robots and AIs into our political, social and economic lives will cause us to shift away from a moral culture of responsibility towards a culture of moral patiency.

The argument I put forward is tentative and suggestive only. I am not trying to predict the future today. I am, instead, trying to sketch a way of looking at it and understanding the impact that technology might have on our moral beliefs and practices. In some ways, it is this style of thinking about the future that I hope to defend, rather than the specific claims I make about it, and I defend it by showing how the style works rather than by merely talking about it.

I have three things I need to cover in the remainder of my talk: (a) what is a moral culture?; (b) how can we think about changes in moral cultures?; and (c) how might robots/AI cause a shift to a culture of moral patiency?


1. What is a moral culture?

The concept of a moral culture is common currency among sociologists and social psychologists. That said, it is not always well or precisely defined. Different theorists and commentators seem to mean slightly different things when they use the terminology. For present purposes, I will define “moral culture” in the following way:


Moral Culture = “A reasonably stable set of moral beliefs and practices, associated with an identifiable social collectivity/group, usually defined by a common core of key moral concepts”
 

This definition is inspired by, but not the same as, the definition offered by Vytautas Kavolis in his 1977 article “Moral Cultures and Moral Logics”. One thing that Kavolis claims in that article is that there is a distinction to be drawn between moral cultures and moral moods. The former are relatively stable, long-term equilibria in moral beliefs and practices; the latter are more short-term fashions. The distinction seems useful to me, but it is a matter of degree, not kind. Today’s moral mood may, under the right conditions, become tomorrow’s moral culture. Contrariwise, today’s moral mood might just be an aberration: a momentary destabilisation in social morality before the dominant culture reasserts itself. This is something worth keeping in mind throughout the following discussion. When I talk about the changes in social morality that might be brought about by robots/AI, am I highlighting a shift in short-term moral moods or a longer-term shift in moral cultures?

So far I have been talking about moral cultures in the abstract. Are there some actual examples I can point to in order to make the idea more meaningful? Indeed there are. Perhaps the most widely discussed contrast in moral cultures is the distinction drawn between moral cultures based on honour and those based on dignity:


Cultures of Honour: These are moral cultures in which the most important thing for individuals is attaining and maintaining the respect of their peers. This respect (or “honour” as it is usually called) is something that is fragile and can be lost. Individuals must constantly protect their honour (and possibly the honour of their families) from insult or attack. They must do this largely by themselves (i.e. through “self-help” mechanisms) and not by calling on the support of their peers or the state. Indeed, it may be cowardly or dishonourable to do so. The practice of duelling is one of the more famous manifestations of the culture of honour.
 
Cultures of Dignity: These are moral cultures in which everyone shares an equal, innate and inalienable moral status called “dignity”. In other words, everyone is owed some basic respect and moral protection, irrespective of who they are or what they have done. Dignity is not something that is fragile and susceptible to attack. These cultures tend to be more tolerant (partly because people feel less insecure in their moral standing) and place a greater emphasis on state intervention to resolve conflict.
 

One of the claims that has been made is that people living in Western, developed nations have, since the late 1700s, shifted from living in cultures of honour to living in cultures of dignity. We once lived in the world of gentlemanly duelling; we now live in the world of universal rights and equal protection of the law. That said, there are still some honour-based sub-cultures within these societies, and there are many other societies around the world that are still heavily focused on honour.

In some recent work, Bradley Campbell and Jason Manning have argued that we are undergoing another shift in our moral culture, away from a culture of dignity towards a culture of victimhood. In this new culture, people worry about being victimised by others. They are concerned about any threat to self-esteem or self-worth. Such threats must be neutralised with the help of some third-party authority. In some senses, the culture of victimhood is like a culture of honour, insofar as people are constantly worried about their moral standing. Where they differ is in how they resolve threats to moral standing. Honour cultures leave it up to individuals; victimhood cultures rely on third parties.

Campbell and Manning claim that the seeds of victimhood culture are being sown on today’s college campuses, with students increasingly emphasising their vulnerability and need for safe spaces. In this way, Campbell and Manning’s argument feeds into certain narratives about contemporary youth culture — that it is full of narcissistic snowflakes etc — that I don’t believe are entirely fair. That said, I don’t need to pass judgment on their argument in this talk. There is, clearly, some evidence for it, even if there is also countervailing evidence. All I will say is that even if they are right, victimhood may just be a short-term shift in the moral mood and not a sustained shift in moral culture.

For Campbell and Manning, the defining feature of the different moral cultures is how they understand the moral status of individuals and how they negotiate and resolve threats to moral status. Not all theories of moral cultures see things in the same way. The aforementioned Kavolis, for example, argued in his 1977 paper on moral cultures that there were four distinct modern moral cultures: (i) liberal; (ii) romantic-anarchist; (iii) nationalist; and (iv) ascetic-revolutionary. Now, you can certainly find evidence for these four moral outlooks in the historical record, but it is not obvious what structural properties they share.

The question then becomes: is there any way we can think more systematically about what a moral culture is and how a moral culture might change over time?


2. A Framework for Thinking about Moral Cultures

Here’s one suggestion. In his book Moral Psychology, the philosopher/psychologist Mark Alfano suggests that there are five key structural elements to human morality. Although Alfano seems to think of these as distinctive features of human life that are morally salient, I think we can treat them as different variables or parameters that get filled in and prioritised in different ways by different moral cultures.

The five elements are:


Patiency: Some people/entities in the world are moral patients, i.e. they can be harmed and benefitted by actions; they can suffer; they can flourish. They have basic moral standing or considerability.
 
Agency: Some people/entities in the world are moral agents, i.e. they have duties they must perform in order to respect moral patients; they can be held accountable or responsible for their behaviour toward others.
 
Sociality: Moral agents and patients live in groups and not as isolated individuals. Their actions can affect one another (e.g. harm and benefit one another). Alfano argues that sociality has degrees of iterative complexity. One agent P2 can do something that affects P1 (in some morally salient way); P3 can do something to P2 (who does something to P1); P4 can do something to P3 (who does something to P2 (who does something to P1)); and so on. A lot of the complexity to our moral lives stems from the importance we attach to these nested forms of sociality. Who do we count as being socially relevant? How far down the iterated nest of complexity must we go when we make moral judgments?
 
Reflexivity: Agents and patients don’t just interact with others, they interact with themselves. In other words, what they do has some moral relevance for themselves. For example, I can harm and benefit myself through my own actions; I can reflect on my own nature as an agent and my possible duties to myself.
 
Temporality: Agents and patients exist through time and relate to themselves and others over time. How they do so, and how they conceive of those temporal relations, affects moral beliefs and practices. For example, is my future self more important than another future person? Do I relate to my future self in the same way that I relate to a stranger? Derek Parfit wrote about the moral significance of how we answer those questions in his famous work Reasons and Persons.
 

 

Alfano claims that different moral theories vary in how they approach these five structural elements. Some think that only humans count as moral agents/patients; others think that animals or other entities (gods etc.) also count. Some attach great importance to our social and temporal relations with others; others do not.

One of the more interesting parts of Alfano’s book is when he argues that the leading Western moral theories — Kantianism, utilitarianism, virtue ethics etc — can be categorised and understood in terms of this framework. Take utilitarianism as the starting point. Alfano argues that utilitarianism is primarily concerned with moral patients. Utilitarians want to know who counts as a moral patient and how their happiness/pleasure can be maximised. Utilitarians are also concerned with sociality and social relations. Classical utilitarians think that all moral patients count equally in the utilitarian calculus. There can be no morally justified preferential treatment for someone in virtue of your social proximity to them. Some utilitarians, particularly in the wake of Derek Parfit’s work, are also deeply concerned with the future of moral patients. Indeed, a lot of the existential risk debate, for example, is taken up with people espousing a form of utilitarianism that is focused on the long-term future well-being of moral patients. Agency and reflexivity are not emphasised in utilitarian moral theory.


Contrast that with Kantianism. Kantian moral theory is primarily concerned with agency and reflexivity. It is focused on who counts as a moral agent and what their duties and responsibilities to themselves and each other might be. The basic idea in Kantian theory is that our moral duties and responsibilities can be derived through a process of self-reflection on what it means to be an agent. Kantians do care about sociality as well, insofar as they care about what we owe each other as members of a shared moral community, but it is a form of sociality that is viewed through the perspective of agency. Kantians do not care so much about temporality. Indeed, some strict forms of Kantianism suggest that we should not focus on the longer term consequences of our actions when figuring out our moral duties. Furthermore, Kantians care about moral patients to the extent that they are moral agents. To be a moral patient you must first be an agent, and then you are afforded a basic kind of moral dignity and respect. Thus, Kantianism is an agency-based moral theory through and through. Most of its key features are derived from its primary focus on agency.


I could go on. Alfano also categorises virtue ethics and care ethics using this framework. The former, he argues, is concerned with all five structural elements to some extent: the well lived life requires some moderate concern for everything. The latter, he argues, focuses on sociality and patiency, particularly on how moral patients depend on and care for one another. It takes issue with the Kantian focus on individual agency and responsibility.
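
To make the idea of the framework as a set of parameters a little more concrete, here is a minimal, purely illustrative sketch in Python. This is my own gloss, not anything from Alfano’s book: it simply treats a moral culture as a rough weighting over the five structural elements, with numbers invented to encode the verbal characterisations given above.

# Toy model (illustrative only): a moral culture as a weighting over
# Alfano's five structural elements. The numbers are my own rough
# rendering of the verbal descriptions above, not Alfano's.
from dataclasses import dataclass
from typing import Dict, List

ELEMENTS = ("patiency", "agency", "sociality", "reflexivity", "temporality")

@dataclass
class MoralCulture:
    name: str
    emphasis: Dict[str, float]  # element -> rough weight, 0 (ignored) to 1 (central)

    def central_elements(self, threshold: float = 0.7) -> List[str]:
        """Return the elements this culture treats as central."""
        return [e for e in ELEMENTS if self.emphasis.get(e, 0.0) >= threshold]

utilitarianism = MoralCulture("utilitarianism", {
    "patiency": 1.0, "agency": 0.2, "sociality": 0.8,
    "reflexivity": 0.2, "temporality": 0.8,
})

kantianism = MoralCulture("kantianism", {
    "patiency": 0.5, "agency": 1.0, "sociality": 0.6,
    "reflexivity": 0.9, "temporality": 0.2,
})

for culture in (utilitarianism, kantianism):
    print(culture.name, "->", culture.central_elements())
# utilitarianism -> ['patiency', 'sociality', 'temporality']
# kantianism -> ['agency', 'reflexivity']

The only point of the toy model is that the same five parameters can be filled in and prioritised differently, which is what lets different moral theories/cultures be located and compared within a common space.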

You may agree or disagree with Alfano’s categorisations. What I think is interesting, however, is the framework itself. Can it be used to understand the potential shift in moral cultures that might be precipitated by robotics/AI? I think it can.


3. How Might the Rise of Robotics/AI Cause a Change in Moral Cultures?

Let me set up the argument first by making a strong claim about the kind of moral culture we currently inhabit. As you may have intuited from the discussion of Alfano’s framework, it’s likely that none of us lives in a single, pure moral culture. Instead, we live inside a space of morally possible cultures whose limits are defined by the five structural elements. Within that space, different moral theories/cultures are constantly jostling for supremacy. Some dominate for a period of time, but there are always seeds of alternative possible moral cultures lying around waiting to germinate.

This complexity notwithstanding, I think it is reasonably fair to say that we — and by “we” I mean those of us living in Western, developed nations — live in moral cultures that are broadly Kantian in their nature. In other words, we live in cultures in which individual agency, responsibility and dignity are the key moral concepts. We view ourselves as fundamentally equal moral agents who owe each other a basic, unconditional duty of respect in virtue of that fact. We hold each other to account for failing to live up to our duties toward one another. We care about moral patiency too, of course. We view ourselves as moral patients — entities that can experience joy and suffering and so on — and we want to live “flourishing” lives. Nevertheless, one of the key features of what we think it takes to live a flourishing life is that we continue to exist as agents.

I’m sure some people will disagree with this. They will say that this moral culture does not describe their own personal moral views or, perhaps, the views of the people they interact with. That may well be the case. But, remember, I’m making a claim here about what I take the dominant moral culture to be, not what I think specific individuals or sub-cultures might believe. I feel about 75% confident that the moral culture in, say, European countries is broadly Kantian in its flavour. This appears to be reflected in the core human rights documents and legal frameworks of the EU.

If I am right about this, then one of the defining features of our current moral cultures is how we conceive of and relate to ourselves and one another as agents, first, and patients, second. Robots and advanced AIs disrupt this moral culture by altering our normal moral relationships. They do so first by changing the kinds of agents that we interact with and second by changing how we conceive of our own agency.

Robots and AIs are artificial agents. They take information from the world, process it and then use it to make some prediction or judgment about the world or, in the case of robots, perform some action in the world. They often do this by substituting for, or reducing the need for, human agency in the performance of certain functions. Take, as an obvious example, an automated vehicle (“self-driving car”). This piece of technology substitutes for a human agent in all, or at least some, key driving tasks. In a world in which there are many automated vehicles there are fewer interactions between human moral agents. This forces us to reconsider many of our default moral assumptions about this aspect of our lives. As robots and AI proliferate into other domains of life, we are forced to do more reconsidering.

The claim I wish to defend in this talk is that the rise of the robots/AI could cause a shift away from our basically Kantian, agency-centric moral culture to one in which moral patiency becomes the more important and salient feature of our moral lives. There would seem to be three major reasons for thinking that such a shift is likely.

First, the proliferation of robotic/AI agents has a tendency to corrode and undermine human agency. This is something I have written about quite a lot in my previous work. To be clear, I think the actual impact of these technologies on our agency is multifaceted and does not inevitably undermine it. Nevertheless, certain features of the technology push in that direction. In particular, reliance on such technologies tends to obviate the need for at least some forms of human agency: we typically use artificial agents when we do not want, or are unable, to do things for ourselves. Similarly, these technologies are increasingly being used in ways that nudge, manipulate, control or constrain human agency. As I put it in one of my previous papers, this could mean that our agency-like properties are undervalued and underemphasised, while our patiency-like attributes become our defining moral characteristic.

Second, robotic/AI agents occupy an uncertain moral status. Things wouldn’t change all that much if, instead of interacting with human moral agents, we interacted with machine moral agents. We would just be trading one moral partner for another. But it seems plausible to suppose that this won’t happen. Machines will be treated as agents but not fully moral ones — or, at least, not equal members of the Kantian kingdom of ends. This means we won’t tend to view them as responsible moral agents and we won’t view them as equivalent, duty-bearing members of our moral communities. To be clear, I am fully aware that the present and future moral status of robots/AIs is contested. There is lots of interesting work being done in social psychology on how people apply moral standards to machines and how you might design a robot to be viewed as a responsible moral agent. There is also a heated debate in philosophy about whether robots could be moral agents. Some people think they could be (at least someday) while others actively resist this suggestion. In a way, the existence of this controversy proves my point. It seems like you have to do a lot of technical design work and philosophical work to convince people that robots/AI might count as equal moral agents. Therefore, it is plausible to suppose that they pose a major disruption to any culture that presupposes the presence of moral agents.

Third, even if it is possible to design robots/AI so that they do not undermine human moral agency and are treated as human-equivalent moral agents, there may not be any desire or motivation to do so. It may be that if we create robots/AIs that support human agency and can be viewed as moral agents in their own right, then we lose many of the benefits that they bring to our lives. This is something that I have written about a few times in relation to explainable AI and self-driving vehicles. Others have presented similar analyses, particularly in relation to explainable AI. If this view is correct, then society may face a hard tradeoff: we can either have the benefits of robots/AI or our existing moral culture, but not both. If we choose the former, then we cannot avoid a disruption to our moral culture.

But what might that disruption look like? If I am right in thinking that it is our agency-centric moral culture that is challenged by robots/AI, then I think the end result will be a culture in which moral patiency becomes the dominant moral concern. I don’t have time to sketch out the full consequences of this change in this talk, nor, if I am honest, have I actually attempted to do so. Nevertheless, I think a patiency-centric moral culture would have the following defining features:


  • (a) It would be a more utilitarian culture, i.e. one in which optimising for benefit over harm would be the primary focus (and not ensuring responsible, accountable agency).
  • (b) It would be a culture in which we (humans) became more conscious of ourselves as moral patients, i.e. more conscious of the pleasures and pains we might experience, as well as the harms and, crucially, the risks to which we are exposed. In other words, it would exaggerate the risk-averse trend we already see in our present society.
  • (c) It would be a culture in which there is more emphasis on technology as the solution to our problems as moral patients, i.e. we turn to technology to protect us from risk and ensure our well-being.

In this last respect, the culture of patiency might be a little like the culture of victimhood that Campbell and Manning talk about in their work, except that instead of turning to third-party humans/institutions to resolve our moral problems, we turn to machines.

There is, already, an active resistance to this culture of moral patiency. In some ways, the entire field of AI ethics is an attempt to protect our agency-centric moral culture from the disruption I have described above. These efforts may be fully justified. But in this talk I am not trying to ascertain which moral culture is the best. I am just trying to suggest that it might be useful to think about our current predicament as one in which the dominant moral culture is being disrupted.