
Friday, April 30, 2021

Virtual Reality and the Meaning of Life




Here's a new draft paper. This one is about whether it is possible to live a meaningful life in virtual reality. It is set to appear in the Oxford Handbook of Meaning in Life, which is edited by Iddo Landau. I'm not sure when this book will be published, but you can access a final draft version of the chapter at the links provided below.


Title: Virtual Reality and the Meaning of Life

Links: Philpapers; Researchgate; Academia

Abstract: It is commonly assumed that a virtual life would be less meaningful (perhaps even meaningless). As virtual reality technologies develop and become more integrated into our everyday lives, this poses a challenge for those that care about meaning in life. In this chapter, it is argued that the common assumption about meaninglessness and virtuality is mistaken. After clarifying the distinction between two different visions of virtual reality, four arguments are presented for thinking that meaning is possible in virtual reality. Following this, four objections are discussed and rebutted. The chapter concludes that we can be cautiously optimistic about the possibility of meaning in virtual worlds.


 

Wednesday, April 28, 2021

90 - The Future of Identity



What does it mean to be human? What does it mean to be you? Philosophers, psychologists and sociologists all seem to agree that your identity is central to how you think of yourself and how you engage with others. But how are emerging technologies changing how we enact and constitute our identities? That's the subject matter of this podcast with Tracey Follows. Tracey is a professional futurist. She runs a consultancy firm called Futuremade. She is a regular writer and speaker on futurism. She has appeared on the BBC and is a contributing columnist with Forbes. She is also a member of the Association of Professional Futurists and the World Futures Studies Federation. We talk about her book The Future of You: Can your identity survive the 21st Century?

You can download the podcast here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

 

Show Notes


Topics covered in this episode include:

  • The nature of identity
  • The link between technology and identity
  • Is technology giving us more creative control over identity?
  • Does technology encourage more conformity and groupthink?
  • Is our identity being fragmented by technology?
  • Who controls the technology of identity formation?
  • How should we govern the technology of identity formation in the future?

Relevant Links



Wednesday, April 21, 2021

Has Technology Changed the Moral Rules Regarding Sex and Marriage?


Did the invention of the pill change the social moral rules about premarital sex?


We all face decision problems in our daily lives. We have preferences and we have options. These options come with different ratios of costs and benefits, measured relative to our preferences. If we are broadly rational, we will tend to pick the options in which the benefits outweigh the costs — if not in the short term then at least in the long term. These preferred options will become our personal norms.

But we don’t make decisions in a vacuum. Our choices affect others and others face similar problems to us. If we all face similar decision problems, and we all pick the same options in those problems (or put in place mechanisms to enforce the same options) then those options may end up forming the basis for our social moral norms. This is, in fact, a popular theory for understanding how humans developed social moral norms.

Technologies alter our decision problems. When a new technology comes along, it can change our options and the ratio of benefits to costs associated with those options. This can give rise to new preferred choices and hence to new social moral norms.

Is it possible to think about the relationship between decision dynamics and social moral change in a more systematic way? That’s the question I try to answer in the remainder of this article. I do so by examining some economic models of human decision problems associated with sex and marriage and some claims made by economists concerning the effect of technology on those decision problems. To be more precise, I will look at how changes to the technology of contraception (allegedly) increased the social permissibility of premarital sex. I will also look at how the development of labour-saving household machines (washing machines) may have changed the moral purpose of marriage.

(Full disclosure: I originally encountered these two examples in Marina Adshade's work on the economics of sex and marriage. I looked at the original research papers after reading her account).


1. Social Norms and Decision Problems

Before I get into the specific examples I want to consider the impact of technology on decision problems at a more abstract level. This will provide a general model for how technology might induce social moral change. This model will be made more concrete with the two specific case studies.

As noted in the introduction, we all face different decision problems. Some are more profound than others. Should I call my partner after I get off the plane? Should I have another cup of coffee? Should I respond to that email now or later? In each of these decision problems, three variables are key: (i) the available options; (ii) the outcomes associated with these options and (iii) our preferences/values over these outcomes (which one gives us more of what we want/value?). Another important variable is the probability attaching to each outcome. Since not everything in life is a dead cert, we often need to factor probabilities into how we weigh up the options. If I knew I was going to win the lottery, then I would definitely buy a ticket, but since my chance of winning the lottery is infinitesimally small, I’d be a fool to do so.

Although many people are sceptical of rational choice theory these days, this is largely because we conflate an ideal form of rationality (what people ought to prefer if they thought like economists and had a perfect grasp of probability theory) with the actual form of rationality adhered to by most people. With modifications in place to account for how people actually decide, I think rational choice theory can be a reasonably good first approximation for understanding how people can and do make decisions. In particular, if you factor in the fact that people’s preferences over outcomes can differ wildly from what classical economic theorists might suppose (e.g. people can value long-term friendship over making money), and that people act on their own perceptions of what is likely to occur and not on what will really happen, then you have a rough-and-ready model for understanding human decision-making.

That model looks something like this. The decision maker is represented by a node marked with D (for decision-maker). They face branching choices represented by lines emanating out from D. These are their options. For illustrative purposes, two options are displayed in the diagram below but, in principle, a decision-maker could face a large number of options. Each of these options leads to some outcomes. These outcomes contain a mixture of benefits and costs (from the perspective of D). If D is broadly rational, they will tend to choose the options with the best ratio of benefits to costs, i.e. the most beneficial or the least costly.



One point worth noting about this model is that the decision-maker need not be an individual human being. It’s natural to think about them this way since we often use models like this to understand our own choices, but in principle D could be a collection of individuals or, perhaps, an entire society. This becomes important when we look at the example of marriage, below.
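To make the model a little more concrete, here is a minimal sketch of this kind of expected-value reasoning in Python. The options, probabilities and payoffs are entirely made up for illustration (they are not drawn from any of the research discussed below); the point is simply to show how adding a new, cheaper option can flip which choice comes out on top.

```python
# A minimal sketch of the decision model described above.
# All options, probabilities, benefits and costs are hypothetical illustrations.

def expected_value(outcomes):
    """Each outcome is (probability, benefit, cost); return the expected net payoff."""
    return sum(p * (benefit - cost) for p, benefit, cost in outcomes)

def choose(options):
    """Pick the option whose outcomes have the highest expected net payoff."""
    return max(options, key=lambda name: expected_value(options[name]))

# The decision-maker D faces two options, each leading to probabilistic outcomes.
options = {
    "call from payphone": [(0.9, 5, 7), (0.1, 0, 7)],  # call gets through / doesn't; high hassle cost either way
    "don't call":         [(1.0, 0, 1)],               # small cost: a slightly worried partner
}
print(choose(options))  # -> "don't call" under these made-up numbers

# A new technology (the cell phone) adds an option with a much lower cost.
options["call from cell phone"] = [(0.95, 5, 0.5), (0.05, 0, 0.5)]
print(choose(options))  # -> "call from cell phone": the new option becomes the preferred choice
```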

How does technology change our decision problems? Technology changes our decision problems by doing two things: (i) it gives us new options and/or (ii) it changes the ratio of costs to benefits associated with our options. Most commonly, technology will lower the costs and raise the benefits associated with an option. This is because most technologies are invented with the aim of making it cheaper, easier and more convenient for people to do something. Consider the example I introduced earlier on involving the decision to call your partner when you get off a plane. Before the days of the cell phone, this would have been costly and inconvenient. You would probably need to go to a payphone, get some money to put into the payphone, remember your partner’s number and the international dial code. This would interrupt your journey and possibly make you late in getting to your destination. For some people, I am sure that these inconveniences would not deter them from making the phone call; but for many others I am pretty sure they would. The cell phone changed that. It added a new option (call your partner using a cell phone instead of a payphone) and thereby changed the ratio of costs to benefits associated with making the call. Indeed, so convenient and easy did it become to do this that I’m sure that some people within some relationships shifted to a new norm as a result: instead of seeing the phone call as a nice surprise they began to see it as a moral necessity. If you didn’t call, it might indicate that something was wrong.

Of course, it’s not that simple. Technologies don’t always make things cheaper and more convenient. Sometimes there are learning curves associated with new technologies. This can make people reluctant to adopt them in the short term because the subjective cost of becoming competent in the use of the technology is initially too high. Sometimes technologies are prohibitively expensive when they first come on the scene meaning that only the privileged few can truly benefit from them. The cell phone was like that. When it was originally invented it was expensive and relatively few people had one. It is only in more recent times that it has become cheap enough and ubiquitous enough for post-landing phone calls to become the norm. Finally, there are also potential hidden costs associated with technologies. These can come in the form of downstream costs that are not properly felt by the original decision-maker (an economist would call these externalities) and/or collective costs that are only felt when everyone starts to make use of the new technology. More on these later on.

With this abstract model in place, we can move on to consider some actual real-world examples relating to sex and marriage. Before I begin to discuss those examples, however, I want to issue a warning: there are many criticisms you could make of these examples. Some will probably occur to you as you read through them. I would ask you to hold off on those criticisms until the examples have been fully explained. I will try to address some of them towards the end of this article.


2. Contraceptive Technology and Premarital Sex

Premarital sex has long been a taboo in many cultures, particularly for women. There are a variety of reasons for this: obsession with female ‘purity’ and chastity, religiously-inspired sexual morality, property rules and marriage norms. The precise reasons for this taboo are not that important. What is important is that it was a very real phenomenon. Here is just one illustration of this: according to data collected by Greenwood and Guner, in 1900 only 6% of unwed women in the US engaged in premarital sex.

Although the taboo lingers in some cultures, the practical reality has changed. Nowadays many women engage in premarital sex. According to the same data collected by Greenwood and Guner, by the year 2002 approximately 75% of unwed women in the US engaged in premarital sex. This is not surprising. Many people now delay marriage into their late 20s and 30s but still maintain active sex lives in their late teens and early 20s. Furthermore, many people don’t get married. Premarital and extramarital sex have been normalised and tolerated. Some people celebrate it as part of a healthy and flourishing lifestyle; some people, who might otherwise consider it sinful or shameful, have become more accepting, viewing it as a matter of personal choice and consent.

Why has this happened? Greenwood and Guner use a simple decision model to understand the cultural shift. This decision model focuses on the role that contraceptive technology has played in changing how people, particularly young women, think about the decision to engage in extramarital sex. In the year 1900 there were several available methods of contraception: withdrawal, rhythm method, condoms, cervical barriers and some IUDs (technically invented in the first decade of the 20th century, I believe). Many of these were either ineffective, expensive or awkward to use. Extrapolating from existing data sources, Greenwood and Guner estimate that the failure rate of these contraceptive methods ranged from about 45% (for condoms) to about 60% for withdrawal. And this overlooks the fact that most people (about 60%) that engaged in extramarital sex in the year 1900 did not use any contraceptive method. For them, the likelihood of getting pregnant in a given year was about 85%.



In other words, in the year 1900, women facing the decision to engage in extramarital sex faced some stark choices. If they had sex, they might reap some short-term benefit from this (pleasing their partner and, perhaps, themselves) but this would come with a significant cost. If they had contracepted sex, they had about a 50-50 chance of getting pregnant in any given year. If they had unprotected sex, they had about an 85% chance of getting pregnant. If they got pregnant, they probably would not have had access to abortion, and if they did, it would have been illegal and dangerous to their health and well-being. If they carried the child to term they would probably suffer from significant social stigma and financial hardship, unless they married their sexual partners (which was common but not always guaranteed). Furthermore, all this ignores the risks of sexually transmitted diseases, which, although perhaps not as salient in people's minds at the time, were another important factor affecting the decision problem.

The dynamics of this decision problem changed over the course of the 20th century. Greenwood and Guner argue this was due to two major technological breakthroughs: (i) the invention of cheaper and more effective contraceptive methods and (ii) access to safe, legal abortion. The changes in contraception are quite startling. With the invention of latex condoms, contraceptive pills and more effective IUDs, the failure rates of contraceptive methods declined quite precipitously. The failure rate of condoms went from 45% to around 15%. And the failure rate of the pill was around 5%. Furthermore, these failure rates include people who misuse these contraceptive methods. For many people, the actual failure rates are now much lower. The risks associated with extramarital sex have, consequently, declined. Moreover, if a woman did get pregnant, access to safe and legal abortion could further mitigate the consequences. This is to say nothing of the benefits of certain contraceptive methods for mitigating the risk of sexually transmitted diseases. Over time, and thanks to these technological changes, the short-term benefits of extramarital sex started to outweigh the potential costs.
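To see how dramatic this shift in the decisional calculus is, here is a rough back-of-the-envelope sketch in Python. The failure rates are the approximate figures quoted above from Greenwood and Guner; the benefit and cost-of-pregnancy values are purely hypothetical placeholders, chosen only to show how a falling failure rate can flip the sign of the expected payoff.

```python
# Back-of-the-envelope sketch of the changing decision problem.
# Failure rates: approximate figures quoted above. BENEFIT and COST_OF_PREGNANCY
# are hypothetical placeholders, not figures from the economics literature.

BENEFIT = 10              # hypothetical short-term benefit of the option
COST_OF_PREGNANCY = 100   # hypothetical cost (stigma, hardship, risk) of an unwanted pregnancy

def expected_net_payoff(failure_rate):
    """Expected benefit minus expected cost, given an annual chance of pregnancy."""
    return BENEFIT - failure_rate * COST_OF_PREGNANCY

for method, rate in [
    ("no contraception, c.1900", 0.85),
    ("withdrawal, c.1900", 0.60),
    ("condom, c.1900", 0.45),
    ("condom, late 20th c.", 0.15),
    ("pill, late 20th c.", 0.05),
]:
    print(f"{method:28s} expected net payoff: {expected_net_payoff(rate):+.1f}")

# With 1900-era failure rates the expected cost swamps the benefit; with modern
# failure rates the sign flips for the pill (and nearly flips for condoms),
# which is the kind of decisional shift Greenwood and Guner model.
```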


The claim made by Greenwood and Guner is that this can explain, at least in part, the shift in behaviour regarding extramarital sex, from taboo to norm, in the course of 100 years. It is not a full explanation, of course. Other factors, including the incorporation of women into the workforce and the delay in marriage and starting a family, also impacted on extramarital sex. But the impact of technology is surprisingly robust. In another paper, Fernandez-Villaverde, Greenwood and Guner highlight how many institutions (families and churches) continue to stigmatise both extramarital sex and the use of contraception, but that their efforts have largely been in vain. As a matter of practice and belief, many young people tolerate and willingly engage in both. The Catholic Church, for example, continues to condemn premarital sex and contraception, but young Catholic couples routinely ignore its teachings. Fernandez-Villaverde, Greenwood and Guner provide additional empirical data to support this idea, looking at attitude surveys among young people which confirm their acceptance of extramarital sex. In day-to-day life extramarital sex has gone from being stigmatised to being normalised.

What we have here, then, is a neat example of the decisional model in practice. New technologies changed the decisional calculus when it came to engaging in extramarital sex. By massively reducing the risk of having a child out of wedlock, it made people perceive that the benefits of extramarital sex outweighed the potential costs. More people started to engage in this behaviour as a result. As this behaviour became routine, it filtered down into norms and attitudes.


3. Labour Saving Technologies and Marriage

Another example of the interplay between technology and social morality comes with the invention of labour-saving technologies in the home and women’s working norms. This example also comes from the work of Jeremy Greenwood, this time in collaboration with Ananth Seshadri and Mehmet Yorukoglu.

Under traditional conceptions of marriage, a woman’s place was said to be ‘in the home’. In the country in which I live — Ireland — this norm was actually inscribed into the constitution in 1937. Article 41.2 of Bunreacht na hEireann states that a woman, by her ‘life within the home’, gives the state a support without which the common good cannot be achieved. The same article goes on to state that the government should endeavour to ensure that women do not face the economic necessity of working outside the home. This article is still part of the law today, though it is largely ignored and seen as something of a national embarrassment.*

Why did this norm regarding a woman’s place become established? There are, again, many potential reasons. In his book Foragers, Farmers and Fossil Fuels, Ian Morris suggests that a gendered division of labour has been a feature of all human societies to some extent but that it became particularly pronounced when humans made the shift from hunting and gathering to agriculture. When we made the shift to agriculture, men worked in the field and women worked in the homestead. Furthermore, control over female sexuality became more important in agricultural societies due to the emphasis on property ownership being passed down through family lines. In addition to this, women had more children in agricultural societies and so had more care responsibilities as a result.

Why did this gendered division of labour persist through to the post-industrial era? Why did women only start to work outside the home, en masse, in the latter part of the 20th century?** Greenwood et al argue that it had to do with the perceived costs and benefits of working outside the home. Running a household, managing food preparation, cleaning, clothes-washing and childcare take an enormous amount of time and effort. This is true to this day. But before the invention of labour-saving domestic technologies — such as washing machines, vacuum cleaners, and microwave ovens — these tasks took even more time than they do today. Indeed, the labour involved was almost back-breaking.

One of the most evocative descriptions that I have ever read of what life was like back then comes from Robert Caro’s biographies of Lyndon B Johnson. In the first volume of this series of biographies, Caro describes what life was like in the Texas Hill Country pre-electrification (Johnson played a crucial role in the electrification of this region). In one chapter — ’The Sad Irons’ — he describes in detail the working day of the women in the Hill Country. In a more recent book reflecting back on his research, Caro explains the genesis of this chapter:


[In my interviews with these women] I realized I was hearing, just in the general course of long conversations, about something else: what the lives of the women of the Hill Country had been like before…Johnson brought electricity to that impoverished, remote, isolated part of America — how the lives of these women had, before “the lights” came, been lives of unending toil. Lives of bringing up water, bucket by bucket, from deep wells, since there were no electric pumps; of carrying it on wooden yokes — yokes like those that cattle wore — that these women wore so they could carry two buckets at a time; of doing the wash by hand, since without electricity there were no washing machines, of lifting heavy bundle after heavy bundle of wet clothes from washing vat to rinsing vat to starching vat and then to rinsing vat again; of spending an entire day doing loads of wash, and the next day, since there were no electric irons, doing the ironing with heavy wedges of iron that had to be continually reheated on a hot blazing wood stove, so that the ironing was also a day-long job. 
(Caro 2019, p xxii)

 

The women of the Hill Country may have had it particularly bad, but quantitative estimates of the average time spent on domestic chores during the early part of the 20th century tell a similar story. Reviewing some studies done by the Rural Electrification Administration in the 1940s, Greenwood et al estimate that it took about 4-5 hours to do a load of laundry (by hand) and about 4-5 hours to iron it. On average, in the year 1900, households spent about 58 hours per week on domestic chores (food preparation, cooking, cleaning etc.).

What did this mean in practice? Well, one thing it meant is that when households were deciding on how to distribute their labour, they didn’t have the luxury of allowing both partners to work outside the home. One of them had to stay at home to do the domestic chores. The costs (in terms of lost time for domestic chores) outweighed the benefits (in terms of additional income). Only the very wealthy — who could afford extensive domestic staff — would have the option to do otherwise.

That all changed with the invention of labour-saving technologies. Greenwood et al estimate that even in the 1940s the amount of time taken for laundry with an electric washer fell from approximately 5 hours to under 1 hour. Furthermore, by 1975 households were spending 18 hours per week on domestic chores. This helped to change the cost-benefit ratios for work outside the home. The opportunity cost, in terms of lost time, was significantly reduced.
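A quick bit of arithmetic, using the figures quoted above, shows why this matters for the household's decision problem. The 40-hour working week used for comparison is my own illustrative assumption, not a figure from Greenwood et al.

```python
# A rough sketch of the time-budget shift described above, using the figures
# quoted from Greenwood et al. The 40-hour work week is an illustrative
# assumption, not a figure from their paper.

chores_per_week_1900 = 58   # hours of domestic work per household, c.1900
chores_per_week_1975 = 18   # hours of domestic work per household, c.1975
outside_work_week = 40      # assumed full-time job, in hours

freed_hours = chores_per_week_1900 - chores_per_week_1975
print(f"Hours freed per week by domestic technology: {freed_hours}")  # 40

# Around 1900, chores alone exceed a full-time job, so a second earner's wage
# comes at the cost of chores left undone; by 1975 the freed hours roughly
# cover a full-time job, so working outside the home no longer carries that
# opportunity cost.
print(chores_per_week_1900 > outside_work_week)  # True
print(freed_hours >= outside_work_week)          # True
```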

However, it wasn’t just technology within the home that made the difference. As Marina Adshade points out in her work on female labour-force participation, the use of technologies within the workforce also played a vital role. The incorporation of typewriters, telephones and computers into offices created a skills gap that could be occupied by women taking clerical training courses. Women not only had more time for work outside the home, they also had a technology-related skill that proved attractive to the labour market. The combination of both technological forces thus changed how households perceived the decision problem: work outside the home became an attractive option.

This, in turn, had an effect on behaviours and norms. Instead of work outside the home being viewed as a corruption of women’s social role, it became normalised and legally sanctioned. According to Adshade, this even had an effect on how people understood the value of marriage. Instead of being viewed as an economically convenient division of labour, people started to see marriage more as an affair of the heart. Marriage for love, not for money or security, became the dominant social norm (at least in Western countries).

Of course, technology cannot provide the full explanation for this moral shift, but it might provide part of the explanation.


4. Criticisms and Reflections

These decisional models are interesting but not without their flaws. In their original presentations, neither Greenwood nor his colleagues claim that their models actually explain the history of premarital sex and work outside the home. Rather, they present them as formal models that can explain at least some of the data with respect to these phenomena.

The obvious criticism of decisional models like this is that people don’t reason about their options in the way that the models assume. This is the classic critique of all rational choice models. In the preceding sections, I went through some estimated figures with respect to the amount of time it took to do the laundry and the actual risk of pregnancy from premarital sex (with and without contraception). Very few people have those figures to hand or would care to acquire them. They might have some very rough estimate of the time taken or risks involved. But for the most part they will rely on intuition and general perception. This might be very misleading. For example, some people routinely underestimate risk; others routinely overestimate it. Some people have a poor internal timeclock; some people are more precise.

Still, this criticism is probably less significant than it first appears, at least when it comes to these two examples. Although it is true that most people don’t reason about things in a precise, quantitative way, they often have a rough sense of the costs and benefits associated with their choices. If the impact of technology on those costs and benefits is as dramatic as Greenwood et al claim, then it seems reasonable to suppose that this will impact on people’s perceptions of cost and benefit. For example, if modern contraceptive methods really do reduce the risk of unwanted pregnancy from 50% to less than 10%, it’s hard to imagine that this wouldn’t be generally known and wouldn’t have an impact on choices. If it only reduced the risk from, say, 50% to 45% then it would be more plausible to suppose that people would not factor this into their decision-making. Most people are not that finely calibrated when it comes to perceptions of risk.

Another criticism is that these decisional models assume that key actors have a free choice when the practical reality is that they do not. This strikes me as being a plausible critique in this instance. Both of the examples above relate to changes in behaviour and norms associated with women, but in neither case is it necessarily fair to suppose that women had some free choice between the options. Women were (and still are) often pressured or forced into sex (and shamed into thinking they are sinful and promiscuous if they succumb). And women within traditional family units often didn’t think they had a meaningful choice to work outside the home: the weight of cultural expectation was against this. To suppose that they could choose certain options based on perceived costs and benefits might stretch credulity too far.

But this criticism might also be misplaced. There are two reasons for this. First, as noted above, these decisional models are agnostic with respect to the actor making the decision. It might be the woman but it could also be her male partner or the pair of them as a unit. For instance, in the marriage case you could imagine that it is the pair of them making the choice as to whether it makes sense to work outside the home. Second, even though cultural norms and attitudes might have affected whether women had a choice in these matters, one of the claims underlying these models is that these attitudes and norms could change over time. I imagine that this could happen in line with something like a Granovetter-type threshold model. In other words, once a critical mass of women (or ‘actors’) made choices in favour of more extramarital sex and/or work outside the home, others followed suit and there was an associated change in cultural attitudes that created a positive feedback loop. Eventually, options that were once taboo or unavailable to women became available.
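For readers unfamiliar with it, here is a minimal sketch of a Granovetter-style threshold cascade. Each actor adopts the new behaviour once the share of prior adopters reaches her personal threshold. The thresholds below are Granovetter's classic illustrative example, not empirical data about either of the cases discussed above.

```python
# A minimal sketch of a Granovetter-style threshold cascade.
# Thresholds are illustrative, not empirical.

def cascade(thresholds, rounds=1000):
    """Return the final adoption share, given each actor's adoption threshold."""
    n = len(thresholds)
    adopted = [t <= 0.0 for t in thresholds]     # actors with a zero threshold start the process
    for _ in range(rounds):
        share = sum(adopted) / n
        new = [a or (t <= share) for a, t in zip(adopted, thresholds)]
        if new == adopted:                       # no one else is willing to switch: stop
            break
        adopted = new
    return sum(adopted) / n

# Evenly spread thresholds: each new adopter tips the next, so the cascade completes.
print(cascade([i / 100 for i in range(100)]))      # 1.0

# Remove the low-threshold pioneers and the same population never tips.
print(cascade([i / 100 for i in range(5, 105)]))   # 0.0
```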

Finally, and this is not a criticism but a reflection, it is important to think about the second-order effects of decisional models such as the ones outlined above. Technological changes to the costs and benefits associated with one type of decision might also affect other related decisions. In her analysis of the economics of sex and marriage, Marina Adshade argues that we should pay particular attention to these second-order effects. The impact of changing attitudes toward work outside the home on the perceived purpose of marriage (love versus financial convenience) is one example of a second-order effect. Another example is the impact of changing attitudes toward extramarital sex on attitudes toward children born out of wedlock. Adshade claims that as the cultural taboo around extramarital sex was slowly relaxed, more women were inclined to engage in it. Some of these women experienced contraceptive failure, while others were more erratic in their use of contraception. As a result, more children were born out of wedlock. This eventually led to the de-stigmatisation of childbirth outside of marriage. As she puts it:


Increased access to contraceptive technology changed society’s views of sex outside of marriage. This had the unanticipated effect of contributing to the increase in births to unmarried women. It also contributed to the de-stigmatization of childbirth outside of marriage. 
(Adshade 2017, 291)

 

In his book The Structure of Moral Revolutions, Robert Baker makes a similar claim about changing attitudes to ‘illegitimate’ children in the US. But he makes the claim as a specific contrast with what happened in the UK. In the UK there was a formal campaign to end the stigma associated with children born outside of marriage (spearheaded by the National Council for the Unmarried Mother and Her Child). In the US, social attitudes appear to have naturally evolved to become more tolerant in the wake of changes in social behaviour:


In America destigmatization was carried on the tides of moral drift: an unintended consequence of factors such as the increase of single-parent war widows after WWII, the availability of effective female-controlled contraception after 1957, the more generous welfare systems created in the 1960s, and the liberalization of divorce laws (which led to the doubling of the American divorce rate from 1965 to 1975). These changes eroded the stigma attaching to single mothers, since, to cite one factor, it would have been unconscionable to stigmatize war widows as “loose” or to call their children “bastards”. 
(Baker 2019, 28)

 

As you can see, Baker’s explanation for the change is a richer one. It is not just contraception and premarital sex but, rather, a confluence of technological and sociological factors. Nevertheless, the point holds: none of these factors by itself was primarily responsible for (nor intended to affect) attitudes toward children born outside of marriage; rather, this was an indirect consequence of these factors working together.

The details of this particular example matter less than the general lesson: when assessing how technology changes moral decision problems, be on the lookout for these second-order effects.


5. Conclusion

In conclusion, these decisional models of social moral change strike me as being useful and somewhat plausible. They don’t provide a complete explanation for how our moral attitudes and norms change over time, but they do provide some insight into how technology, in particular, might affect choices that in turn affect norms. As we think about all the new technologies emerging into human life at the moment, it is useful to reflect on how they change the decision problems we all face.


* This isn’t quite true. The constitutional provision has been routinely referred to in divorce proceedings to justify the allocation of assets to women who worked in the home but did not otherwise contribute financially to their families.

** To be clear, we are speaking in generalities here. Many women did work outside the home before that, particularly women from more deprived socio-economic backgrounds.

Wednesday, April 14, 2021

The Self as Narrative: Is it good to tell stories about ourselves?


Is the self divided? Since the time of Freud the notion of a fragmented self has taken deep root in how we think of ourselves. Freud thought we were subject to competing forces, some conscious and some unconscious. It was from the conflict of these forces that the self emerged. Enduring conflicts between different elements of the self could lead to mental breakdown and illness.

Many other psychologists and psychotherapists have proposed similar theories, suggesting that we split ourselves into different roles and identities and sometimes struggle to integrate the competing elements into a coherent picture. In his book, The Act of Living, the psychotherapist Frank Tallis argues that achieving integration between the elements of the self is one of the primary therapeutic goals of psychotherapy. Why? Because personal fragmentation is thought to lie at the root of human unhappiness:


The idea that fragmentation or division of the self is a major determinant of human unhappiness, anxiety and discomfort appears in the writings of many of the key figures in the history of psychotherapy. Our sense of self accompanies all our perceptions, so when the self begins to crack and splinter everything else begins to crack and splinter too. The world around us (and our place in it) becomes unreliable, uncertain, frightening, and in some instances untenable. We experience ourselves as a unity and threats to cohesion are deeply distressing. 
(Tallis 2021, 150)

 

You may have felt this distress yourself. I know I have. There are parts of my identity that I struggle to reconcile with others. There is, for instance, the conflict between the ideals of my working self and parenting self. I often ask myself how I can justify spending time writing articles like this when I could be spending it with my daughter, particularly when she is so young. I don’t know how to reconcile the two sides of myself.

How can we resolve the distress into something more psychologically appealing? One solution is to tell ourselves a story. As Joan Didion famously remarked: we tell ourselves stories in order to live. We are narrative creatures. If we can knit the elements of our lives into a satisfying narrative, perhaps we can achieve some psychological stability. It’s an attractive idea and one that has resonated with many thinkers over the years.

Or is it? Some people disagree. One of the most prominent is the philosopher Galen Strawson. Echoing millennia of Buddhist thinking on the matter, Strawson has argued that he (at least) does not experience his life as a narrative, nor does he think it is a good thing to do so. On the contrary, he thinks that integrating one’s life into a narrative may be an impediment to insight and ethical decision-making.

In this article I want to consider the conflict between Strawson and defenders of narrativity. I start by outlining some of the research that has been done on the psychological value of self-narratives. I then move on to consider Strawson’s critique of this idea.


1. The Value of Self-Narrative

Those who think of the self as narrative are proponents of something Strawson calls the ‘narrativity thesis’. There are many different construals of the narrativity thesis. Some people argue that the self is necessarily, or quintessentially, narrative. Daniel Dennett, for instance, has developed a theory of self that maintains that what we call the self is the ‘centre of narrative gravity’ in our conscious minds. In other words, our consciousnesses are constantly writing multiple draft stories about who we are. The self is the draft that emerges from the melee. Others have adopted a similar view, suggesting that we are essentially narrative beings who experience and make sense of our lives in a story-like way.

There is, however, another construal of the thesis. Instead of maintaining that there is something essentially narrative about us we could hold that we are often fragmented but that it would be a good thing for us to integrate the elements into a coherent narrative. In other words, even though we may struggle to tell a coherent story about ourselves, we need to do this in order to maintain psychological stability and well-being. This normative or axiological version of the narrativity thesis is the one that interests me.

Is there any reason to endorse it? There are some intuitive reasons to do so. As noted in the introduction, the failure to integrate various aspects of your life into a common framework can be psychologically unsettling. It can lead to distress, procrastination and anxiety. For instance, when I think about my own personal ideals as both a parent and an academic, I find myself landed in an internal conflict not dissimilar to the dilemma faced by Buridan’s Ass (the mythical creature that was equally poised between the water and the food). The net result of this conflict is that I default to procrastination and apathy, living up to neither set of ideals. This is deeply frustrating and, from what I gather, not an uncommon experience.

But we can go further than mere intuition. There has been a lot of psychological research on the value of telling a coherent story about yourself. One of the chief researchers in this area is Dan P McAdams. Over the years, McAdams has defended two key claims about the importance of narratives to the human condition. First, he claims that narrativity is a central part of our self identity. In other words, we come to understand who we are through the stories we tell about ourselves. This occurs through what McAdams calls synchronic and diachronic integration. In other words, we integrate different roles or elements of our lives as they arise at one time (synchronically) and across time and space (diachronically). McAdams claims that it is during later adolescence that people form distinct and coherent self narratives that become a key part of their self-identity. This doesn’t mean that they lack a ‘self’ at an earlier age but they do lack a coherent self-identity:


To the extent that a person’s self-understanding is integrated synchronically and diachronically such that it situates him or her into a meaningful psychosocial niche and provides his or her life with some degree of unity and purpose, that person "has" identity. Identity.. is something people begin to "work on" and have…[in] the emerging adulthood years. At this time…people begin to put their lives together into self-defining stories. It is an internalized and evolving story of self that integrates the self synchronically and diachronically, explaining why it is that I am sullen with my father and euphoric with my friends and how it happened—step by step, scene by scene—that I went from being a born-again Christian who loved baseball to an agnostic social psychologist. 
(McAdams 2001, 102)

 

In addition to this, McAdams argues that the kinds of stories we tell about ourselves have a significant impact on our psychological well-being. If you tell yourself the story that you are a perpetual failure and a complete fraud, you won’t be as happy and fulfilled as someone who tells themselves a story highlighting their successes and triumphs over adversity. In a study published in 2013, McAdams and his co-author Kate McLean review the psychological literature on this topic and highlight several specific self-narratives that seem to be associated with psychological well-being. They suggest that research shows that people who tell stories that emphasise personal growth, redemption and an overarching sense of purpose or meaning in their lives are psychologically better off than others. Some examples of such research findings from their paper include:


Bauer and colleagues have examined negative accounts of life-story low points as well as stories about difficult life transitions. People who scored higher on independent measures of psychological maturity tended to construct storied accounts that emphasized learning, growth, and positive personal transformation. 
(McAdams and McLean 2013, p 235 reporting on Bauer et al 2006)

 

In a longitudinal demonstration, Tavernier and Willoughby (2012) reported that high-school seniors who found positive meanings in their narrations of difficult high-school turning points showed higher levels of psychological well-being than those students who failed to construct narratives about turning points with positive meanings, even when controlling for well-being scores obtained 3 years earlier, when the students were freshmen. 
(McAdams and McLean 2013, p 235)

 

The psychological importance of such self-storying seems to be confirmed by other sources. For instance, a centrepiece of cognitive behavioural therapy is to reexamine the core beliefs that we have about ourselves. These core beliefs are often the central narrative hinges of our lives. If your core belief is that you are unlovable, you tend to tell yourself a story that fits all your experiences with that core belief. By targeting automatic thoughts and assumptions that are linked to these core beliefs, cognitive therapists have demonstrated considerable success in helping people improve their psychological well-being. Similarly, and just because I happened to be interested in the topic, Houltberg et al’s study of the self-narratives of elite athletes suggests that athletes that have a purpose-based narrative identity score better on independent measures of psychological well-being than those with a performance-based narrative identity (i.e. those that emphasise perfectionism and fear of failure).

There are some important caveats to all of this. The association between certain self-narratives and psychological well-being may be highly culturally dependent. McAdams himself notes this, emphasising that stories of personal redemption and growth have a particular resonance in American culture. In cultures with a more communitarian ethos, other stories may have greater value. This is something to bear in mind as we turn to consider Strawson’s critique of narrativity.


2. Strawson’s Critique of Narrativity

Strawson doesn’t buy the narrativity thesis. He doesn’t deny that many people do experience their lives as a narrative, or that some people derive great meaning and value from the stories they tell. He does, however, deny that this is necessary to the human condition and suggests that it is not always a good thing. Indeed, in practice self-storying may be an impediment to the good life.

Strawson has critiqued the narrative view in several venues over the years. As best I can tell, he offers four main lines of criticism.

First, he argues that the narrativity thesis is often unclear. What does it actually mean to say that one does or should live one’s life as a narrative? Does this mean that your life has to fit some well-recognised narrative structure, such as the ‘hero’s journey’ or the ‘rags to riches’ adventure? Or does it simply mean that you have to put some form or shape to your life? If it means the former, then it seems like an obviously false thesis. Not everyone’s life fits a recognised narrative structure and trying to make the events in one’s life fit that structure will lead to an unjustified selectivity in how one remembers and orders events. If it means the latter, then it is arguably too loose a requirement. Virtually any sequence of events can be given some shape or form. Modern literature, for example, is replete with novels that defy traditional narrative structures and yet still have some loose form.

Second, he argues that narrativity is not psychologically necessary. In other words, contrary to what people like Dennett and McAdams might argue, we are not all essentially narrative beings who come to understand ourselves through stories. Some people don’t think of the events in their lives as slotting into some overarching narrative. Strawson usually cites himself as an example of such a person, proudly pronouncing that he has no sense of himself as a narrative being. Consider:


And yet I have absolutely no sense of my life as a narrative with form, or indeed as a narrative without form. Absolutely none. Nor do I have any great or special interest in my past…Nor do I have a great deal of concern for my future. 
(Strawson 2018, p 51)

 

And also:


…I’m blowed if I constituted my identity, or if my identity is my life’s story. I don’t spend time constructing integrative narratives of myself — or my self — that selectively recall the past and wishfully anticipate the future to provide my life with some semblance of unity, purpose, and identity. 
(Strawson 2018, 192)

 

Doth he protest too much? Perhaps, but what he says resonates with me to an extent. I don’t really invest much time in telling myself some long story about my past. That said, I do sometimes sequence certain events in my life into a story, e.g. intention-obstacle-overcoming that obstacle or not. Furthermore, the notion that someone could have no concern for their past or future is alien to me. Surely we all naturally have a little bit of concern for the past and future? After all, unless we have some serious cognitive deficit, we all naturally remember the past and plan for the future. It’s hard to imagine a human life without that sense of oneself in time and space.

Strawson is cognisant of this and elsewhere in his writings he insists that one can have a sense of oneself as an enduring human being without having a sense of oneself as a narrative being. We occupy the same human biological form over time, we remember what happened to our bodies, and we have to plan for what will happen to those bodies in the future, but we don’t have to fit all of these happenings into a psychological narrative. He goes on to suggest that an enduring non-narrative being might represent their lives as a list of remembered events and not a narrative. This suggests to me that Strawson might see his life as a ‘listicle’ and not a story.

Third, Strawson argues that no ethical virtue or duty hinges on narrativity. Some people have the opposite view. They think that interpersonal duties and values such as loyalty and friendship depend on having a sense of oneself as a continuing narrative being. After all, how can you motivate yourself to be moral if you don’t care that much for your future (as Strawson proudly proclaims for himself)? Strawson rejects this. In his paper ‘Episodic Ethics’ he maintains that people who experience their lives as a series of short episodes can still have relatively normal moral lives. A sense of concern and empathy for others does not depend on narrativity. Indeed, it often depends more on attentiveness and awareness of the other in the moment. Projecting into the far future or past can dissociate you from the moral demands of the moment. He also cites examples of prominent people who profess an episodic experience of life and yet live morally normal lives.

Fourth, and adding to the previous argument, Strawson maintains that narrativity is often counterproductive both from the perspective of individual psychological well-being and from the perspective of one’s relations to others. If we constantly try to fit our lives into a narrative, there is a danger that we tell ourselves a false story. This falsity can work in different directions. Sometimes we might tell ourselves a false positive story, painting ourselves in a better light than we deserve, suggesting that we are more generous and caring than we really are, giving ourselves a free pass when we fail to do the right thing in the moment. Sometimes we might tell ourselves a false negative story, painting ourselves in a worse light than we deserve. Many people with depression and anxiety do this. They see themselves as failures, as useless, pathetic, unfit for life. They often experience profound shame and guilt as a result, and these stories can make them ‘toxic’ to other people.

Against this, Strawson argues that some of the best chroniclers of the human condition — the ones that display the greatest empathy and understanding of what it means to be human — are non-narrative in their outlook. Contrary to the therapeutic interventions of psychotherapy, these non-narrative people often have the best kind of self-knowledge. Michel de Montaigne is a favourite example:


Montaigne writes the unstoried life — the only life that matters, I’m inclined to think. He has no “side”, in the colloquial English sense of this term. His honesty, though extreme, is devoid of exhibitionism or sentimentality (St. Augustine and Rousseau compare unfavorably). He seeks self-knowledge in radically unpremeditated life-writing: “I speak to my writing paper exactly as I do to the first person I meet.” He knows his memory is hopelessly untrustworthy, he concludes that the fundamental lesson of self-knowledge is knowledge of self-ignorance.
(Strawson 2018, 196)


There is something to this, I believe. An honest and ethically sensitive appraisal of one’s life requires a dispassionate observance of what is going on, almost like a sustained form of mindfulness. If the narrative self takes over, you often lose a true appreciation of who you are and what you owe to others.


3. Conclusion

So should we be narrative or not? My stance on this is equivocal. On reflection, I don’t think there is as much distance between the two views outlined in this article as might initially seem to be the case. I agree with Strawson on two main points. First, we are not necessarily narrative in nature and, indeed, the suggestion that each of us does (and should) fit the entire course of our lives into a single overarching narrative strikes me as absurd. If I asked any one of my friends to tell me the story of their lives, I doubt any one of them could do it. They don’t think about themselves in those terms. Second, I agree with Strawson that narratives often distort the truth and this can be unhelpful. Narratives can lead to overconfidence and excessive pessimism.

But I don’t think we can completely dismiss the psychological appeal and moral value of narratives either. While I don’t think of my entire life as a single extended narrative, I do think episodes within my life have some narrative-like structure. There are often important lessons to be learned from our experiences and telling stories about those experiences can be an effective way to remember those lessons. The empirical work done by McAdams and his colleagues cannot be rejected out of hand. The stories we tell about ourselves can have a positive or negative impact on our mental health and well-being.

The important thing is to do your best to avoid narrative distortion. Have a realistic sense of your strengths and weaknesses. If you have a tendency to fit everything within a narrative, try to take a step back from doing so. List the events within your life. Gather the evidence carefully. Avoid assuming that you are the hero of your life story; avoid assuming that you are the villain.


Tuesday, April 6, 2021

From Mind-as-Computer to Robot-as-Human: Can metaphors change morality?




Over the past three years, I have returned to one question over and over again: how does technology reshape our moral beliefs and practices? In his classic study of medieval technology, Lynn White Jr argues that simple technological changes can have a profound effect on social moral systems. Consider the stirrup. Before this device was created, mounted warriors had to rely largely on their own strength (the “pressure of their knees” to use White’s phrase) to launch an attack while riding horseback. The warrior’s position on top of the horse was precarious and he was limited to firing a bow and arrow or hurling a javelin.

The stirrup changed all that:


The stirrup, by giving lateral support in addition to the front and back support offered by pommel and cantle, effectively welded horse and rider into a single fighting unit capable of violence without precedent. The fighter’s hand no longer delivered the blow: it merely guided it. The stirrup thus replaced human energy with animal power, and immensely increased the warrior’s ability to damage his enemy. Immediately, without preparatory steps, it made possible mounted shock combat, a revolutionary new way of doing battle. 
(White 1962, p 2)

 

This had major ripple effects. It turned mounted knights into the centrepiece of the medieval army. And since the survival and growth of medieval society was highly dependent on military prowess, these knights needed to be trained and maintained. This required a lot of resources. According to White, the feudal manor system, with its associated legal and moral norms relating to property, social hierarchy, honour and chivalry, was established in order to provide knights with those resources.

This is an interesting example of technologically induced social moral change. The creation of a new technology afforded a new type of action (mounted shock combat) which had significant moral consequences for society. The technology needed to be supported and sustained, but it also took on new cultural meanings. Mounted knights became symbols of strength, valour, honour, duty and so forth. They were celebrated and rewarded. The entire system of social production was reoriented to meet their needs. There is a direct line that can be traced from the technology through to this new ideological moral superstructure.

Can something similar happen with contemporary technologies? Is it already happening? In the remainder of this article I want to consider a case study. I want to look at social robots and the changes they may induce in our moral practices. I want to argue that there is a particular mechanism through which they may change our moral practices that is quite subtle but significant. Unlike the case of the stirrup — in which the tool changed the social moral order because of the new possibilities for action that it enabled — I want to argue that social robots might change the social moral order by changing the metaphors that humans use to understand themselves. In particular, I want to argue that the more humans come to view themselves as robot-like (as opposed to robots being seen as human-like), the more likely it is that we will adopt a utilitarian mode of moral reasoning. I base this argument on the theory of hermeneutic moral mediation and some recent findings in human-robot interactions. This argument is highly speculative but, I believe, worth considering.

Terminological note: By ‘robot’ I mean any embodied artificial agent with the capacity to interpret information from its environment and act in response to that information. By ‘social robot’ I mean any robot that is integrated into human social practices and responds to human social cues and behaviours, e.g. care robots, service robots. Social robots may be very human-like in appearance or behaviour, but they need not be. For example, a robot chef or waiter in a restaurant might be very unhuman-like in appearance but may still respond dynamically and adaptively to human social cues and behaviours.


1. Hermeneutic Moral Mediation

In arguing that technology might alter human moral beliefs and practices, it is important to distinguish between two different understandings of morality. On the one hand, there is ‘ideal morality’. This is the type of morality studied by moral philosophers and ethicists. It consists of claims about what humans really ought to value and ought to do. On the other hand, there is ‘social morality’. This is the type of morality practiced by ordinary people. It consists in people’s beliefs about what they ought to value and what they ought to do. Social morality and ideal morality may not align with each other. Indeed, moral philosophers often lament the fact that they don’t. In considering how technology might change human morality, I am primarily interested in how it might change social morality, not ideal morality.

That said, there can be connections between ideal morality and social morality. Obviously, moral philosophers often use claims about ideal morality to criticise social morality. The history of moral reform is replete with examples of this, including anti-slavery arguments in the Enlightenment era, pro-suffragette arguments in the late 1800s, and pro-same-sex marriage arguments in the late 1990s and early 2000s. But changes in social morality may also affect ideal morality, or at least our understanding of ideal morality. If people adopt a certain moral practice in reality, this can encourage moral philosophers to reconsider their claims about ideal morality. There is often a (suspicious?) correlation between changes in social morality and changes in theories of ideal morality.

How can technology induce changes in social morality? There are several theories out there. Peter-Paul Verbeek’s theory of technological moral mediation is the one I will rely on in this article. Verbeek argues that technologies change how humans relate to the world and to themselves. To use the academic jargon: they mediate our relationships with reality. This can have moral effects.

Verbeek singles out two forms of mediation, in particular, for their moral impact: (i) pragmatic mediation and (ii) hermeneutic mediation. Pragmatic mediation arises when technology adds to, or subtracts from, the morally salient choices in human life. This forces us to consider new moral dilemmas and new moral questions. The impact of the stirrup on medieval warfare is an example of this. It made mounted knights more effective in battle and military commanders were thus forced to decide whether to use these more effective units. Given the overwhelming value attached to military success in that era, their use became a moral necessity: to not use them would be morally reckless and a dereliction of duty. Hermeneutic mediation is different. It arises when technology changes how we interpret the world, adding a new moral perspective to our choices. Verbeek argues that obstetric ultrasound is a classic example of hermeneutic moral mediation in action because the technology presents the foetus-in-utero to us as an independent being, situated inside but still distinct from its mother, and capable of being treated or intervened upon by medical practitioners. This alters our moral understanding of pre-natal care.

[If you are interested, I wrote a longer explanation of Verbeek’s theory here]

The widespread diffusion of social robots will undoubtedly pragmatically mediate our relationship with the world. We will face choices as to whether to deploy care robots in medical settings, whether to outsource tasks to robots that might otherwise have been performed by humans, and so on. But it is the hermeneutic effects of social robots that I want to dwell on. I think the diffusion of social robots could have a profound impact on how we understand ourselves and our own moral choices.

To make this point, I want to consider the history of another technology.


2. The Mind-as-Computer Metaphor

The computer was the defining technology of the 20th century. It completely reshaped the modern workplace, from the world of high finance, to scientific research, to graphic design. It also enabled communication and coordination at a global scale. In this way, the computer has pragmatically mediated our relationship with the world. We now think and act through the abilities that computers provide. Should I send that email or not? Should I use this bit of software to work on this problem?

Not only has the computer pragmatically mediated our relationship to the world, it has also hermeneutically mediated it. We now think of many processes in the natural world as essentially computational. Nowhere is this more true than in the world of cognitive science. Cognitive scientists try to figure out how the human mind works: how is it that we perceive the world, learn from it and act in it? Cognitive scientists have long used computers to help model and understand human cognition. But they go further than this too. Many of them have come to see the human mind as a kind of computer — to see thinking as a type of computation.

Gerd Gigerenzer and Daniel Goldstein explore this metaphorical turn in detail in their article ‘The Mind as Computer: The Birth of a Metaphor’. They note that it is not such an unusual turn of events. Scientists have always used tools to make sense of the world. Indeed, they argue that the history of science can be understood, at least in part, as the emergence of theories from the applied use of tools. They call this the ‘tools-to-theories’ heuristic. A classic example would be the development of the mechanical clock. Not long after this device was invented, scientists (or natural philosophers, as they were then called) started thinking about physical processes (motion, gravitation, etc.) in mechanical terms. Similarly, when statistical tools were adopted for use in psychological experimentation in the 1900s, it didn’t take long before psychologists started to see human psychology as a kind of statistical analysis:


One of the most widely used tools for statistical inference is analysis of variance (ANOVA). By the late 1960s, about 70% of all experimental articles in psychological journals already used ANOVA (Edgington 1974). The tool became a theory of mind. In his causal attribution theory, Kelley (1967) postulated that the mind attributed a cause to an effect in the same way that a psychologist does — namely, by performing an ANOVA. Psychologists were quick to accept the new analogy between mind and their laboratory tool. 
(Gigerenzer and Goldstein 1996, 132).
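
Purely by way of illustration (this is my own sketch, not anything from Gigerenzer and Goldstein, and the reaction-time data are invented), the tool in question can be run in a few lines of Python using scipy:

# A minimal one-way ANOVA of the kind described in the quote above,
# applied to invented reaction-time data from three experimental conditions.
from scipy.stats import f_oneway

condition_a = [412, 389, 450, 401, 433]  # reaction times in ms
condition_b = [478, 502, 466, 491, 480]
condition_c = [395, 410, 388, 402, 399]

f_stat, p_value = f_oneway(condition_a, condition_b, condition_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

On Kelley’s theory, something like this computation is what the mind itself is supposed to be doing when it attributes causes to effects.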

 

The computational metaphor followed a similar path. The story is a fascinating one. As Gigerenzer and Goldstein note, the early developers of the computer, such as von Neumann and Turing, did work on the assumption that the devices they were building could be modelled on human thought processes (at either a biological or behavioural level). But they saw this as a one-way metaphor: the goal was to build a machine that was something like a human mind. In the 1960s and 70s, the metaphor turned back on itself: cognitive scientists started to see the human mind as a computational machine.

One of the watershed moments in this shift was the publication of Allen Newell and Herbert Simon’s Human Problem Solving in 1972. In this book, Newell and Simon outlined an essentially computational model of how the human mind works. In an interview, Simon described how, through his use of computers, he started to think of the human mind as a machine that ran programs:


The metaphor I’d been using, of a mind as something that took some premises and ground them up and processed them into conclusions, began to transform itself into a notion that a mind was something that took some program inputs and data and had some processes which operated on the data and produced some output. 
(quoted in Gigerenzer and Goldstein 1996, 136)

 

This theory of the mind was initially resisted but, as Gigerenzer and Goldstein document, when the use of computational tools to simulate human cognition became more widespread it was eventually accepted by the mainstream of cognitive scientists. So much so that some cognitive scientists find it hard to see the mind as anything other than a computer.


3. The Human-as-Robot Metaphor

What significance does this have for robots and the moral transformations they might initiate? Well, in a sense, the robot is a continuation of the mind-as-computer metaphor. Robots are, after all, essentially just embodied computational devices, capable of receiving data inputs and processing them into actions. If the mind is seen as a computer, is it not then natural to see the whole embodied human as something like a robot?

We can imagine a similar metaphorical turn to the one outlined by Gigerenzer and Goldstein taking root, albeit over a much shorter timeframe since the computational metaphor is already firmly embedded in popular consciousness. We begin by trying to model robots on humans (already the established practice in social robotics), then, as robots become common tools for understanding human social interactions, the metaphor flips around: we start to view humans as robot-like themselves. This is already happening to some extent and some people (myself included) are comfortable with the metaphor; others much less so.

This thought is not original to me. Henrik Skaug Sætra has, in a series of papers, remarked on the possible emergence of ‘robotomorphy’ in how we think about ourselves. Many people have noted how humans tend to anthropomorphise robots (see, e.g., Kate Darling’s work), but, as robots become more common, Sætra argues that we might also come to ‘robotomorphise’ ourselves. In a paper delivered to the Love and Sex with Robots Conference in December 2020, he remarks:


Roboticists and robot ethicists may similarly lead us to a situation in which all human phenomena are understood according to a computational, mechanistic and behaviourist logic, as this easily allows for the inclusion of robots in such phenomena. By doing so, however, they are changing the concepts. In what follows, our understanding of the concept, and of ourself, changes accordingly. 
(Saetra 2020, 10)*

 

But how does it change? Sætra has some interesting thoughts on how robot ethicists might use our interactions with robots to encourage a behaviourist understanding of human social interactions. This could lead to an impoverished (he says ‘deficient’) conception of certain human relationships, including loving relationships. Humans might favour efficient and psychologically simple robot partners over their more complex human alternatives. Since I am a defender of ‘ethical behaviourism’, I am, no doubt, one of the people guilty of encouraging this reconceptualisation of human relations (for what it’s worth, I don’t think this behaviourist approach necessarily endorses an impoverished conception of love; what I do think is that it is practically unavoidable when it comes to understanding our relationships with others).

Fascinating though that may be, I want to consider another potential transformation here. This transformation concerns the general moral norms to which we are beholden. As moral psychologists have long noted, the majority of humans follow a somewhat confusing, perhaps even contradictory, moral code. When asked to decide what the correct course of action is in a moral dilemma, humans typically eschew a simple utilitarian calculus (avoid the most suffering; do the most good) in favour of a more complex, non-consequentialist moral code. For example, Joshua Greene’s various explorations of human reasoning in trolley-like scenarios (scenarios that challenge humans to sacrifice one person for the greater good) find that humans care about intentions, physical proximity to the victim, and other variables that are not linked to the outcomes of our actions. In short, it seems that most people think they have act-related duties — not to intentionally harm another, not to intentionally violate trust, not to intentionally break an oath or a duty of loyalty, and so on — that hold firm even when following these duties leads to a worse outcome for all. This isn’t always true: there are some contexts in which outcomes are morally salient and override the act-related duties, but these are relatively rare.
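
To make this contrast concrete, here is a toy sketch in Python (entirely my own illustration; the functions, numbers and scenario are invented and are not drawn from Greene’s experiments):

# Toy contrast between a purely outcome-based ("utilitarian") decision rule
# and a rule constrained by an act-related duty not to intentionally harm.
def utilitarian_choice(harm_if_act, harm_if_refrain):
    # Choose whichever option minimises total harm, however the harm arises.
    return "act" if harm_if_act < harm_if_refrain else "refrain"

def constrained_choice(harm_if_act, harm_if_refrain, acting_harms_intentionally):
    # Refuse to act when acting involves intentionally harming someone,
    # even if refraining produces a worse overall outcome.
    if acting_harms_intentionally:
        return "refrain"
    return utilitarian_choice(harm_if_act, harm_if_refrain)

# Footbridge-style dilemma: intentionally sacrificing one person would save five.
print(utilitarian_choice(harm_if_act=1, harm_if_refrain=5))        # "act"
print(constrained_choice(1, 5, acting_harms_intentionally=True))   # "refrain"

The empirical point is that most people’s verdicts in these dilemmas pattern with the second rule rather than the first.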

Recent investigations into how humans morally judge robots paint a different picture. Studies by Bertram Malle and his colleagues, for example, suggest that we hold robots to different moral standards. In particular, we expect them to adopt a more utilitarian logic in their moral decision-making: they should aim for the greater good, and they are more likely to be negatively evaluated if they do not. We do not (as readily) expect them to abide by duties of loyalty or community. Malle et al’s findings have been broadly confirmed by other studies of the asymmetrical moral norms that humans apply to robots. We think robots should focus on harm minimisation; we do not judge them so much on their perceived intentions or biases. For example, Hidalgo et al’s recent book-length discussion of a series of experiments on over 6,000 US subjects, How Humans Judge Machines, seems broadly consistent with this moral asymmetry thesis.

Now, I would be the first to admit that these findings are far from watertight. They are, for the most part, based on vignette studies in which people are asked to imagine that robots are making decisions and not on interactions with real-world robots. There are also many nuances to the studies that I cannot do justice to here. For instance, there are some tentative findings suggesting that the more human-like a robot’s actions, and/or the more harm it causes, the more inclined we are to judge it in a human-like way. This might indicate that the asymmetry holds, in part, because we currently dissociate ourselves from robots. 

Nevertheless, I think these findings are suggestive, and they do point the way toward a hermeneutic moral effect that the widespread deployment of robots might have. If it becomes commonplace for us to interpret and understand our own behaviour in robot-like terms, then we may start to hold ourselves to the same moral standards as machines. In other words, we may start to adopt a more outcome-oriented, utilitarian moral framework and abandon our obsession with intentions and act-related duties.

Three factors convince me that this is a plausible potential future. First, there is a ready-made community of consequentialist and utilitarian activists who would welcome such a moral shift. Utilitarianism has been a popular moral framework since the 1800s and has resonance in corporate and governmental sectors. The effective altruist movement, with its obsessive focus on doing the most good through evidence-based personal decision-making, may also welcome such a shift.

Second, there is some initial evidence to suggest that humans adapt their moral behaviours in response to machines. Studies by Ryan Jackson and colleagues on natural language interfaces, for instance, suggest that if a machine asks a clarificatory question implying a willingness to violate a moral norm, humans become more willing to violate that norm themselves. So we can imagine that if machines both express themselves and act in ways that violate non-consequentialist norms, we may, in turn, be more willing to do the same.

Finally, there are now some robot ethicists who encourage us to make the metaphorical flip, i.e. to make human moral behaviour more robot-like rather than making robot moral behaviour more human-like. One interesting example comes from Sven Nyholm and Jilles Smids’ article on the ethics of autonomous vehicles in ‘mixed traffic’ scenarios, i.e. scenarios in which the machines must interact with human-driven vehicles. A common approach to the design of mixed traffic scenarios is to assume that the machines must adapt to human driving behaviour, but Nyholm and Smids argue that sometimes it might be preferable for the adaptation to go the other way. Why? Because machine driving, with its emphasis on harm minimisation and strict adherence to the rules of the road, might be morally preferable. More precisely, they argue that if automated driving is provably safer than human driving, then humans face a moral choice: either they switch to automated driving or they adapt their own behaviour to automated driving standards:


If highly automated driving is indeed safer than non-automated conventional driving, the introduction of automated driving thereby constitutes the introduction of a safer alternative within the context of mixed traffic. So if a driver does not go for this safer option, this should create some moral pressure to take extra safety-precautions when using the older, less safe option even as a new, safer option is introduced. As we see things, then, it can plausibly be claimed that with the introduction of the safer option (viz. switching to automated driving), a new moral imperative is created within this domain [for human drivers]. 
(Nyholm and Smids 2018)

 

If we get more arguments like this, in more domains of human and robot interaction, then the net effect may be to encourage a shift to a robot-like moral standard. This would complete the hermeneutic moral mediation that I am envisaging.


4. Conclusion

None of this is guaranteed to happen, nor is it necessarily a good or bad thing. Once we know about a potential moral transformation we can do something about it: we can either speed it up (if we welcome it) or try to shut it down (if we do not). Nevertheless, speculative though it may be, I do think the mechanism discussed in this article is plausible and worth taking seriously: by adopting the human-as-robot metaphor, we may come to favour a more consequentialist, utilitarian set of moral norms.


* I’m not sure if this paper is publicly accessible. Saetra shared a copy with me prior to the conference.