Saturday, February 18, 2017

Can you be friends with a robot? Aristotelian Friendship and Robotics


Let’s talk about Davecat.

Davecat is the pseudonym of a Michigan-based man. He is married and has one mistress. Neither of them is human. They are both dolls — RealDolls to be precise. Davecat is an iDollator; he promotes love with synthetic beings. His wife is called Sidore. They met at a goth club in the year 2000 (according to a story he tells himself). They later appeared together on the TLC show Guys and Dolls. That’s when Elena saw them (Elena is his mistress). She was in Russia at the time, but moved to the USA to live with Davecat and Sidore. They are happy together.

Now let’s talk about Boomer.

Boomer died on a battlefield in Iraq. He was given a military funeral, complete with a 21-gun salute. He was awarded a Purple Heart and a Bronze Star medallion. The odd thing was that Boomer wasn’t a human being. Boomer was a MARCbot — a bomb disposal robot used by the military. Boomer’s comrades felt they owed him the military send-off. He had developed a personality of his own and he had saved their lives on many occasions. It was the least they could do. Relationships between soldiers and bomb disposal robots are not uncommon. Julie Carpenter details many of them in her book Culture and Human-Robot Interaction in Militarized Spaces.

Both of these stories demonstrate something important. Humans can form powerful emotional attachments to non-living objects, particularly objects that resemble other humans (in the case of RealDolls) or living beings (in the case of the MARCbot). As we now enter the era of social robotics, we can expect the opportunities for forming such relationships to grow. In the not too distant future, we will all be having relationships with robots, whether we like it or not. The questions are: what kinds of relationships can we have with them, and is this a good or bad thing?

Some people are worried. They think human-robot relationships are emotionally shallow and that their proliferation will cut us off from emotionally richer human-human relationships. In this post I want to look at an argument such people might make against robot-relationships — based on the concept of an Aristotelian friendship. I will give some critical responses to that argument. My position is that many philosophers overstate the case against robot relationships and that there is something to be said in their favour.

1. The Many Forms of Friendship
I’m going to limit my argument to the concept of friendship. There are, obviously, many kinds of relationships in human social life. Friendship is merely one among them, but it is a relationship style of considerable importance and, depending on how it is conceptualised, it can shed light on other social relationships. I’m going to conceptualise it broadly, which enables such cross-comparison.

I’m going to suggest that there are three main styles of friendship:

Utility friendships: This is a relationship between two or more individuals whose primary value lies in the instrumental gains that can be achieved through the friendship by one or more of those individuals. For instance, you might value your wealthy friends not so much for who they are but because of the gains their wealth can bring to you.

Pleasure friendship: This is a relationship between two or more individuals whose primary value lies in the pleasure that one or more of those individuals derives from their interactions. For instance, you might have a regular tennis partner and derive great pleasure from the matches you play together.

Aristotelian friendship: This is a relationship between two or more individuals whose primary value lies in the mutual sharing of values and interests, and the mutually enriching effect of the interactions they share on the virtues and dispositions of the individuals. (This is also sometimes referred to as a ‘virtue’ friendship).

Utility and pleasure friendships are characterised by self-interest. The value of the friendships lies in the benefits they bestow on the participants. They are not necessarily mutually enriching. In a utility friendship, all the instrumental gains could flow to one of the individuals. Aristotelian friendships are different. They require mutual benefit.

I refer to such relationships as ‘Aristotelian’ because they were first formally identified by Aristotle and they were the type of friendship he valued most. This is a common view. Many philosophers who write about friendship argue that, although there can be value to utility/pleasure friendships, there is something special about Aristotelian friendships. They are a great good: something to which an ideal human life should have access. It would be a shame, they say, if the only kinds of friendships one ever experienced were of the utility or pleasure type. Indeed, some people go so far as to suggest that Aristotelian friendships are ‘true’ friendships and that other types are not.

Aristotelian friendships have been analysed extensively in the philosophical literature. There are many alleged preconditions for such relationships. I won’t go through them all here, but I will mention four of the more popular ones:

Mutuality condition: There must be mutual sharing of values and interests. This is the most obvious condition since it is built into the definition of the friendship.

Honesty/authenticity condition: The participants in the friendship must be honest with each other. They must present themselves to each other as they truly are. They must not be selective, duplicitous or manipulative.

Equality condition: The participants must perceive themselves to be on an equal footing. One party cannot think themselves superior to the other (the idea is that if they did this would block mutuality).

Diversity condition: The participants must interact with one another in a varied and diverse set of circumstances (this facilitates a higher degree of mutuality than you might get in a pleasure friendship between two tennis-playing partners).

Whether all of these conditions are essential or not is a matter of some debate, but their combination certainly makes it easier to enter into an Aristotelian friendship.

It is important to recognise that Aristotelian friendships are an ideal. Not every friendship will live up to that ideal. Many of the friends you have had in your life probably fall well short of it. That doesn’t mean those friendships lacked value; it just means they weren’t as good as they could possibly have been.

Because it is an ideal, the risks entailed by an Aristotelian friendship are greater than those of other friendships. If you think you are in a true Aristotelian friendship with someone else, it is much worse to find that they have been lying to you or manipulating you than it would be if you only thought yourself to be in a pleasure or utility friendship. My tennis-playing partner could be lying to me about his job, his family, and his educational history and it wouldn’t really affect the pleasure of our interactions. It would be different if he were my Aristotelian friend.

That’s enough on the concept of friendship. Let’s look at how this concept can be used to make the case against robot relationships.

2. Robots Cannot be Your Aristotelian Friends
The first, and most obvious, argument you can make against robot relationships is that they can never realise the ideal of Aristotelian friendship. To put it formally:

  • (1) Aristotelian friendships require mutuality (shared interests, values, concerns), authenticity (of self-presentation), equality and diversity.
  • (2) Relationships with robots cannot satisfy all of these conditions.
  • (3) Therefore, relationships with robots can never be Aristotelian friendships.

We are granting premise (1) for the purposes of this discussion. That means premise (2) is the only thing up for grabs. The defender of that premise will claim that robots can never satisfy the mutuality condition because robots can never have inner mental lives: they cannot truly share with us; they do not have their own interests, values and concerns. They will also claim that robots cannot be authentic in their interactions with us. The manufacturers of the robots will trick them out with certain features that suggest the robot cares about us or has some inner mental life (maybe through variations in gesture and the intonation of the robot’s voice). But these are tricks: they mislead us as to the true nature of the robot. They will then argue that we can never be on an equal footing with a robot. The robot is too alien, too different, from us. It will be superior to us in some ways (e.g. in facial recognition and computation) but inferior in others. We will never be able to overcome the feeling of inequality. Finally, they will argue that most robots (for the foreseeable future) will only be capable of interacting with us in limited ways. They will not be fully-functioning androids, capable of doing everything a human is capable of doing. Consequently, we will not be able to achieve the diversity of interaction with them that is needed for a true Aristotelian friendship.

Is this a good argument? Should it turn us against robot friendships? There are two major problems. The first, and less important, is that it is possible to push back against the defence of premise (2). There are two ways of doing this. You could take the ‘future possibility’ route and argue that even though robots are not yet capable of satisfying all these conditions, they will be (or may be) capable of doing so in the future. As they develop more sophisticated mental architectures, maybe they will become conscious and develop inner mental lives; maybe they will present authentic versions of themselves; and maybe they will be able to interact with us in more diverse ways (indeed, this last condition seems pretty likely). Alternatively, you could take the ‘performative/behaviourist’ route and argue that it doesn’t really matter if robots are not objectively/metaphysically capable of satisfying those conditions. All that matters is that they perform in such a way that we think they are satisfying those conditions. Thus, if it seems to us as though they share our values and interests, that they have some inner mental life, that they are, more or less, equal to us, then that’s good enough.

I know some people abhor this second suggestion. They insist that the robot must really have an inner mental life; that it cannot simply go through the motions in order for us to form an Aristotelian bond with it. But I’m never convinced by this insistence. It just seems obvious to me that all human-human Aristotelian friendships are founded on a performative/behaviourist satisfaction of the relevant conditions. We don’t have access to someone’s inner mental life; we can never know whether they really share our values and concerns, or whether they are authentically representing themselves (whatever that might mean). All we ever have to go on is their performance. The problem at the moment is that robotic performances just aren’t good enough. If they get good enough, they will be indistinguishable from human performances. Then we’ll be able to form Aristotelian friendships with them.

I know some people will continue to be appalled by that claim. They will argue that it involves some manipulation or deception on the part of the robot manufacturers. But, again, I’m not convinced by this. For example, if a robot really seems like it cares for you or shares your interests, and if all its objective performances confirm this, then how is that deceptive or misleading? And if the robot eventually betrays your trust or, say, acts in ways that benefit its manufacturers and not your relationship with it, how is this any different from the betrayals and manipulations that are common in human-human friendships? Robot relationships might be no better than human relationships, but if they are performatively equivalent, I don’t see that they will be much worse.

That line of thought is a tough sell. Fortunately, you don’t need to accept it to reject the argument. The other problem with it, and by far the more important problem, is that it doesn’t really matter if robot relationships fail to live up to the Aristotelian ideal. There is no reason why we cannot form utility or pleasure friendships with robots. These relationships will have value and don’t require mutuality. They can be unidirectional. Clearly Davecat has formed some such bond with his RealDolls; and clearly the soldiers who worked with Boomer did too. As long as we can keep relationship types separate in our minds, there is no reason to reject a relationship simply because it falls short of the Aristotelian ideal.

The way to resist this is to argue that engaging in robot relationships cuts us off from the great good of Aristotelian friendships. That’s what the next argument tries to do.

3. The Corrosive Impact of Robot Relationships
The second argument you can make against robot relationships will claim that, even if we accept that robot relationships can only ever be of the pleasure/utility type, there is a danger that if we embrace them we will no longer have access to the great good of an Aristotelian friendship. This would be terrible because Aristotelian friendships are a form of human flourishing.

The argument is simple:

  • (4) If pleasure/utility relationships with robots would cut us off from Aristotelian friendships, then robot relationships would be a terrible thing to encourage.
  • (5) Pleasure/utility relationships with robots will cut us off from Aristotelian friendships.
  • (6) Therefore, robot relationships would be a terrible thing to encourage.

Premise (5) needs support and such support can come from two angles:

  • (7) Forced replacement: It is possible that some people will be forced to only interact with robots in the future: their potential human interactions will be eliminated. This will block them from accessing Aristotelian friendships (because robots cannot be our Aristotelian friends).
  • (8) Corrosion problem: If people enter into pleasure/utility relationships with robots they will be motivated to adopt a more shallow, utility and pleasure seeking attitude with their human friends. This means that even though Aristotelian friendships remain an open possibility, they are less likely to be achieved.

The forced replacement argument is often made in relation to the elderly. There is a noticeable drive to use robots in the care of elderly people. The elderly are often socially isolated. If they have no families, the only human contact they have is, sometimes, with their care workers. Now, admittedly, the care relationship is distinguishable from friendship. But the elderly do sometimes enter into friendships with their carers. If all human contacts are replaced by robots, they will no longer have access to the possibility of an Aristotelian friendship.

The corrosion problem has previously been identified in relation to online friendships and the style of interaction they encourage. The kinds of interactions and friendships we can have online are, according to critics, remarkably shallow. They often consist of perfunctory gestures like posting status updates and liking or emoticonning those updates. These interactions can have utility and can be pleasurable (the new likes and retweets give you a jolt of pleasure when you see them), but they are not deep and diversified. Some worry that such shallow interactions carry over to the real world: we become accustomed to the online mode of interaction and perpetuate it in our offline interactions. By analogy you could argue that the same thing will happen if robot relationships become normalised.

Is this argument any good? It’s probably more formidable than the first, but I think the fears to which it alludes are overstated. I don’t deny that there is a drive toward the use of robots in certain relationship settings — such as care of the elderly. And (assuming we can’t form Aristotelian friendships with robots) it would be bad if that were the only kind of interaction an elderly person had. But I think the forced replacement idea is fanciful. I don’t think anyone is going to force people to only interact with robots.

What is more likely to happen is that people will ignore the elderly because they find it too unpleasant or uncomfortable to interact with them due to their care requirements. They will prefer to outsource this to professionals and will not wish to engage with loved ones or parents in states of senescence. On top of that, we are in the midst of a significant demographic shift toward aging populations. This means the care burden in the future will increase. It is probably impossible and unfair to expect the shrinking younger generations to shoulder that burden. Some robotic outsourcing might be necessary.

But, in fact, I think the robots could actually help to facilitate better friendships with those for whom we need to care. Remember the conditions for an Aristotelian friendship. One of them is that participants should be on an equal footing. This is often not possible in a caring relationship. One party sees the other as an inferior: someone in a state of decline or dependency. It is only through the good will of one party that they are enabled to flourish. Giving the more dependent partner some robotic assistance may actually enable a better friendship between the two humans. In this way, robots could complement or promote, rather than corrode and undermine, Aristotelian friendships. A dyadic relationship of inequality between two humans is replaced by a triadic relationship of greater equality between two humans and a robot.

This could be a generalisable point. The comments I made previously about online friendships have been challenged in the philosophical literature. Some people argue that online interactions can be (even if they often aren’t) deep and that the ‘gating/filtering’ features that are often lamented (e.g. the anonymity or selective presentation) can be a boon. In the real world, we frequently interact with one another on unequal terms. If I see you and talk to you I will be able to determine things about your ethnic or socio-cultural background. I might look down on you (sub-consciously or consciously) as a result. But you can hide some of these things online, putting us on a more equal footing. I’m not saying that adding a robot to a dyadic relationship can do the same things as online gating/filtering, but it could have an analogous effect in the real world.

Furthermore, I think there are other reasons to suspect that robot friendships could promote, rather than corrode, Aristotelian friendships. I think in particular of Peter Singer’s arguments about the expanding circle of moral concern. It could be that personifying robots, ascribing to them properties or characteristics of humanity, will train up our empathic concern for others. We would no longer treat them in an objectifying, tool-like way. Some people hate this idea — they say that robots should always be our slaves — but I think there could be benefits from seeing them in a more humanising light. Again, it could encourage us to have more fulfilling interactions with our fellow human beings.

4. Conclusion
To sum up, Aristotelian friendships are held to be a great good: something to which an ideal human life should have access. People might object to robot relationships on the grounds that (a) they can never attain the Aristotelian ideal and/or (b) even if they have other benefits, they cut us off from the Aristotelian ideal.

There are reasons to doubt this. Robots might be able to attain the Aristotelian ideal if they are performatively equivalent to human friends. And even if they can’t, there is reason to suspect that they could complement or promote Aristotelian friendships amongst humans, not corrode or undermine them.
