
Sunday, May 31, 2015

A Framework for Understanding our Ethical Relationships with Intelligent Technology


Hiroshi Ishiguro with the Telenoid R1


How do we relate to technology? How does it relate to us? These are important questions, particularly in light of the increasingly ubiquitous and often hidden roles that modern computing technology plays in our lives. We have always relied on different forms of technology, from stone axes to trains and automobiles. But modern computing technology has some distinctive properties. When it incorporates artificially intelligent programmes and utilises robotic action-implementation systems, it has the ability to interfere with, and possibly supersede, human agency.

Some of this interference might be desirable. If a robotic surgeon can increase the success rate of a risky type of surgery, we should probably welcome it. But some of the interference might be less desirable. I have argued in the past that we should have some concerns about automated systems that render our public decision-making processes more opaque. Either way, we should think seriously about the different modalities and styles of relationship we can have with technology. To this end, a general taxonomy or framework for understanding our relationships with technology would seem desirable.

I have some of my own ideas on this front, which I will share in a future post, but I am also keen to review proposals by others. For instance, there is Don Ihde's now-classic taxonomy of the different types of phenomenological relationship we can have with technology. This taxonomy identifies four such relationships:

Embodiment Relationship: The technology is perceived to be part of us (part of our bodies). For example, my eyeglasses are simply part of my extended body; they are integral to how I perceive and interact with the world.

Hermeneutic Relationship: The technology analyses and interprets the world for a human user. For example, Google Maps does this for me when it comes to working out where I need to go.

Alterity Relationship: The technology is perceived as being something alien or ‘other’ in nature. For example, some people have this reaction to human-like robots.

Background Relationship: The technology is either so ubiquitous and commonplace that it is ignored, or it is actually hidden from most humans. For example, certain forms of surveillance technology are like this: we don't even notice all the CCTV cameras that watch us on the streets every day.

I think there is value to this taxonomy, but it is deliberately limited in its purview to the phenomenological character of our relationships with technology. In the remainder of this post, I want to take a look at a more complicated taxonomy/framework. This is one that was proposed in a recent article by Marlies Van de Voort, Wolter Pieters, and Luca Consoli (hereinafter ‘Van de Voort et al’), and although it builds upon Ihde’s taxonomy, it tries to focus specifically on the ethical implications of different types of computing system.


1. The Four Relationships and the Three Actors
Van de Voort et al’s framework is centred on three actors and four different relationships between these actors. The three actors are:

The Computer: This is a computing system, which the authors define as any system ‘that calculates output based on a given input, following a predefined script of instructions.’ They also insist that (for their purposes) such a system should be intelligent (i.e. use algorithms for decision-making), context-aware (i.e. able to perceive and incorporate information from its surrounding environment) and autonomous (i.e. capable of operating without constant human input).

The Individual: This is just any individual human being who is affected by the computing system. The effects can vary greatly, depending on the relationships between the individual and the computer. We’ll talk about this in more detail below.

The Third Party: This is any entity (i.e. individual, group or another computing system) that controls, receives information from, or originally programmed and set up the computer.

The four relationships that are possible between these three actors are:

The Observation Relationship: This is where the computer simply observes and collects information about the individual, which it then may or may not transmit to a third party. Surveillance systems are the exemplars of this relationship.

The Interference Relationship: This is where the computer has some goal state that it must realise by interfering with individual humans. This may or may not involve control or input from a third party. Drone weapon systems are exemplars of this type of relationship, where the ‘interference’ in question can be lethal.

The Interaction Relationship: This is where the computer has some direct interaction with the individual, and that interaction can come in the shape of either observation or interference. A care-giving robot would exemplify this style of relationship.

The Advice Relationship: This is where the computer gives advice to the third party, who in turn either observes or interferes with the individual. Van de Voort et al refer to this as observation or interference via a proxy.

As you can see, these relationships are not mutually exclusive. The first two are subsumed within the last two. You can think of it like this: observation and interference are the two basic forms that a computer's relationship with a human individual can take, but the quality of those relationships then varies depending on whether the computer deals with the human directly or via a proxy. The diagram below, which I have adapted from Van de Voort et al's original article, is supposed to illustrate these different relationships. I'm not sure how useful it is, but I offer it for what it is worth.
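To supplement the diagram, here is a minimal sketch in Python of how the framework's actors and relationships might be encoded. This is my own illustration, not anything from the original article; all of the names and examples are invented:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class BasicForm(Enum):
    """The two basic forms a computer-individual relationship can take."""
    OBSERVATION = "observation"    # collect information about the individual
    INTERFERENCE = "interference"  # act on the world to realise a goal state


class Mode(Enum):
    """How the basic form reaches the individual."""
    DIRECT = "interaction"  # the computer deals with the individual itself
    PROXY = "advice"        # the computer advises a third party, who then acts


@dataclass
class Relationship:
    computer: str    # the computing system
    individual: str  # the affected human
    form: BasicForm
    mode: Mode
    third_party: Optional[str] = None  # controller/recipient/programmer, if any


# Invented examples in the spirit of the post:
cctv = Relationship("CCTV analytics", "pedestrian",
                    BasicForm.OBSERVATION, Mode.PROXY, third_party="police")
care_robot = Relationship("care robot", "patient",
                          BasicForm.INTERFERENCE, Mode.DIRECT)
print(cctv)
print(care_robot)
```

The encoding makes the subsumption point visible: interaction and advice are not extra basic forms, just the direct and proxied modes of the first two.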





2. The Ethical Dimensions to these Relationships
Van de Voort et al claim that their framework is useful when it comes to understanding the ethical dimensions to computing systems. To see this, we need to do two things. First, we need to appreciate how every computing system is, at its core, an information processing system. It acquires information from the world, it organises and processes that information in some manner, and it then makes use of that information by performing some sort of ‘action’ (where that word is understood to have a fairly liberal meaning). Second, we need to consider how this information processing could take place in each of the four relationships outlined above.
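To make that acquire-organise-act loop concrete, here is a toy sketch in Python. Everything in it (the function names, the sample readings) is a placeholder of my own devising, not anything from Van de Voort et al:

```python
from typing import Any, Iterable


def organise(raw: Any) -> dict:
    """Placeholder: a real system would classify, filter or aggregate here."""
    return {"payload": raw}


def act(record: dict) -> None:
    """Placeholder: 'action' might be storing, forwarding, or moving an actuator."""
    print(f"acting on {record['payload']!r}")


def run(sensor_readings: Iterable[Any]) -> None:
    """The generic loop: acquire information, organise it, then act on it."""
    for raw in sensor_readings:  # 1. acquire information from the world
        record = organise(raw)   # 2. organise and process it in some manner
        act(record)              # 3. make use of it via some sort of 'action'


run(["camera frame", "GPS fix", "microphone sample"])
```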

We start with information processing in the observation relationship. Here, the computing system is acquiring information about a human individual. Consider, for example, the way in which Facebook or Google track information about internet and social media usage. Algorithms are then used to sort and organise that information. The system must then make a decision as to what to do with that information: forwarding it to a third party, storing it for future use, or deleting it. Obviously, there are a number of significant ethical dimensions to all of this. Questions must be asked about when and whether it is appropriate to collect, process, store, forward or delete such observational information. These are questions that have been asked in the recent past, and that continue to be asked.
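As a toy illustration of those decision points, one could imagine the forward/store/delete choice as a simple policy function. The threshold and consent flag below are fabricated for the example; the point is that every branch encodes an ethical judgment:

```python
from enum import Enum


class Disposition(Enum):
    FORWARD = "forward to third party"
    STORE = "store for future use"
    DELETE = "delete"


def dispose(record: dict, relevance: float, consent_given: bool) -> Disposition:
    """Toy policy: each branch is an ethical choice in disguise."""
    if not consent_given:
        return Disposition.DELETE   # was it appropriate to collect this at all?
    if relevance > 0.8:
        return Disposition.FORWARD  # when is sharing with a third party OK?
    return Disposition.STORE        # for how long, and accessible to whom?


print(dispose({"user": "alice", "clicks": 37}, relevance=0.9, consent_given=True))
```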

We move on then to the interference relationship. Here, the computing system has a goal that it tries to realise by acting in the world. It will use information it collects from the world to figure out what action it should perform, whether or not its goal state has been realised, and whether it needs to ‘try again’. Consider, for example, the autonomous drone weapon system. This system would have a goal (destroy terrorist target X), which it would use to devise an action plan (fire a missile at terrorist target X); when implementing that plan, it would use feedback from the world to figure out whether the goal had been achieved. A third party would typically oversee this process by programming the goals and perhaps assisting with feedback and learning. Obviously, the creation of such a system involves a number of ethical issues. Is the goal morally appropriate? Is the selected action plan the most morally appropriate means of achieving that goal? When should the computer try again and when should it give up? These questions are becoming increasingly important in light of the emerging trend towards reliance on such autonomous systems.
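The try-again logic is essentially a feedback loop. Here is a hedged sketch of it; the goal-sensing function and the retry limit are stand-ins I have made up:

```python
import random


def goal_achieved() -> bool:
    """Stand-in for real sensing: did the action realise the goal state?"""
    return random.random() < 0.5


def pursue_goal(max_attempts: int = 3) -> bool:
    """Act, check feedback from the world, then decide: try again or give up."""
    for attempt in range(1, max_attempts + 1):
        print(f"attempt {attempt}: executing action plan")
        if goal_achieved():
            print("feedback says goal realised; stopping")
            return True
        print("feedback says goal not realised")
    # When to give up is itself an ethical decision, not just a parameter.
    print(f"giving up after {max_attempts} attempts")
    return False


pursue_goal()
```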

This brings us to the interaction relationship. Here, the computing system interacts directly with some human subject, either through observation or interference. Consider, for example, a household robot that assists a human with a variety of mundane chores (cooking, cleaning, ironing, etc.). The robot would have a number of goal states (keep the house clean, cook the food etc.) but a constantly shifting set of sub-goals, depending on what needs to be cooked and cleaned and in what order. It will also need to work in and around the human living in the home. Again, this will have a number of ethical dimensions to it. The computer system will need to be efficient and to take into consideration the wishes and desires of the human users.
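One way to picture those constantly shifting sub-goals is as a priority queue that re-ranks chores as circumstances and user requests change. Again, this is just my own made-up illustration:

```python
import heapq

# (priority, chore): a lower number means more urgent; priorities shift as the
# household's state and the user's expressed wishes change.
chores = [(2, "ironing"), (1, "cook dinner"), (3, "vacuum")]
heapq.heapify(chores)

# A user request re-prioritises the queue: the human's wishes constrain the plan.
heapq.heappush(chores, (0, "clean spill in kitchen"))

while chores:
    _, chore = heapq.heappop(chores)
    print("doing:", chore)
```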

We arrive then, finally, at the advice relationship. Here, the computer will evaluate and interpret information fed into it from the environment. It will then use this to issue advice to third parties. The computer will need to decide when and whether to issue that advice, and the precise form that the advice will take. Consider, for example, a medical diagnostics robot that uses information to come up with the most plausible diagnosis for a medical patient, and maybe also to suggest possible treatment courses. This will clearly have a number of ethical dimensions to it. The reliability of the information will need to be factored in, as will the accuracy of the advice and the likely degree of reliance that will be placed on this advice. Will the third parties rely on it completely, or will they simply ignore it?
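The ‘when and whether to issue advice’ decision could be pictured as a confidence threshold. The diagnosis scores and the cut-off below are fabricated for illustration:

```python
from typing import Dict, Optional


def advise(symptom_scores: Dict[str, float], threshold: float = 0.7) -> Optional[str]:
    """Issue advice to the third party only if confidence clears a bar."""
    diagnosis, confidence = max(symptom_scores.items(), key=lambda kv: kv[1])
    if confidence < threshold:
        return None  # withholding advice is itself a consequential choice
    return f"most plausible diagnosis: {diagnosis} (confidence {confidence:.2f})"


print(advise({"flu": 0.82, "cold": 0.61}))  # advice issued
print(advise({"flu": 0.55, "cold": 0.52}))  # advice withheld (None)
```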

In the end, having considered the ethical dimensions to these four different relationships, Van de Voort et al argue that designers of such systems face three big ethical questions. The first concerns the scope of the system, i.e. which kinds of information or action fall within or outside its zone of responsibility. The second concerns the impact of the system on the environment in which it operates, i.e. the directness and immediacy of its effects. And the third concerns the involvement of humans and other systems, i.e. does the system operate in isolation from, in contradistinction to, or in harmony with, others?


3. Conclusion
This is a very quick overview of Van de Voort et al’s framework. To briefly recap, they try to develop a framework for understanding modern computing systems and the types of ethical issues that arise with these systems. The framework centres on three actors (the computer, the individual and the third party) and four relationships (observation, interference, interaction, advice).

I have mixed feelings about their proposed framework. I think their focus on the tripartite relationship between computer systems, individuals, and third parties is useful. I also think there is something to their claims about the four different relationships, though what they have ended up with is quite messy and imprecise (since there are many different kinds of interaction and observation). Where I have some major problems is with the value of all this when it comes to assessing the ethical implications of different systems. In essence, all they really say is that we need to think about the ethical implications of data collection and action-implementation by computerised systems. I certainly agree that we should think about these things, but that observation in and of itself is pretty banal. I am not sure that we needed the complex framework to draw our attention to those issues.

To be sure, every taxonomy and framework will have its problems. If nothing else, they will all tend to omit important details and nuances. We cannot expect perfection in this respect. But I think it might be possible to do better.
