
Thursday, February 25, 2021

The Technological Mediation of Morality: Explained


3D Ultrasound - Does this change our moral perception of the unborn child?


People have been talking about the death of privacy for at least three decades. The rise of the internet, mass surveillance and oversharing via social media have all been seen as knells summoning it to the grave. In our everyday behaviours, in our choices to use platforms that engage in routine and indiscriminate digital surveillance, we supposedly reveal a preference for digital convenience and social interaction that indicates a willingness to sacrifice our privacy. Despite this, privacy advocates claim that privacy has never been more alive. Indeed, they argue that it is precisely because privacy is under threat, and because we are forced to make compromises with respect to privacy in our day-to-day lives, that we should care about it more than ever before.

This is just one example of how technology seems to have an effect on our moral values. On the one hand, new technologies — in this case the internet and smart devices — have created new opportunities for tracking, surveillance and spying. This puts privacy in a vice. On the other hand, the increased pressure on privacy activates it in our minds and makes us worry about it more than ever. We respond by calling for new social norms with respect to the use of surveillant technologies, as well as legal reforms and protections.

Philosophers of technology sometimes explain this phenomenon by using the concept of technological mediation. The idea, in brief, is that technology mediates our relationship to the world: it changes how we perceive ourselves, our actions and our relationship to the world. This, in turn, has an effect on our moral perceptions and actions. Technology is never really value neutral: it comes loaded with moral significance and meaning. But its value-ladenness is not something beyond our control. All people involved in the design and use of a technology have some say in the moral significance of that technology.

In this article, I want to explain this concept of technological mediation and how it affects our moral reasoning. I’ll do so in three parts. First, I will briefly explain Don Ihde’s classic theory of human-technology relations. Second, I will outline Peter-Paul Verbeek’s key insights into the technological mediation of morality. Third, I will consider the practical significance of the technological mediation of morality.

This may all sound a little dry and theoretical, but I promise it is interesting and may change how you think about technology.


1. Don Ihde’s Four Types of Human-Technology Relationships

“Mediation” is one of those fancy academic terms that can be obscure to outsiders. If my experience is anything to go by, academics love to throw the word into some otherwise banal sentence to make their thoughts sound more sophisticated than they really are. So, for example, you will commonly hear people at conferences say something like “Facebook mediates our perception of social reality”, to which others will nod their heads in agreement as though that says something informative or significant.

It doesn’t have to be so obscure or fancy-schmancy. The etymology of the term ‘mediate’ lies in the Latin verb for ‘to be placed in the middle of’ and that’s a pretty good first approximation of what academics mean when they talk about technological mediation. They mean that technological artifacts place a layer of some sort between humans and the world around them — that the technology stands between us and the world. This then has an effect on how we perceive the world. Consider a trivial example: my eyeglasses. I wear them on my head every day. They mediate my perception of reality: they bend light rays in such a way that I can see more clearly. Without the mediation provided by my glasses, I would have much poorer eyesight.

But mediation is a little more complex than that. In his now classic work on the philosophy of technology, Technology and the Lifeworld, Don Ihde outlines four kinds of relationships that humans can have with technologies and the world around them. They are:


Embodiment Relations: These arise when humans use technology as an extension of their own bodies/perceptual faculties. My use of eyeglasses and the blind person’s use of a cane are examples of embodiment relations. They are a particular kind of mediation where the technology is an extended part of who we are. Ihde schematises embodiment relations in the following way:
(Humans — Technology) → World
Hermeneutic Relations: These arise when humans use technology to reinterpret or reframe their perception of the world, perhaps by creating new concepts or categories to understand what they are seeing, or perhaps by appropriating old ones to make sense of the new perception. A classic example is the use of processed images in science, e.g. MRI scans or astronomical photography using non-visible electromagnetic radiation. In this type of mediation, the technologies are representing the world to us and we see them as joined to this external world, not to ourselves. This can be schematised as follows:
Humans → (Technology — World)
Alterity Relations: These arise when humans have to relate directly to a technological artifact. In other words, the artifact doesn’t represent or reinterpret the external reality for us; it is, rather, the external reality with which we must interact. The rest of the world fades into the background. The relationships we have with robots or ATMs are thought to be classic examples. In some ways, alterity relations are the antithesis of mediation insofar as the technologies in this instance do not mediate between us and the world. They are, in a sense, the world. Nevertheless, this can still be viewed as a logical extension of mediation. Furthermore, how we perceive and understand technologies in alterity relations can affect other perceptions we might have of the world around us. I’ll get back to this later. Alterity relations can be schematised in the following way:
Humans → Technology(World)
Background Relations: These arise when technologies fade into the background and are not seen as something separate from the world. Rather they are just part of the background canvas upon which we experience reality. Artificial lighting and heating are sometimes given as examples of this kind of relation. This may represent the logical extreme of mediation when the technology is no longer seen to mediate our interaction with reality but is, simply, part of the stage on which external reality presents itself. These relations can be schematised as follows:
Humans (Technology/World)


People have built on Ihde’s framework over the years, proposing different kinds of human-technology relations (e.g. augmentation, immersion). But I still think his original framework is probably the most useful. One of the key ideas to be drawn from it is that how technologies are perceived and understood, and how they mediate our relationships with the world, is not something that is stable or fixed. It depends a lot on our cultural context, experiences and uses of the technology. What might be part of the background for us (e.g. electrical lighting) might be part of the foreground for others (e.g. those coming from pre-electrical societies). And what might have been part of the background for us in one context (e.g. air conditioning) might be something we have to relate to directly in another (e.g. when the system breaks down and needs to be repaired). This instability is important when it comes to understanding how technology mediates morality.


2. Verbeek’s Theory of Moral Mediation

Working from a similar perspective to that of Ihde, Peter-Paul Verbeek has developed a theory for understanding how technology mediates our moral perception and engagement with the world. In other words, Verbeek claims that technology not only changes how we relate to the world in a descriptive or non-normative sense, but also how we relate to it in a moral sense. It presents us with new moral choices and moral frameworks for action.

Here’s how he characterises the idea himself:


[The technological mediation approach] studies technologies as mediators between humans and reality. The central idea is that technologies-in-use help to establish relations between human beings and their environment. In these relations, technologies are not merely silent ‘intermediaries’ but active ‘mediators’ that help to constitute the entities that have a relationship ‘through’ the technology… By organizing relations between humans and world, technologies play an active, though not a final, role in morality. Technologies are morally charged, so to speak. They embody a material form of morality, and when used, the coupling of this ‘material morality’ and human moral agency results in a ‘composite’ moral agency.
(Verbeek 2013, pp 77-78)


What does all that mean? I think we can break it down and make it more straightforward by focusing on three key insights from Verbeek’s work.

The first two insights relate to the effect that technology has on morality. Verbeek claims that technology mediates our moral relationship with the world in two distinct ways. First, it pragmatically mediates our relationship with the world. This means that it changes the space of options and actions available to us and this, in turn, has moral significance. Consider two ways in which this might happen:


Technology makes options available that once were unavailable - For example, the creation of projectile weapons, missiles and ultimately nuclear weapons made killing at a distance and at a massive scale possible. Similarly, the creation of the cell phone/mobile phone made it possible to connect with anyone at any time in (virtually) any place.
Technology can close off options that were once available - For example, speed bumps on the road can prevent us from driving at high speeds. Alcohol interlocks in cars can prevent us from driving while drunk. Internet blocking devices can prevent us from surfing the web during work hours.


The net effect of this is that technology can thrust new moral choices upon us or, alternatively, take them away from us. We have to engage our existing moral values and normative theories to decide what we ought to do in these new circumstances. Is killing at a distance less bad than killing up close and personal? Is it okay to call someone at any time and in any place or should we limit our connectivity in some way?

In addition to this, technology also hermeneutically mediates our relationship with the world. That is to say, it changes how we perceive and understand aspects of the real world (e.g. the concepts and analogies we apply to it) and this can have an impact on our moral decision-making. This new mode of moral seeing is in addition to any choices that the technology might add or take away.

Verbeek has a go-to example of hermeneutic mediation: obstetric ultrasound. This is a technology that allows people to see the foetus in utero at various stages of development. According to Verbeek, ultrasound images are not presented to us in a neutral way. On the contrary, they encourage us to see the foetus as an independent entity, separate from its mother (though present inside her), and as a possible patient for certain treatments or interventions (most obviously, abortion). Here’s how he puts it:


This technology is not merely a neutral interface between expecting parents and their unborn child: it helps to constitute what this child is for its parents and what the parents are in relation to their child. By revealing the unborn in terms of variables that mark its health condition, like the fold in the nape of the neck of the fetus, ultrasound ‘translates’ the unborn child into a possible patient, congenital diseases in preventable forms of suffering (provided that abortion is an available option) and expecting a child into choosing for a child, also after the conception. 
(Verbeek 2013, pp 77-78)


Another example of this hermeneutic mediation might be the combination of the cameraphone and social media. By having a device on us at all times that allows for the recording of our everyday experiences, we are encouraged to see those experiences in a new way. They are no longer simply things to be enjoyed in and of themselves. They are now to be seen as opportunities for sharing with others, bragging, self-promotion and monetisation. We suddenly focus on the instrumental value of our experiences, not their intrinsic value.

This leads, in turn, to Verbeek’s third key insight. You may have heard the famous phrase that all technologies/artifacts have a politics (an ideology or set of values embedded within them). The classic illustration of this comes from Langdon Winner’s observation about the bridges over the parkways on Long Island: they were not high enough to accommodate buses. Winner pointed out that this excluded poor (and predominantly black) people, who were less likely to own cars, from the beaches on Long Island. In Winner’s analysis, this was a deliberate design decision by Robert Moses, the planner behind the road network, who let his values shape their construction.

Verbeek agrees with this basic picture but finesses it somewhat. Technologies are indeed value-laden (“dripping with morality” in one memorable phrase) but their values are not entirely shaped by their designers. Oftentimes technologies are interpreted and used in ways that designers do not anticipate or intend. For example, I doubt that Facebook intended for their livestreaming feature to be used by rampage shooters on mass killing sprees. They probably intended it to be used for more benign purposes. Nevertheless, the technology made this possible. According to Verbeek, while designers have a significant part to play in the mediating effect of their technologies, users and regulators also have a role to play. Users and regulators can appropriate technology for new ends, encourage specific uses of it, and develop new interpretations of its moral significance.

This is both an uplifting and dispiriting thought.


3. Practical Significance of Moral Mediation

What does all this mean in practice? There are a number of key lessons here, some of which have been implicit in the discussion so far but are worth specifying.

First, as should be obvious, technological mediation gives the lie to the neutrality of technology. Technology is not some value-neutral tool over which we have complete moral autonomy. It comes with certain values and choices embedded in its design. A speed bump encourages us to slow down: it is biased in favour of slower driving. A cameraphone with internet connectivity and social media encourages the sharing and archiving of everyday life. You still have some choices as to whether you use technologies for their intended purpose, but you often have to fight against the in-built biases.

Second, the fact that technologies mediate our moral perceptions and actions is important when it comes to the risk assessment of new technologies. Oftentimes, technological risk assessments focus heavily on what Verbeek and others call the ‘hard’ impacts of technology: the health risks, the possibility of environmental damage, the safety concerns and so on. These hard impact assessments use existing moral frameworks and evaluative standards (e.g. energy efficiency, radiation exposure) to determine whether the technology falls within acceptable parameters. This overlooks the potential ‘soft’ impacts, in particular the impact on social values and norms. What if the rise of the smartphone undermines the value of privacy? Is that not something we should factor into our risk assessment? Of course it is very hard, in practice, to assess these soft impacts (for reasons I won’t get into here) but they are worth considering nonetheless.

Third, and leading on from this, in order to meaningfully assess the soft impacts we need to know whether there are particular patterns to moral mediation. In other words, will it be easy to predict the future course of moral mediation or is it simply chaotic and unpredictable? We know, in general, that technology tends to add moral choices and dilemmas to our lives; it tends not to take them away. Indeed, the examples I gave earlier of technologies that eliminate options are all examples of technologies designed to take away an option that an earlier technology made possible. The alcohol interlock takes away the option of driving while drunk, but we would not have had that option if the automobile had not been invented in the first place. Furthermore, the creation of the interlock adds another choice: the choice of whether to use it or not. So it seems fair to say that the net effect of technological innovation is to add moral complexity to our lives, but can we say anything more specific and predictively useful? I’m not sure, but developing detailed case studies of technological mediation and extrapolating lessons from them looks like a good start.

Fourth, and more pessimistically, as Kudina and Verbeek (2019) have argued, technological mediation adds another dimension to how we think about the Collingridge Dilemma. This dilemma is something that is widely discussed in the world of responsible innovation and design. The classic version of the Collingridge Dilemma works like this:


Classic Collingridge Dilemma: When technology is at an early stage of development we have the power to control it, but we don’t know what its social impacts will be. When technology is at a late stage of development, we know what its social impacts are, but we lose the power to control it.


In short, once a technology proliferates in society, it will be too late to do anything about its social impacts. As Kudina and Verbeek argue, there is a moral variation on the dilemma that arises from our awareness of the technological mediation of morality.


Moral Collingridge Dilemma: “[W]hen we develop technologies on the basis of specific value frameworks, we do not know their social implications yet, but once we know these implications, the technologies might have already changed the value frameworks to evaluate these implications.” (Kudina and Verbeek 2019, 293)


This moral variation on the dilemma is interesting to me because it reminds me of what the philosopher L.A. Paul has said about transformative experiences. Briefly, Paul has argued that some life choices cannot be rationally evaluated in advance because they transform who we are. Her main example of this is the decision to have children. To know whether having children is a good choice for you, you need to actually have them and acquire the experiential knowledge of what it is like to have a child. No amount of advance reading or consultation with friends will give you this. (Having now had a child, I think I disagree with Paul, but let’s set that disagreement to the side for now.)

One way of understanding Paul’s argument is that undergoing a transformative experience has an effect on the evaluative frameworks you use to rationally assess different choices. Anecdotally, it does seem to me like having a child changes how you value different aspects of your life. So the metrics you use to evaluate the choice of having a child will be different after you have had the child. What Kudina and Verbeek are suggesting is that something similar is true when it comes to the development of technologies. The very act of developing and using the technology might change how we evaluate its merits. We could, in short, undergo a kind of moral transformation that makes it nearly impossible to rationally assess a technology in advance.

That’s a pessimistic thought on which to end, and I merely offer it as a suggestion. I’m not sure that any technologies have actually resulted in transformative moral changes. The development of the internet does seem to have affected how much we value communication and connectivity, so much so that many people now demand internet connectivity as something close to a basic human right. But I’m not sure if that counts as a transformative moral change, since we always valued those things to some extent.

It’s something to ruminate on if nothing else.

2 comments:

  1. Interesting article. I had a thought based on part of what you wrote regarding the effect of technology on moral complexity. You mentioned that technology almost always increases moral complexity, and that even instances where it seems to reduce choice (the alcohol interlock) are still contingent on the increase in choice the technology already provided. I don't disagree with you, but I wonder if there is a scenario in which technology reduces moral complexity by reducing the difficulty or time required to complete an activity, thereby allowing multiple decisions to be made that would otherwise conflict with each other. For example, I have two friends dying who live in different cities and I would like to visit both of them before they pass. In a world lacking high speed transit, I would be forced to choose which friend I visit. But advances in transportation technology allow me to visit both without trouble, negating the need to make a choice and (potentially) reducing moral complexity.

    1. I probably need a more precise definition of moral complexity. I see what you are saying. I would consider that a situation in which technology reduces the need for tragic tradeoffs or dilemmas. I can see that as a reduction in a type of moral cost that we might otherwise have to bear. As you say, it seems to reduce moral complexity too by obviating the need for a tragic choice.
