
Tuesday, March 15, 2016

New Technologies as Social Experiments: An Ethical Framework




What was Apple thinking when it launched the iPhone? It was an impressive bit of technology, poised to revolutionise the smartphone industry, and set to become nearly ubiquitous within a decade. The social consequences have been dramatic. Many of those consequences have been positive: increased connectivity, increased knowledge and increased day-to-day convenience. A considerable number have been quite negative: the assault on privacy, increased distractibility and endless social noise. But were any of them weighing on the mind of Steve Jobs when he stepped onstage to deliver his keynote on January 9th 2007?

Some probably were, but more than likely they leaned toward the positive end of the spectrum. Jobs was famous for his ‘reality distortion field’; it’s unlikely he allowed the negatives to hold him back for more than a few milliseconds. It was a cool product and it was bound to be a big seller. That’s all that mattered. But when you think about it, this attitude is pretty odd. The success of the iPhone and subsequent smartphones has given rise to one of the biggest social experiments in human history. The consequences of near-ubiquitous smartphone use were uncertain at the time. Why didn’t we insist on Jobs giving it a good deal more thought and scrutiny? Imagine if, instead of an iPhone, he had been launching a revolutionary new cancer drug. In that case we would have insisted upon a decade of trials and experiments, with animal and human subjects, before it could be brought to market. Why are we so blasé about information technology (and other technologies) vis-à-vis medication?

That’s the question that provokes Ibo van de Poel in his article ‘An Ethical Framework for Evaluating Experimental Technology’. Van de Poel is one of the chief advocates of the view that new technologies are social experiments and should be subject to the same sorts of ethical scrutiny as medical experiments. Currently, this is not being done, so he tries to develop a framework that would make it possible. In this blogpost, I’m going to try to explain the main elements of that framework.


1. The Experimental Nature of New Technology
I want to start by considering the motivation for van de Poel’s article in more depth. While doing so, I’ll stick with the example of the iPhone launch and compare it to other technological developments. At the time of its launch, the iPhone had two key properties that are shared with many other types of technology:

1. Significant Impact Potential: It had the potential to cause significant social changes if it took off.

2. Uncertain and Unknown Impact: Many of the potential impacts could be speculated about but not actually predicted or quantified in any meaningful way; some of the potential impacts were completely unknown at the time.

These two properties make the launch of the iPhone rather different from more routine technological developments. For example, the construction of a new bridge could be seen as a technological development, but the potential impacts are usually much more easily identified and quantified in that case. The regulatory assessment and evaluation is based on risk, not uncertainty: the probabilities and magnitudes of the potential harms can be estimated in advance rather than being unknown. We have lots of experience building bridges and the scientific principles underlying their construction are well understood. The regulatory assessment of the iPhone is much trickier. This leads van de Poel to suggest that a special class of technology be singled out for ethical scrutiny:

Experimental Technology: New technology with which there is little operational experience and for which, consequently, the social benefits and risks are uncertain and/or unknown.

Experimental technology of this sort is commonly subject to the ‘Control Dilemma’, a problem facing many new technologies that was first named and described by David Collingridge:

Control Dilemma: For new technologies, the following is generally true:
(A) In the early phases of development, the technology is malleable and controllable but its social effects are not well understood.
(B) In the later phases, the effects become better understood but the technology is so entrenched in society that it becomes difficult to control.

It’s called a dilemma because it confronts policy-makers and innovators with a tough choice. Either they choose to encourage the technological development and thereby run the risk of profound and uncontrollable social consequences; or they stifle the development in the effort to avoid unnecessary risks. This has led to a number of controversial and (arguably) unhelpful approaches to the assessment of new technologies. In the main, developers are encouraged to conduct cost-benefit analyses of any new technologies with a view to bringing some quantificational precision into the early phase. This is then usually overlaid with some biasing-principle such as the precautionary principle — which leans against permitting technologies with significant impact potential — or the procautionary principle — which does the opposite.

This isn’t a satisfactory state of affairs. All these solutions focus on the first horn of the control dilemma: they try to con us into thinking that the social effects are more knowable in the early phases than they actually are. Van de Poel suggests that we might be better off focusing on the second horn. In other words, we should try to make new technologies more controllable in their later phases by taking a deliberately experimental and incremental approach to their development.


2. An Ethical Framework for Technological Experiments
Approaching new technologies as social experiments requires both a perspectival and practical shift. We need to think about the technology in a new way and put in place practical mechanisms for ensuring effective social experimentation. The practical mechanisms will have epistemic and ethical dimensions. On the epistemic side of things, we need to ensure that we can gather useful information about the impact of technology and feed this into ongoing and future experimentation. On the ethical side of things, we need to ensure that our experiments respect certain ethical principles. It’s the ethical side of things that concerns us here.

The major strength of van de Poel’s article is his attempt to develop a detailed set of principles for ethical technological experimentation. He does this by explicitly appealing to the medical analogy. Medical experimentation has been subject to increasing levels of ethical scrutiny. Detailed theoretical frameworks and practical guidelines have been developed to enable biomedical researchers to comply with appropriate ethical standards. The leading theoretical framework is probably Beauchamp and Childress’s Principlism. This framework is based on four key ethical principles. Any medical experimentation or intervention should abide by these principles:

Non-maleficence: Human subjects should not be harmed.
Beneficence: Human subjects should be benefited.
Autonomy: Human autonomy and agency should be respected.
Justice: The benefits and risks ought to be fairly distributed.

These four principles are general and vague. The idea is that they represent widely-shared ethical commitments and can be developed into more detailed practical guidelines for researchers. Again, one of the major strengths of van de Poel’s article is his review of existing medical ethics guidelines (such as the Helsinki Declaration and the Common Rule) and his attempt to code each of those guidelines in terms of Beauchamp and Childress’s four ethical principles. He shows how it is possible to fit the vast majority of the specific guidelines into those four main categories. The main exception is that some of the guidelines focus on who has responsibility for ensuring that the ethical principles are upheld. Another slight exception is that some of the guidelines are explanatory in nature and do not state clear ethical requirements.

For the details of this coding exercise, I recommend reading van de Poel’s article. I don’t want to dwell on it here because, as he himself notes, these guidelines were developed with the specific vagaries of medical experimentation in mind. He’s interested in developing a framework for other technologies such as the iPhone, the Oculus Rift VR headset, the Microsoft HoloLens AR headset, self-driving cars, new energy technologies and so forth. This requires some adaptation and creativity. He comes up with a list of 16 conditions for ethical technological experimentation. They are illustrated in the diagram below, which also shows exactly how they map onto Beauchamp and Childress’s principles.




Although most of this is self-explanatory, I will briefly run through the main categories and describe some of the conditions. As you can see, the first seven are all concerned with the principle of non-maleficence. The first condition is that other means of acquiring knowledge about a technology are exhausted before it is introduced into society. The second and third conditions demand ongoing monitoring of the social effects of technology and efforts to halt the experiment if serious risks become apparent. The fourth condition focuses on containment of harm. It accepts that it is impossible to live in a risk-free world and to eliminate all the risks associated with technology. Nevertheless, harm should be contained as best it can be. The fifth, sixth and seventh conditions all encourage an attitude of incrementalism toward social experimentation. Instead of trying to anticipate all the possible risks and benefits of technology, we should try to learn from experience and build up resilience in society so that any unanticipated risks of technology are not too devastating.

The next two conditions focus on beneficence and responsibility. Condition eight stipulates that whenever a new technology is introduced there must be some reasonable prospect of benefit. This is quite a shift from current attitudes. At the moment, the decision to release a technology is largely governed by economic principles: what matters is whether it will be profitable, not whether it will benefit people. Problems can be dealt with afterwards through legal mechanisms such as tortious liability. Condition nine is about who has responsibility for ensuring compliance with ethical standards. It doesn’t say who should have that responsibility; it just says it should be clear.

Conditions ten to thirteen are all about autonomy and consent. Condition ten requires a properly informed citizenry. Condition eleven says that majority approval is needed for launching a social experiment. Van de Poel notes that this could lead to the tyranny of the majority. Conditions twelve and thirteen try to mitigate that potential tyranny by insisting on meaningful participation for those who are affected by the technology, including a right to withdraw from the experiment.

The final set of conditions all relate to justice. They too should help to mitigate the potential for a tyranny of the majority. They insist that the benefits and burdens of any technological experiment be appropriately distributed, and that special measures be taken to protect vulnerable populations. Condition sixteen also insists on reversibility or compensation for any harm done.


3. Conclusion
I find this proposed framework interesting, and the idea of an incremental and experimental approach to technological development is intuitively appealing to me. I should perhaps make two observations by way of conclusion. First, as van de Poel himself argues, there is a danger in developing frameworks of this sort. In the medical context, they are sometimes treated as little more than checklists: ticking off all the requirements allows the researchers to feel good about what they are doing. But this is dangerous because there is no simple or straightforward algorithm for ensuring that an experiment is ethically sound. For this reason, van de Poel argues that the framework should be seen as a basis for ethical deliberation and conversation, not as a simple checklist. That sounds fine in theory, but it leaves you wondering how things will work out in practice. Are certain conditions essential for any legitimate experiment? Can some be discarded in the name of social progress? These questions will remain exceptionally difficult. The real advantage of the framework is just that it puts some shape on our deliberations.

This leads me to the second observation. I wonder how practically feasible a framework of this sort can be. Obviously, we have adopted analogous protocols in medical research. But for many other kinds of technology — particularly digital technology — we have effectively allowed the market to dictate what is legitimate and what is not. Shifting to an incremental and experimental approach for those technologies will require a major cultural and political shift. I guess the one area where this is clearly happening at the moment is in relation to self-driving cars. But that’s arguably because the risks of that technology are more obvious and salient to the developers. Are we really going to do the same for the latest social networking app or virtual reality headset? I’m not so sure.
