
Tuesday, November 20, 2012

The Reversal Test and Status Quo Bias



Changing policies can often seem arduous and undesirable, even when the change might be for the best (ethically speaking). As a result, reluctance to change can creep into many organisations. I’m sure we’ve all encountered it. This reluctance is compounded by two other facts. The first is that we are usually deeply uncertain about the long-term consequences of any proposed reforms to the systems in which we operate. Consequently, when we reason about such things, we tend to fall back (at least in part) on our intuitive judgments about what seems right and wrong. The second fact is that, as numerous studies in cognitive psychology bear out, humans seem to be intuitively biased in favour of the status quo. So when our uncertainty forces us to rely on our intuitions, reluctance to change is the natural result.

In an article written several years back (“The Reversal Test: Eliminating Status Quo Bias in Applied Ethics”, Ethics, 2006), Nick Bostrom and Toby Ord argue that this bias toward the status quo is a major problem in applied ethical decision-making. To be rational ethical decision-makers, we ought to systematically check ourselves against the possibility that our aversion to a particular policy is driven by the bias toward the status quo. To do this effectively, they propose something called the Reversal Test. In this post, I want to explain what this test is and how it works. As we shall see, there are really two tests, and they each have slightly different effects.

Before I begin, I should acknowledge that some may doubt the existence of a systematic bias toward the status quo. To them, much of what follows may seem unjustified. But the evidence for the status quo bias looks to be abundant and robust. Bostrom and Ord discuss this evidence in their article, and presentations of it can also be found in Kahneman’s Thinking, Fast and Slow. Although I am happy to entertain doubts about this evidence, I shan’t discuss it here. Instead, I’ll skip directly to the tests themselves, since that’s where my interest lies.


1. The Reversal Test
The Reversal Test is, in essence, a heuristic or rule of thumb that counteracts the effects of the status quo bias. The test can be stated like this:

The Reversal Test: When a proposed change to a certain parameter (in a certain direction) is thought to have bad overall consequences, consider a change to the same parameter in the opposite direction. If this is also thought to have bad overall consequences, then the onus is on those who hold both beliefs to explain why any change to the parameter would make things worse. If they are unable to do so, we have reason to suspect they are suffering from the status quo bias.


Thus, to give an overly simplified example, suppose we are asked to consider a proposed increase in the speed limit (from, say, 60mph to 70mph) and most people seem to think this would be bad. Then we ask them whether a reduction in the speed limit (from 60mph to 50mph) would also be bad. If they think so, we ask them to justify their belief that 60mph is the optimal speed limit. If they cannot, we have reason to suspect they are biased toward the existing status quo.
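Since the test is really a small decision procedure, it can help to see its structure laid bare. The sketch below is purely illustrative: the function, its names, and the speed-limit judgments are my own, not Bostrom and Ord’s.

```python
def reversal_test(judged_bad, increased, decreased):
    """Run the Reversal Test on a single parameter.

    judged_bad(value) -> True if changing the parameter to `value`
    is thought to have bad overall consequences.
    Returns a string saying where the burden of proof now lies.
    """
    if judged_bad(increased) and judged_bad(decreased):
        # Change in *both* directions is judged bad: the resisters
        # must now explain why the current value is a (local) optimum.
        # If they cannot, suspect status quo bias.
        return "burden of proof on those defending the status quo"
    return "burden of proof on those proposing the change"


# Hypothetical speed-limit judgments: any move away from 60mph is bad.
judged_bad = lambda limit_mph: limit_mph != 60
print(reversal_test(judged_bad, increased=70, decreased=50))
```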

On the face of it, this is pretty banal. We are just asking people to justify their beliefs, which is surely what they should be doing anyway. Nevertheless, its practical effect could be significant. This is because the Reversal Test performs one crucial function: it shifts the burden of proof. Typically, we think the burden of proof falls on those proposing change. But if the proposers can offer some reason for the change and are nevertheless resisted, the Reversal Test has the neat effect of shifting the burden of proof onto the resisters. They have to explain why the current state of affairs represents a local (or absolute) optimum within the space of possible parameter values.


2. The Double Reversal Test
Of course, the burden of proof could be met. In particular, opponents of the policy could point to risks inherent in the proposed changes, or to transition costs that outweigh the value of the proposed changes. But there are problems with these kinds of responses too. Namely: humans are not particularly good at estimating risks and, due in part to status quo bias, they tend to overestimate the actual costs associated with proposed changes, focusing too much on short-term transition costs and not enough on potential long-term benefits.

So Bostrom and Ord propose an extended version of the test, which they call the Double Reversal Test:

Double Reversal Test: Suppose there is resistance to changing the value of a parameter in any direction. Now imagine that some natural event threatens to change the value in one direction. Would it be a good thing to counterbalance the effect of that natural event with an intervention that maintains the status quo? If so, then ask whether, assuming the natural event reverses itself at a later point in time, it would also be a good idea to reverse the counterbalancing intervention so as to maintain the original value of the parameter. If no one thinks so, then current opposition to the policy is likely to stem from the status quo bias.

The basic idea behind this test is illustrated in the diagrams below.




Although the diagrams help, this version of the test is difficult to follow in the abstract. Fortunately, Bostrom and Ord give quite a nice example of what it really means. Their example concerns resistance to cognitive enhancement technologies, which is, in fact, their focus throughout the paper. They think that current opposition is driven largely by the status quo bias, and not by any coherent moral principle. So they ask us to imagine the following scenario.

A hazardous chemical has entered the municipal water supply. Try as we might, there is no way to remove it, and there is no alternative water source. The chemical has the disastrous effect of impairing everybody’s cognitive function. Fortunately, there is a solution. Scientists have developed a somatic gene therapy that will permanently increase the cognitive function of the population just enough to offset the impairment caused by the chemical. Everyone breathes a sigh of relief; the current level of cognitive capacity is maintained. But, at a later time, the chemical begins to vanish from the water. If we do nothing, cognitive capacity will rise above its original level. So should we do something to reverse the effect of the somatic gene therapy? If not, then current opposition to cognitive enhancement likely stems more from status quo bias than from any coherent moral concern.
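Here, too, the logic can be made explicit. The following sketch is purely illustrative (the function and its parameter names are mine, not from the paper); it simply encodes the pattern of judgments the test is probing for.

```python
def double_reversal_test(counterbalance_good, undo_counterbalance_good):
    """Run the Double Reversal Test.

    counterbalance_good: True if intervening to offset the natural
        event (preserving the current value) is judged good.
    undo_counterbalance_good: True if, once the natural event
        reverses itself, undoing the earlier intervention is also
        judged good.
    """
    if counterbalance_good and not undo_counterbalance_good:
        # We intervened to protect the current value, yet refuse a
        # second intervention to restore it. Our attachment tracks
        # the default (no-action) position, not the value itself.
        return "opposition likely stems from status quo bias"
    return "judgments consistent with a principled attachment"


# Water-supply scenario: gene therapy to offset the chemical is
# welcomed, but nobody wants to undo it once the chemical vanishes.
print(double_reversal_test(counterbalance_good=True,
                           undo_counterbalance_good=False))
```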

The Double Reversal Test is useful because it helps to disentangle two distinct conceptions of the status quo:

The Average Value Conception: In which the status quo is viewed as the current average value of the parameter in question.
The Default Position Conception: In which the status quo is viewed as the value of the parameter if no actions are taken.

Allegiance to the current average value might be ethically justified: if we are willing to intervene to counterbalance the natural event, then perhaps we have some principled reason to think the current average is optimal. But if we do not think it necessary to counterbalance the original intervention once the natural event reverses itself, then we are switching to a default-position conception of the status quo. Switching in this manner suggests our attachment to the current set of values is unprincipled. After all, if we are willing to take on the risk and transition costs of counterbalancing the natural event, but unwilling to incur comparable costs to counterbalance its subsequent reversal, then what is our opposition really based on?

In sum, then, Bostrom and Ord’s reversal tests are useful heuristics for ethical policy-making. The basic Reversal Test is useful because it shifts the burden of proof onto those who defend the status quo, and the Double Reversal Test is useful because it allows us to see more clearly whether an attachment to the status quo is principled or not.
