
Wednesday, February 12, 2014

What's the case for sousveillance? (Part Two)


(Part One)

This series of blog posts is looking at arguments in favour of sousveillance. In particular, it is looking at the arguments proffered by one of the pioneers and foremost advocates of sousveillance: Steve Mann. The arguments in question are set forth in a pair of recent papers, one written by Mann himself, the other with the help of co-author Mir Adnan Ali.

Part one clarified what was meant by the term “sousveillance”, and considered an initial economic argument in its favour. To briefly recap, “sousveillance” refers to the general use of veillance technologies (i.e. technologies that can capture and record data about other people) by persons who are not in authority. This is to be contrasted with “surveillance” which is explicitly restricted to the use of veillance technologies by authorities. The initial economic argument for sousveillance was based on the notion that it could smooth the path to efficient economic exchanges by minimising the risk involved in such exchanges.

As it happens, this economic argument is really the main argument offered by Ali and Mann. They simply restate it in a couple of different ways. This is not to denigrate their efforts. Restating or rephrasing an argument can often be highly beneficial by drawing attention to different qualities or features of the argument. One of the goals of today’s post is to see whether this holds true in the case of Ali and Mann’s argument. This requires us to look at two further economic arguments for sousveillance. The first is based on the claim that sousveillance technologies can reduce information asymmetries and thereby reduce inefficiencies in economic markets. The second is based on the claim that sousveillance technologies minimise the scope for opportunism in economic exchanges (as often occurs in principal-agent transactions).

Additionally, this post will look at one other argument defended by Ali and Mann. This argument shifts attention away from economic exchanges and onto exchanges between ordinary citizens and government bureaucracies. The claim is that sousveillance helps to correct for the inequalities of power that are common in such exchanges.

Are these arguments any good? Do they make a persuasive case for sousveillance? Let’s see.


1. Sousveillance and Information Asymmetries
Friedrich Hayek’s famous argument for the free market (and against central planning) was based on the notion that the free market was an information processing and signalling system par excellence. Every society needs to make decisions about what goods should be made and services provided. It’s difficult for a central planner to make those decisions in an efficient and timely manner. They have to collate information about the preferences of all the people in the society, they have to work out the costs of the various goods and services, and they then have to implement production plans and supply schedules. The system is slow and cumbersome, and those within the planning administrations are often improperly incentivised.

It’s much easier for a distributed network of producers and suppliers to do this by responding to changes on the ground, and adjusting their production and supply to match local demand. They will be facilitated in doing this by the prices that are charged for goods and services on various markets. The prices are a signal, telling them which goods and services are worth providing, and which aren’t worth the hassle. By responding to subtle fluctuations in prices, this distributed network of agents will be able to coordinate on a schedule of production and supply that is maximally efficient.

The Hayekian argument turns on the value of the price signal. Provided that all relevant information is reflected in price signals, the free market system should indeed be the most efficient. The problem is that market prices often fail to reflect all the relevant information. This happens for a variety of reasons. For example, businesses might fail to incorporate long-term environmental costs into their short-term production costs because those costs are borne by society as a whole, not the producer. This has a knock-on effect on the market price. Similarly, and more importantly for present purposes, certain markets are infected by information asymmetries between buyers and sellers. These arise when one of the parties to an economic exchange has more exchange-relevant information than the other. This gives rise to a number of problems.

A classic example comes from George Akerlof’s paper “The Market for Lemons”. In it, Akerlof suggests that the market for second-hand cars is characterised by information asymmetry. The person selling the car knows far more about the quality of the car than the buyer. This puts the buyer at a disadvantage, which will be reflected in the price s/he is willing to offer for the car. The net effect of this information asymmetry is that sellers with good second-hand cars are driven out of the market — they won’t be offered a price they are willing to accept — and hence bad second-hand cars (“lemons”) predominate. This is all down to the fact that the price signal cannot incorporate all exchange-relevant information in this particular market.
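To make Akerlof’s unravelling mechanism a little more concrete, here is a minimal sketch in Python. The numbers are hypothetical and chosen purely for illustration (they come from neither Akerlof nor Ali and Mann): car quality is uniform up to 2000, buyers value a car at 1.5 times its quality, and buyers can only observe the average quality of the cars actually offered.

```python
# A minimal, illustrative sketch of Akerlof-style adverse selection.
# Hypothetical assumptions: quality q is uniform on [0, 2000], sellers only
# sell if the offered price covers the quality of their car, and buyers value
# a car at 1.5 * q but can only judge the AVERAGE quality of cars on offer.

def average_offered_quality(price, max_quality=2000):
    """At price p, only cars with quality <= p are offered, so the average
    quality of cars on the market is p / 2 (uniform distribution)."""
    return min(price, max_quality) / 2

def buyers_offer(price):
    """Buyers are willing to pay 1.5x the average quality they expect."""
    return 1.5 * average_offered_quality(price)

# Iterate: as buyers revise their offer down, the better cars exit the market,
# which drags the average quality (and hence the next offer) down further.
price = 2000.0
for round_number in range(1, 16):
    price = buyers_offer(price)
    print(f"round {round_number}: price buyers will offer = {price:.2f}")

# The offered price shrinks towards zero: good cars are driven out and only
# "lemons" would remain, which is the unravelling described in the text.
```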

The second-hand car example is an illustration of one of two problems with information asymmetries (and it should be noted that it’s not clear that the second-hand car market does exemplify the problem discussed by Akerlof). Since these problems are central to Ali and Mann’s argument, it is worth defining them a little more precisely:

Adverse Selection: This is the problem at the heart of Akerlof’s market for lemons. It refers to the notion that “bad” customers or products tend to predominate in certain markets, due to information asymmetries. Another classic illustration of this is the customer self-selection effect in markets for insurance. It is sometimes felt that those who demand insurance (e.g. health insurance) are those who know that their lifestyle is such that they are more likely to need it. But, of course, it is difficult for an insurance company to be better informed than the customer about such lifestyles. So the insurance companies err on the side of caution, and charge higher premiums to compensate for the potential risk. This means that low-risk customers are put at a disadvantage: they can’t credibly distinguish themselves from the high-risk customers.
Moral Hazard: This is a problem that arises when the costs of certain activities are not borne by the agents who carry them out. It is fuelled by information asymmetries between the party bearing the costs and the agent carrying out the acts (the latter has more information about their activities than the former). Insurance is again a classic example of a transaction involving moral hazard: the insured knows more about what they are going to do than the insurer. The bailout of major financial institutions post-2008 was also believed to give rise to moral hazard, the reason being that “too big to fail” institutions could now engage in high-risk activities, safe in the knowledge that if things got too bad, the government would come to their rescue.

Both problems devalue price signals by altering prices from what they would have been had the transactions been undertaken in conditions of perfect information, and by incentivising undesirable behaviour.
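To illustrate the moral hazard side with a toy calculation of my own (the figures are invented, not Ali and Mann’s): suppose an agent can take a precaution costing 50 that cuts the probability of a 1,000 loss from 20% to 5%. Once the agent is fully insured and the insurer cannot observe the precaution, the incentive to take it disappears.

```python
# Toy illustration of moral hazard with invented numbers: full insurance
# removes the insured party's incentive to take a cheap precaution, because
# the insurer cannot observe whether the precaution was taken.

LOSS = 1000           # size of the loss if the bad event occurs
PRECAUTION_COST = 50  # cost of taking care
P_WITH_CARE = 0.05    # probability of the loss if the precaution is taken
P_WITHOUT_CARE = 0.20 # probability of the loss if it is not

def expected_cost_to_agent(takes_care, insured):
    """Expected cost borne by the agent, given their choice and insurance."""
    loss_borne_by_agent = 0 if insured else LOSS
    p_loss = P_WITH_CARE if takes_care else P_WITHOUT_CARE
    return (PRECAUTION_COST if takes_care else 0) + p_loss * loss_borne_by_agent

for insured in (False, True):
    takes_care = min((True, False),
                     key=lambda care: expected_cost_to_agent(care, insured))
    insurer_payout = (P_WITH_CARE if takes_care else P_WITHOUT_CARE) * LOSS if insured else 0
    print(f"insured={insured}: agent takes the precaution? {takes_care}; "
          f"insurer's expected payout = {insurer_payout:.0f}")

# Uninsured, care costs 50 and saves 150 in expected losses, so it is taken.
# Fully insured, the loss falls on the insurer, the precaution is skipped,
# and the insurer faces a 200 expected payout instead of the 50 it would
# face if the precaution were still taken.
```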

Now, you may well be wondering: what has all this got to do with sousveillance? Well, Ali and Mann argue that sousveillance can help to solve the problems of adverse selection and moral hazard by minimising information asymmetries. In other words, they argue (numbering continuing from part one):


  • (4) The problems of adverse selection and moral hazard reduce the efficiency of economic exchanges.
  • (5) Sousveillance can help to minimise the problems of adverse selection and moral hazard.
  • (6) Therefore, sousveillance can increase the efficiency of economic exchanges.


The key to this argument is premise (5). Ali and Mann make the case in favour of it in two parts. First, they note that adverse selection can be minimised through pre-transaction screening of “bad” customers or sellers, and through credible signalling. Sousveillance helps to facilitate both. For instance, an insurance customer who has carefully documented his life up until the point that he needs insurance can credibly signal to the insurance company that he is low-risk (if he is); or the owner of the second-hand car can provide meticulous records of his personal use of the car to demonstrate its quality. The same is true for moral hazard. In that case, the problem really has to do with post-transaction monitoring of the active party by the party who bears the costs. Sousveillance can, of course, facilitate such post-transaction monitoring.
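The screening/signalling point can also be given a quick back-of-the-envelope treatment. The figures below are hypothetical (a potential loss of 10,000, claim probabilities of 5% and 25%, and an even split of customer types) and are mine rather than Ali and Mann’s; they simply show why a customer who can credibly document low-risk behaviour has something to gain from doing so.

```python
# Back-of-the-envelope sketch (invented figures) of screening via credible
# signalling: a customer who can verifiably document low-risk behaviour
# escapes the pooled premium charged when types cannot be told apart.

LOSS = 10_000
P_LOW, P_HIGH = 0.05, 0.25  # annual claim probabilities for the two types
SHARE_LOW = 0.5             # fraction of low-risk customers in the pool

# With no way to distinguish customers, the insurer prices everyone at the
# pooled expected loss (loadings and profit margins ignored here).
pooled_premium = LOSS * (SHARE_LOW * P_LOW + (1 - SHARE_LOW) * P_HIGH)

# With a verified record of low-risk behaviour, a low-risk customer can be
# priced on their own expected loss instead.
verified_low_risk_premium = LOSS * P_LOW

print(f"pooled premium:               {pooled_premium:.0f}")              # 1500
print(f"premium with verified record: {verified_low_risk_premium:.0f}")   # 500

# The 1000 gap is the cross-subsidy low-risk customers pay under asymmetric
# information, and hence the economic incentive to submit to monitoring.
```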

Here’s my problem with all of this: Although I have no doubt that constant monitoring of activities with veillance technologies (both pre- and post-transaction) could reduce (some) information asymmetries, I find it hard to see how this wouldn’t simply give rise to sur-veillance of a highly coercive and insidious nature, rather than sousveillance of a positive and autonomy-enhancing nature. As you’ll recall from part one, surveillance is the use of veillance technologies by de facto authorities to monitor ordinary people; sousveillance is when everybody gets to use such technologies. Whenever there are inequalities of power, there is a potential de facto authority. If those authorities can insist upon monitoring our activities, it seems to me that we have the conditions for liberty-undermining surveillance. I suspect this is what would happen in the case of things like insurance contracts.

Consider, in the first instance, that the signalling powers of sousveillance technologies could indeed be quite autonomy-enhancing. The early adopters could credibly signal that they are low-risk customers, and reap the benefits of reduced insurance premiums. But this could easily set up a slippery slope to the compulsory use of such technologies. After all, given the benefits, why wouldn’t the insurance company insist upon monitoring every customer’s waking move before agreeing to insure them? And since the exchange between insurance providers and customers is characterised by inequalities of bargaining power (e.g. we are often legally obliged to buy insurance), it is hard to see why this wouldn’t amount to a kind of coercive surveillance. You would be dominated by the company: subtly encouraged to bring your behaviour in line with its preferences, whatever those preferences happen to be. And since similar inequalities of bargaining power are present in other markets, I think this is a general problem for the economic case for sousveillance. (There are possible counterarguments to this domination-style argument. I’d be interested in hearing about them in the comments section.)


2. Sousveillance, Opportunism and Bargaining with Bureaucracies
Now we’ve spent a long time looking at the information asymmetry example. This is because it is indicative of Ali and Mann’s style of argument, and because it gave me the opportunity to raise one of my main concerns about their claims. Fortunately, this means that the remaining arguments can be dealt with in a more cursory manner. They are simply variations on a theme.

The next argument Ali and Mann offer is based on an analysis of economic opportunism. This is the idea that certain economic transactions create conditions in which parties can engage in opportunistic, self-serving, and socially costly behaviour. Ali and Mann identify three types of opportunism, each of which they claim can be combatted by sousveillance:

First-degree Opportunism: This arises from the imperfect enforceability of contracts, i.e. people reneging on their contractual promises. Opportunism of this nature can be legally remedied, provided that the breach of contract can be proved. Obviously, the idea is that sousveillance can facilitate this. It can also allow contracting parties to offer credible gestures that encourage others to enter into otherwise risky contracts. Ali and Mann give the example of a painter who agrees that his work can be sousveilled by those availing of his service, so that they can see that he followed their instructions.
Second-degree Opportunism: This arises from unanticipated eventualities in long-term contracts, e.g. employment contracts. The idea is that no contract can cover every possible eventuality. If unanticipated eventualities come to pass, one or more of the parties could exploit the gaps and ambiguities left by the contract’s failure to cover those eventualities. Sousveillance can apparently minimise this in two ways. First, by providing a perfect record of the original negotiations, and hence a basis for arguing about implicit understandings. Second, by providing a cheap way in which the ongoing execution of the contract can be measured and enforced.
Third-degree Opportunism: This arises from discretions in relational contracts. The classic example here is the principal-agent contract, where a principal hires an agent to perform a certain task, and grants the agent discretionary powers in carrying out that task. The agent (partly due to information asymmetries; partly due to misaligned incentives) can sometimes exploit those discretionary powers. Again, sousveillance comes to the rescue by providing the principal with the means of monitoring the use of those discretionary powers.

I have three brief objections to this. First, on the notion that records of negotiations could help to minimise opportunism arising from unforeseen eventualities, I worry that Ali and Mann are being slightly naive about the complexity of negotiations and the vagueness and ambiguity of language. We often have meticulous records of the contexts in which various legal instruments (e.g. constitutions and statutes) are drafted, but that doesn’t eliminate the uncertainty, or reduce the scope for opportunistic interpretations of those instruments. I see no reason why sousveillance would change things in this regard.

Second, I return to my earlier worry about liberty-undermining surveillance. It may be true that an enterprising sole trader — like a painter — can take advantage of veillance technologies and make himself a more attractive commodity. But that is to ignore other contexts in which there are inequalities of bargaining power. For example, suppose (as is already the case in some industries) that constant veillance becomes a compulsory part of all employment contracts. In that case, the technology no longer improves the weaker party’s bargaining position; it simply extends the employer’s capacity for coercive monitoring.

Third, there is the possibility that the monitoring of parties to some economic exchanges is counterproductive. I’m not sure whether it is an entirely credible theory of worker motivation, but I’ll use the example anyway: Dan Pink’s book Drive argues that carrot-and-stick incentives for workers are often counterproductive, particularly when it comes to non-routine, creative, and problem-solving forms of work. In those types of work, what is needed is not the threat of punishment or the allure of reward, but rather a sense of autonomy, mastery and purpose. If we follow Ali and Mann’s logic, however, sousveillance is advantageous precisely because it provides a monitoring tool that makes threats of punishment and promises of reward more credible. That is to say: sousveillance only really helps to enhance the system of carrot-and-stick incentives. It may actually detract from creating a sense of autonomy, mastery and purpose, as workers may feel they are not being trusted.

Turning then to the last of Ali and Mann’s arguments. This one has to do with the positive impact of sousveillance on negotiations with bureaucracies. The mechanics of the argument should be familiar to us by now. The claim is that bureaucratic decision-making can be impersonal, and often based on incomplete or imperfect information. This induces feelings of terror and helplessness among those affected by bureaucratic decision-making (think Kafka!). Sousveillance can improve things. Those of us forced to deal with bureaucracies will be able to provide full documentary evidence about ourselves and our actions. This will put us on a firmer basis when it comes to challenging the decisions that affect our lives.

I think some of the problems mentioned above apply to this argument as well. There is also something unrealistic about it all. Our ability to challenge bureaucratic decision-making will depend largely on whether such decisions are rational (not arbitrary) and whether we can know their rational basis. Ali and Mann suggest that Freedom of Information protocols will help us in this regard by giving us access to the internal regulations and guidelines of bureaucracies. But, of course, mere access to information is not always helpful, as anyone who has dealt with FOI documents will attest. There is still the fact that the internal regulations might be exceedingly complex, couched in vague and ambiguous language, or replete with discretionary powers.


3. Conclusion
So that brings us to the end of this part of my series on Mann’s arguments for sousveillance. The next post will be more concerned with the general concept of sousveillance and with different types of veillance society, so it’s worth briefly recapping the arguments discussed so far.

As we have seen, Ali and Mann’s primary case in favour of sousveillance is based on its potential economic advantages. They present this argument in three different ways: the first is a general argument about trust and the risks of economic exchange; the second concerns information asymmetries; and the third concerns opportunism. They then add to this economic case the claim that sousveillance will help to reduce feelings of terror and helplessness in the face of bureaucratic decision-making.

In each case, I’ve suggested that Ali and Mann may have overstated the arguments for sousveillance. Although there may be some benefits, it’s possible that several of the examples discussed by Ali and Mann would give rise to liberty-undermining surveillance, rather than autonomy-enhancing sousveillance. This is due to their underappreciation of inequalities of bargaining power in economic exchanges. Furthermore, although sousveillance may encourage some kinds of good behaviour, its widespread use may be counterproductive. This is because many people might perceive it as an affront to their autonomy, or a sign of a lack of trust.
