[The following is the text of a talk I delivered at the World Summit AI on the 10th October 2019. The talk is essentially a nugget taken from my new book Automation and Utopia. It's not an excerpt per se, but does look at one of the key arguments I make in the book. You can listen to the talk using the plugin above or download it here.]
The science fiction author Arthur C. Clarke once formulated three “laws” for thinking about the future. The third law states that “any sufficiently advanced technology is indistinguishable from magic”. The idea, I take it, is that if someone from the Paleolithic was transported to the modern world, they would be amazed by what we have achieved. Supercomputers in our pockets; machines to fly us from one side of the planet to another in less than a day; vaccines and antibiotics to cure diseases that used to kill most people in childhood. To them, these would be truly magical times.
It’s ironic then that many people alive today don’t see it that way. They see a world of materialism and reductionism. They think we have too much knowledge and control — that through technology and science we have made the world a less magical place. Well, I am here to reassure these people. One of the things AI will do is re-enchant the world and kickstart a new era of techno-superstition. If not for everyone, then at least for most people who have to work with AI on a daily basis. The catch, however, is that this is not necessarily a good thing. In fact, it is something we should worry about.
Let me explain by way of an analogy. In the late 1940s, the behaviorist psychologist B. F. Skinner — famous for his experiments on animal learning — got a bunch of pigeons and put them into separate boxes. Now, if you know anything about Skinner you’ll know he had a penchant for this kind of thing. He seems to have spent his adult life torturing pigeons in boxes. Each box had a window through which a food reward would be presented to the bird. Inside the box were different switches that the pigeons could press with their beaks. Ordinarily, Skinner would set up experiments like this in such a way that pressing a particular sequence of switches would trigger the release of the food. But for this particular experiment he decided to do something different. He decided to present the food at random intervals, completely unrelated to the pressing of the switches. He wanted to see what the pigeons would do as a result.
The findings were remarkable. Instead of sitting idly by and waiting patiently for their food to arrive, the pigeons took matters into their own hands. They flapped their wings repeatedly, they danced around in circles, they hopped on one foot, convinced that their actions had something to do with the presentation of the food reward. Skinner and his colleagues likened what the pigeons were doing to the ‘rain dances’ performed by various tribes around the world: they were engaging in superstitious behaviours to control an unpredictable and chaotic environment.
It’s important that we think about this situation from the pigeons’ perspective. Inside the Skinner box, they find themselves in an unfamiliar world that is deeply opaque to them. Their usual foraging tactics and strategies don’t work. Things happen to them, food gets presented, but they don’t really understand why. They cannot cope with the uncertainty; their brains rush to fill the gap and create the illusion of control.
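To see how such a ritual can take hold, here is a minimal sketch in Python (my own toy model, not Skinner’s actual procedure; the action names and numbers are invented for illustration). The learner credits whatever it happened to be doing when a reward arrives, even though the reward is delivered completely at random:

```python
import random

# Purely illustrative sketch of non-contingent reinforcement.
ACTIONS = ["flap", "turn", "hop", "peck_switch"]

def run(trials=10_000, reward_prob=0.05, seed=0):
    rng = random.Random(seed)
    # The learner keeps a "strength" for each action and picks actions
    # in proportion to those strengths.
    strength = {a: 1.0 for a in ACTIONS}
    for _ in range(trials):
        action = rng.choices(ACTIONS, weights=[strength[a] for a in ACTIONS])[0]
        # The reward is entirely non-contingent: it ignores the action taken.
        if rng.random() < reward_prob:
            # Credit goes to whatever the learner happened to be doing --
            # the seed of a "superstition".
            strength[action] += 1.0
    return strength

print(run())
```

Run it a few times and one arbitrary action tends to dominate: an early chance pairing of action and reward makes that action more likely, which makes it more likely to coincide with the next random reward, and so on.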
Now what I want to argue here is that modern workers, and indeed all of us, in an environment suffused with AI, can end up sharing the predicament of Skinner’s pigeons. We can end up working inside boxes, fed information and stimuli by artificial intelligence. And inside these boxes, stuff can happen to us, work can get done, but we are not quite sure if or how our actions make a difference. We end up resorting to odd superstitions and rituals to make sense of it all and give ourselves the illusion of control. One thing I worry about, in particular, is that much of the current drive for transparent or explainable AI will reinforce this phenomenon.
This might sound far-fetched, but it’s not. There has been a lot of talk in recent years about the ‘black box’ nature of many AI systems — for example, the machine learning systems used to support risk assessments in bureaucratic, legal and financial settings. These systems all work in the same way. Data about human behaviour gets fed into them, and they spit out risk scores and recommendations to human decision-makers. The exact rationale for those risk scores — i.e. the logic the systems use — is often hidden from view. Sometimes this is for reasons intrinsic to the coding of the algorithm; other times it is because the logic is deliberately concealed, or because people lack the time, inclination or capacity to decode the system.
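As a rough illustration of that pattern, here is a toy sketch in Python (not any real vendor’s system; the data, model choice and threshold are all invented for the example). The point is simply the shape of the pipeline: behavioural data goes in, and all the human in the loop sees is a score and a recommendation:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Invented behavioural data: 1000 past cases, 4 numeric features
# (e.g. payment history, utilisation, ...), with a synthetic outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = (X @ np.array([0.8, -0.5, 0.3, 0.1]) + rng.normal(scale=0.5, size=1000)) > 0

# The "black box": a fitted model whose internal logic is not shown to users.
model = GradientBoostingClassifier().fit(X, y)

def recommend(applicant_features, threshold=0.7):
    """What the human in the loop actually sees: a score and a label,
    not the reasoning that produced them."""
    score = model.predict_proba([applicant_features])[0, 1]
    return {"risk_score": round(float(score), 2),
            "recommendation": "deny" if score >= threshold else "approve"}

print(recommend(X[0]))
```

Nothing in `recommend()` exposes why a given applicant scored the way they did; that logic lives inside the fitted model.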
The metaphor of the black box, useful though it is, is misleading in one crucial respect: it assumes that the AI is inside the box and we are the ones trying to look in from the outside. But increasingly this is not the case. Increasingly, it is we who are trapped inside the box, being sent signals and nudges by the AI, and not entirely sure what is happening outside.
Consider the way credit-scoring algorithms work. Often, neither the decision-maker (the human in the loop) nor the person affected knows why a particular score was given. The systems are difficult to decode and often deliberately concealed to prevent gaming. Nevertheless, the impact of these systems on human behaviour is profound. The algorithm constructs a game in which humans have to act within the parameters it sets in order to get a good score. There are many websites dedicated to helping people reverse engineer these systems, often giving dubious advice about the behaviours and rituals you must follow to improve your score. If you follow this advice, it is not too much of a stretch to say that you end up like one of Skinner’s pigeons, flapping your wings to maintain some illusion of control.
Some of you might say that this is an overstatement. The opaque nature of AI is a well-known problem and there are now a variety of technical proposals out there for making it less opaque and more “explainable” [some of which have been discussed here today]. These technical proposals have been accompanied by increased legal safeguards that mandate greater transparency. But we have to ask ourselves a question: will these solutions really work? Will they help ordinary people to see outside the box and retain some meaningful control and understanding of what is happening to them?
A recent experiment by Ben Green and Yiling Chen from Harvard tried to answer these questions. It looked at how human decision-makers interact with risk assessment algorithms in criminal justice and finance (specifically, in making decisions about the pretrial release of defendants and the approval of loan applications). Green and Chen created their own risk assessment systems, based on some of the leading commercially available models. They then got a group of experimental subjects (recruited via Amazon’s Mechanical Turk) to use these algorithms to make decisions under a number of different conditions. I won’t go through all the conditions here, but I will describe the four most important. In the first condition, the experimental subjects were just given the raw score provided by the algorithm and asked to make a decision on foot of this; in the second, they were asked to give their own prediction initially and then update it after being given the algorithm’s prediction; in the third, they were given the algorithm’s score, along with an explanation of how that score was derived, and asked to make a choice; and in the fourth, they were given the opportunity to learn how accurate the algorithm was, based on real-world results (did someone default on their loan or not; did they show up to their trial or not). The question was: how would the humans react to these different scenarios? Would giving them more information improve the accuracy, reliability and fairness of their decision-making?
The findings were dispiriting. Green and Chen found that using algorithms did improve the overall accuracy of decision-making across all conditions, but this was not because adding information and explanations enabled the humans to play a more meaningful role in the process. On the contrary, adding more information often made the human interaction with the algorithm worse. When given the opportunity to learn from the real-world outcomes, the humans became overconfident in their own judgments, more biased, and less accurate overall. When given explanations, they could maintain accuracy, but only to the extent that they deferred more to the algorithm. In short, the more transparent the system seemed to the workers, the more they either made outcomes worse or limited their own agency.
It is important not to extrapolate too much from one study, but the findings here are consistent with what has been found in other cases of automation in the workplace: humans are often the weak link in the chain. They need to be kept in check. This suggests that if we want to reap the benefits of AI and automation, we may have to create an environment much like that of the Skinner box, one in which humans can flap their wings, convinced they are making a difference, but prevented from doing any real damage. This is the enchanted world of techno-superstition: a world in which we adopt odd rituals and habits (explainable AI, fair AI, etc.) to create an illusion of control.
Now, the original title of my talk promised five reasons for pessimism about AI in the workplace. But what we have here is really one big reason that breaks down into five sub-reasons. Let me explain what I mean. The problem of techno-superstition stems from two related problems: (i) a lack of understanding or knowledge of how the world (in this case the AI system) works; and (ii) the illusion of control over that system.
These two problems combine into a third problem: the erosion of the possibility of achievement. One reason why we work is so that we can achieve certain outcomes. But when we lack understanding and control, our sense of achievement is undermined. We achieve things when we use our reason to overcome obstacles and solve problems in the real world. Some people might argue that a human collaborating with an AI system to produce some change in the world is achieving something through the combination of their efforts. But this is only true if the human plays some significant role in the collaboration. If humans cannot meaningfully make a difference to the success of the AI, or accurately calibrate their behaviour to produce better outcomes in tandem with it, then the pathway to achievement is blocked. This seems to be what happens, even when we try to make the systems more transparent.
Related to this is the fourth problem: that in order to make AI systems work effectively with humans, the designers and manufacturers have to control human attention and behaviour in a way that undermines human autonomy. Humans cannot be given free rein inside the box. They have to be guided, nudged, manipulated and possibly even coerced, to do the right thing. Explanations have to be packaged in a way that prevents the humans from undermining the accuracy, reliability and fairness of the overall system. This, of course, is not unusual. Workplaces are always designed with a view to controlling and incentivising behaviour, but AI enables a rapidly updating and highly dynamic form of behavioural control. The traditional human forms of resistance to outside control cannot easily cope with this new reality.
This all then culminates in the fifth and final problem: the pervasive use of AI in the workplace (and society more generally) undermines human agency. Instead of being the active captains of our fates, we become the passive recipients of technological benefits. This is a tragedy because we have built so much of our civilisation and sense of self-worth on the celebration of agency. We are supposed to be agents of change, responsible to ourselves and to one another for what happens in the world around us. This is why we value the work we do and why we crave the illusion of control. What happens if agency can no longer be sustained?
As per usual, I have left the solutions to the very end — to the point in the talk where they cannot be fully fleshed out and where I cannot reasonably be criticised for failing to do so — but it seems to me that we face two fundamental choices when it comes to addressing techno-superstition. The first is to tinker with what is presented to us inside the box, i.e. to add more bells and whistles to our algorithms, more levers and switches. These will give humans either genuine understanding and control over the systems or the illusion of understanding and control. The problem with the former is that it frequently involves tradeoffs or compromises to the system’s efficacy; the problem with the latter is that it involves greater insults to the agency of the humans working inside the box. The second choice is to stop flapping our wings and get out of the box altogether: leave the machines to do what they are best at while we do something else. Increasingly, I have come to think we should do the latter; to do so would acknowledge the truly liberating power of AI. This is the argument I develop further in my book Automation and Utopia.
Thank you for your attention.