
Thursday, November 10, 2016

Technological Unemployment and the Search for Meaning: Should we retreat from reality?


One of the slides from my talk showing various books on the topic of technological unemployment


[Note: what follows is, roughly, the text of a keynote talk that I delivered at the W-Jax conference in Munich on the 8th November 2016 — an evening that will now live in infamy thanks to the election of Donald Trump. I hesitated in posting this as a result of that momentous event. I had intended the talk to be a somewhat provocative thinkpiece, not a fully worked-out argument, and its discussion of technological unemployment and utopianism seemed inapposite in light of what happened. Upon reflection, however, I think that technological unemployment, workforce polarisation, and rising inequality can explain, at least in part, the rise of Trump and similar anti-establishment political movements, and so the issues raised in the first two thirds of this talk are important at this moment in time. Now, more than ever, we need politics and politicians who are willing to address the changing nature of work and who can articulate a hopeful and optimistic vision of the future. I’ve added some quick reflections on this toward the end of the post. These did not appear in the original talk.]


I want to start with a story. The story is about Danny Izquierdo. Danny is a young man, in his early 20s, from Maryland in the U.S. He is one of a growing number of young men who spend large chunks of their time playing video games. Danny has a degree. He has worked at a few odd jobs but finds them frustrating. He prefers games: He is attracted to the sense of community and meritocracy he finds in them. When interviewed by the Washington Post reporter Ana Swanson (in September of this year) he had this to say:

When I play a game, I know if I have a few hours I will be rewarded… With a job, it’s always been up in the air with the amount of work I put in and the reward. 
(Swanson 2016)

Now, I don’t want to dwell too much on Danny. He is just one man chosen, probably unfairly, to be emblematic of a more general problem: the retreat of young men from the workforce into virtual worlds. This is a trend that is worrying a number of social commentators. The psychologist Philip Zimbardo (famous for his work on the Stanford Prison Experiment) and his co-author Nikita Coulombe wrote an entire book about it called Man Disconnected. They think modern society is failing men. They worry about excessive time being spent in virtual worlds, including both video games and online pornography. They think this retreat from reality is wreaking havoc on young men’s cognitive development, concentration and social orientation.

It’s also something worrying Nicholas Eberstadt (of the American Enterprise Institute). He published a widely-discussed book earlier this year entitled Men Without Work, which documented the startling withdrawal of men from the US workforce. By his estimation, some 7 million men aged between 25 and 54 are ‘missing in action’. It’s not that they can’t find work. It’s that they aren’t even looking. He laments the effects this is having on the economy and on social order, claiming that:

The male exodus from work also undermines the traditional family dynamic, casting men into the role of dependents and encouraging sloth, idleness and vices perhaps more insidious… 
(Eberstadt 2016)

And it’s not just an American phenomenon. High youth unemployment rates are common across many European countries. Indeed, they are startlingly high in countries like Greece, Spain and Ireland. Furthermore, people are withdrawing from reality across the world. Screen time and video game time are on the increase. The apotheosis of this new paradigm comes, perhaps, in the shape of the Japanese hikikomori, a new breed of hermit-like people, uninterested in work, uninterested in sex, enclosed within a reality of their own creation.

And yet, for all their hand-wringing, there is something missing from the analysis of these cultural critics.* Recent research from Aguiar, Bils, Charles and Hurst suggests that although young men are working less and playing more, they are also happier. Indeed, the self-reported happiness of those in their 20s and early 30s has risen from the low 80s to the high 80s (in percentage terms) over the first decade and a half of the new millennium. What’s more, the fondness for video games is not limited to men. Research from Andrew Przybylski found that, in a sample of approximately 5,000 children aged between 10 and 15, about 40% of boys and girls were playing between 1 and 3 hours of video games per day.

In this talk I want to ask two questions: is this retreat from reality in general, and from the world of work in particular, inevitable due to the rise of technological unemployment? And, if so, is it to be lamented or welcomed? I want to argue that it probably is inevitable and that it might be something to be welcomed. This is because our most plausible conceptions of utopian worlds presume the preeminence of the ludic life.

I’m going to make this argument in three stages. First, I’m going to look at the technological unemployment debate and give you all an idiot’s guide to the arguments being made by a number of economists and technologists about the future (or lack thereof) of work. I’ll argue that there will probably be much less work in the future and that it is only a matter of time before the robots come to take your job.

Second, I will argue that this increase in unemployment is going to kick off a crisis of meaning. For better or worse, people derive meaning, satisfaction and self-worth from their jobs. If their jobs are taken away from them, they may struggle to make sense of their place in the world. I will be subtle in making this argument. I will suggest that typical laments about the loss of work are misplaced: work is not that pleasant for many people. Nevertheless, I will argue that improvements in automation do threaten sources of meaning more generally. It is important that we, as a society, address this crisis.

Finally, I will argue that if we are to solve the crisis of meaning, our best hope may lie in the virtual world. This is because two of the most philosophically plausible theories of utopianism put games and virtual reality in the centre of the frame.

Before I do all that, however, I must issue a health warning. I’m a philosopher and theorist. I’m big on concepts and arguments. I like to precisely define terms and reduce popular debates to logical syllogisms. I’m going to be doing some of that in what follows, but I’ll probably be less formal and less precise than I typically would be if I was presenting to an academic audience. I appreciate that the goal of these keynotes is to provoke and entertain, not to lecture and bore, but if you do happen to find what I’m saying interesting, and you would like the more formal and precise version, can I suggest reading my blog Philosophical Disquisitions? There is plenty more there about the arguments and ideas that I present here this evening.


1. An Idiot’s Guide to the Technological Unemployment Debate
I’m sure you have noticed the hype about robots, automation and the future of work in recent years. A spate of books, reports and op-eds has been written predicting the rise of the robots and the demise of paid employment. To give you a flavour of this, here are just a handful of the books that happen to adorn my shelves and which have been published in the past five years. There are: Federico Pistono’s Robots Will Steal Your Job, But That’s OK; Brynjolfsson and McAfee’s The Second Machine Age; Martin Ford’s Rise of the Robots; Tyler Cowen’s Average is Over; Jerry Kaplan’s Humans Need Not Apply; Susskind and Susskind’s The Future of the Professions; Calum Chace’s The Economic Singularity; and Ryan Avent’s The Wealth of Humans.

While each of these books has their merits, and while you may have read some of them, I’m going to try and do you all a favour by condensing them down into an ‘idiot’s guide’. If you follow along for the next 5 minutes, you’ll be able to bluff your way through any conversation about this topic and impress your friends and colleagues with your logical rigour.

Let me start with some definitions (I did warn you that I like this kind of thing!):

Job: Any collection of tasks (physical, emotional, cognitive) performed in return for economic reward (or in the hope of receiving an economic reward)

Technological Unemployment: The widespread replacement of human task performers with machine task performers, resulting in many fewer jobs.


The ‘many fewer’ is deliberately vague. No believer in widespread technological unemployment thinks that machines will eliminate all jobs; they only think they will eliminate lots of jobs. How many they will eliminate is hard to say. Some people envisage a future in which only 10-20% of the adult population works for a living. If that happened, then we could definitely say that we have widespread technological unemployment.

So will it happen? What’s the argument in favour of it? In abstract terms, it looks a little something like this:


  • (1) If machines can perform more and more job-related tasks at a cheaper cost than human workers, there will be technological unemployment.
  • (2) Machines can perform more and more job-related tasks at a cheaper cost than human workers.
  • (3) Therefore, there will be technological unemployment.


The authors of the books mentioned above spend a lot of time defending the second premise of this argument. They often start by pointing to historical examples of widespread technological unemployment. Their favourite is the shift in agricultural employment in America over the course of the 20th Century. In 1900, approximately 40% of the American population was employed in agriculture. Back then, working the farm required a lot of human (and horse) powered labour. By the 2000s, only 2% of the American population worked in agriculture. Much of this change has been attributed to the efficiencies made possible by modern machinery (the changes have been similar around much of the developed world). Of course, this is just one historical example. But it provides proof of concept. The defenders of technological unemployment then shift to listing examples of current and nascent technologies that seem to be reducing (or to be on the cusp of reducing) the amount of human labour needed to keep the economy rolling. Examples include: robot fast-food workers; self-driving cars (set to displace 5 million jobs in the US); newer, more flexible and intelligent industrial robots like Baxter; Amazon’s Kiva robots (set to obviate the need for warehouse stockers and pickers); and machine learning systems like IBM’s Watson (set to displace doctors and diagnosticians, if you believe the hype).

And people are often willing to believe the hype. But economists then step in to deliver what seems to be a killer blow to the technological unemployment argument. They argue that proponents of that argument are guilty of two major fallacies — fallacies that first-year economics students could easily point out — the Luddite Fallacy and the Lump of Labour Fallacy. These fallacies effectively amount to the same thing: we have been here before and employment is still pretty high. People worried about the effect of machines on jobs over 200 years ago, in the early days of the Industrial Revolution. The Luddites smashed textile machinery in response to the automation of their skilled labour. But they were wrong to do so. The number of jobs that the economy creates is not fixed. There is no single ‘lump’ of labour to be divided up between machines and humans. We can live, happily, side by side. New technologies create new opportunities. Just think about all the new jobs that have been created by digital technologies, from social media marketer to computer programmer to online ads technician. The future is bright, even if it is going to be different.

Let’s set to one side the fact that the Luddite fallacy is not really a fallacy (when people lose their jobs to machines it isn’t easy for them to find new sources of employment, even if future generations make the shift). The appeal to both it and the lump of labour fallacy means that proponents of technological unemployment are obliged to explain why it is different this time. And they duly try to fulfil that obligation by issuing something I am going to call the G.A.S.P. response. This is a mnemonic you can use to remember the four factors to which they all appeal in claiming that it is different this time:


General Purpose: The technologies underlying the current machine age do not simply replace individual tasks; they change how work is done across the board. Furthermore, they are, or could be, general purpose technologies, ones that can be deployed across a range of employment contexts. That, at least, is the great hope of AI and machine learning. 
Accelerating Change: The current technologies are improving at an accelerating (exponential) rate. This gives rise to two distinct problems: (i) it makes it difficult to draw lessons from historical examples because those examples may be drawn from the relatively linear portion of an exponential growth curve; and (ii) it could make it difficult for workers to retrain to find new jobs because they are slower at improving than the machines.
Superstar Effect: Modern digital networks make it easy for highly skilled workers to capture most of the value within a particular market for goods and services. Why go to the second or third best supplier when global networks allow you to go to the best? Clear examples of this include Google, Facebook and Amazon. They dominate particular markets due to the power of global networks. This means that even when new employment opportunities are created, they will tend not to create many jobs.
Present Indicators: There are several present economic indicators that suggest that technology is having an impact on both the quantity and quality of work. Examples of these indicators include: (i) stagnant real wage growth; (ii) the decoupling of productivity and income; (iii) the polarisation effect (i.e. the hollowing out of middle skill jobs and the growth in low and high skill jobs); and (iv) the decline in the labour force participation rate (in the US) or the decline in real wages in countries where the participation rate remains fairly static (e.g. the UK). (This is probably the weakest of the four claims since there are other explanations for these indicators).


Maybe the G.A.S.P. response is incomplete. Maybe there is more to be said in favour of the mainstream view that employment will remain robust. I’m not going to get into the further details here. I just want to give you my take on the whole thing. I draw two major conclusions from the technological unemployment debate. The first is that it is only a matter of time: assuming there are no physical or logical roadblocks to creating general purpose machinery, it is only a matter of time before machines can replace all human workers. Whether that happens in 10, 50, 100 or 1,000 years’ time is purely academic (from my perspective): we need to think about a future without work. The second is that even if this doesn’t happen for a long time, it still seems likely that technology will have a profound impact on the quality and quantity of employment in the short to medium term. The polarisation effect is the clearest illustration of this: more workers are being forced into low-skill, precarious and poorly paid work as a result of the hollowing out of middle-skill jobs.


2. The Crisis of Meaning
So what are we going to do about this? Clearly these changes to employment will have a profound impact on society. We rely on work for income and we rely on our incomes to pay for the things we need to survive. Without an income our quality of life will be much reduced. Many technologists and futurists tout the Universal Basic Income as a solution to this problem. A UBI is a guaranteed minimum income, paid to all citizens within a given state or territory, that breaks the link between income and work.

The UBI really feels like an idea whose time has come. The Swiss had a referendum about introducing one earlier this year. They rejected it decisively, but several other countries are experimenting with the idea (Netherlands, Finland) and it certainly seems like it has taken root in the popular consciousness as the way to address the fallout from technological unemployment. You can see why too. For all its radical pretensions, the UBI is actually quite a conservative solution to the problem of mass unemployment. By continuing to pay people an income, the doyens of the capitalist class hope to prop up the consumer economy that has rewarded them so greatly.

But I think a UBI is, at best, a partial solution to the problem. The mistake is to assume that an income is the only benefit (or ‘good’) that we get from work. This simply isn’t true. As I mentioned earlier on, for better or worse, work is a major source of meaning, satisfaction and self-worth. The philosophers Anca Gheaus and Lisa Herzog wrote an interesting article about this earlier in the year. They argued that there are four non-income related goods of work:

Excellence: Work is a privileged forum for achieving the cognitive/physical mastery of some particular skill (e.g. programming). Work gives you the space and time needed to cultivate and develop mastery. Mastery is something that many find intrinsically valuable.
Contribution: Work provides the opportunity to contribute something of value (some good or service) to the society in which you live.
Community: Work is usually undertaken within organisations or in collaboration with other people. It allows you to exercise collective agency in the pursuit of some common aim.
Recognition: Work is a way to achieve social status, recognition and approbation.

Giving people a guaranteed income is not going to give them these non-income related goods. That’s a problem.

But let’s not look at work through overly rose-tinted lenses. Although Gheaus and Herzog may be right that work is a privileged context for achieving these four goods, their argument is flawed in one crucial respect. It is only a privileged context given the current economic necessity of work. If work were no longer necessary to make a living, we could find other ways to achieve mastery, contribution, community and recognition. It’s not like work is currently brilliant at allowing us to achieve these four goods. Work is, for many people, deeply unpleasant and deeply dispiriting. A Gallup global workplace survey in 2013, for example, found that only 13% of workers actually enjoy what they do. What’s more, to continue to force people to find a job in order to generate a sense of meaning and purpose in their lives would be torturous in a world of diminishing employment opportunities. It is likely to provoke anxiety and resentment.

Still, I think there is something to worry about when it comes to the withdrawal from work. I think there is a serious risk that rampant automation will rob us of the things we need to make life meaningful. Philosophers have particular ways of thinking about meaning. The classic view is that a meaningful life is characterised by the Good, the True and the Beautiful. That is to say, your life is meaningful to the extent that you can do moral good (make the world a better place), pursue truth (make contributions to knowledge), and create beauty (works of art, literature etc.). You might think that widespread automation would free you up to pursue the good, the true and the beautiful, but the great fear is that the benefits of automation won’t be limited to the purely economic domain. It may be that machines are better at solving moral problems than we are. In fact, many argue that this is already the case and that it is one reason why we should hand over control to machines (think of the debates over self-driving cars and algorithmic decision-making). It may be that machines are better at figuring out the truth. In fact, this already seems to be the case in certain areas of science, where the power of machine learning is being leveraged to generate new theories and ideas. It may never be that machines are better at creating art — though they can already create it — but we have to ask ourselves: is this enough?

The bottom line is that advanced machines can sever the link between what we do and what happens in the world around us. This might be okay if machines only sever the link in the economic domain, but there is reason to suspect they will sever other links too, including some that are essential to meaning. This suggests to me that technological unemployment could kick off a crisis of meaning.


3. Games as Utopia
And how do we solve this crisis? I am not wholly optimistic, but here is where I want to return to my original question about video games and the retreat from reality. It is easy to look upon this retreat with some concern. The real world is, after all, what seems to matter most. But maybe this is wrong. Maybe the rise of the machines gives us a chance to rethink what it means to live a good life. In particular, maybe it gives us a chance to think about what it would mean to create the best of all possible lives, right here on earth. I want to close by outlining two philosophical arguments to this effect. Both claim that video games and virtual realities provide plausible conceptions of utopia.

Now, utopia is a much-maligned concept. The word ‘utopia’ was first introduced into the English language by Thomas More in his 1516 book Utopia. Technically, ‘utopia’ translates from the Greek as ‘no place’, suggesting that More was making a somewhat satirical point: i.e. claiming that it was impossible to create the perfect world. Nevertheless, the word has taken on the meaning that is central to that book, namely that a utopia is the best of all possible worlds. People have tried to create utopias over the years. They have often failed, usually leaving much human misery and destruction in their wake. Nevertheless, there are two relatively recent (1970s) philosophical conceptions of utopia that I think are interesting, and both converge on the notion that a life of video games and virtual pursuits is the best of all possible lives.

The first comes from a book by Bernard Suits called The Grasshopper. This is an odd book. It is most famous for providing a definition of what it means to play a game. Suits defines a game as the ‘voluntary attempt to overcome unnecessary obstacles’. More precisely, he defines a game as anything that satisfies the following three conditions: (i) it has a prelusory goal, i.e. some end state or outcome that determines when the game is over and who has won; (ii) constitutive rules, i.e. rules that set up unnecessary obstacles between the player and the prelusory goal; and (iii) a lusory attitude, i.e. a willingness to accept the constitutive rules. Anyone here who is involved in game design might already be familiar with this definition. It was used quite extensively by Jane McGonigal in her 2011 book Reality is Broken, which is partly about the gamification of reality.

What is sometimes missed is that Suits’s book is not just about games. It is about utopia too. Specifically, it is about defending the claim that a life that consists of nothing but games is the best of all possible lives. Suits asks us to think about what it is we are trying to do with our new technologies. It seems like we are trying to get them to solve our problems (get us what we want and need) in the most efficient possible manner. So suppose this trend continues and we create a world of perfectly (or near perfectly) efficient machines. These machines can get us anything we want at the flick of a switch. You want a new house? You just have to tell the machine and it will build one for you. You don’t need to lift a finger. Suits argues that in such a world all human activities would be games. In other words, if we perfect technology, we will have nothing left to do but play lots of games. Think about it like this. Suppose you want to build a house in this future reality, but you don’t want the machine to do it for you. You want to do it the old-fashioned way and build it yourself. You are now playing a game: you are placing unnecessary obstacles between yourself and the goal you want to achieve (you could have just flicked the switch).

So if our goal is to perfect our technology (the logical endpoint of technological solutionism), it follows that our goal is to create a world in which games take centre stage.

You don’t like that argument? Here’s another one. This one comes from the philosopher Robert Nozick and his famous book Anarchy, State and Utopia. I say ‘famous’ because it has been taught to generations of philosophy students as a defence of the libertarian, minimal state. And although the book is primarily a defence of libertarianism, it is also partly about utopianism. Nozick presents one of the most interesting and novel takes on what it means to live in a utopian world. He starts by trying to operationalise the concept of utopia. He agrees that a utopia is the best of all possible worlds, but what does that mean in practice? He suggests the following:

Utopia: A world that is judged to be best by its members, i.e. there is no other world they can imagine that would be better.

He then highlights a problem. People don’t agree on what it means to be ‘best’. They have different preferences and values. This means that it is very unlikely that there is a single utopian world, i.e. a world that is judged to be best by all its members. But there is a solution to this problem. Instead of trying to create a single world that is best for all, we should create a world-building mechanism that allows people to create and join worlds that correspond to their own standards of bestness. Nozick calls this the ‘meta utopia’:

Meta-Utopia: A world building mechanism that allows people to create and join worlds that correspond to their own standards of bestness.

How could we create a meta-utopia? Well, when you think about it, doesn’t virtual reality technology seem like an obvious way to do this? If it is sufficiently immersive and widely distributed, it will allow people to create and join virtual worlds that correspond to their own standards of bestness. (To be clear: Nozick himself definitely wouldn’t agree with this, given his Experience Machine argument, which holds that we value actually doing things and being in contact with reality, not merely having the experience of doing so.)

This brings us back to the opening story — to Danny Izquierdo and the generation of young men (and women!) who are retreating from the world of work. What I am now suggesting is that they may be right. We may be forced to retreat from work by advances in technology, but this may be a good thing. Danny and others may be the first intrepid explorers of a virtual, game-playing utopia.

Now, before you think I have gone completely insane, let me say that I don’t necessarily agree with the argument I have just presented. I like the real world and I like to think that what I do makes a contribution to that world (i.e. a contribution to the Good, the True and the Beautiful). Furthermore, I do not believe that the virtual can survive without the real. It will only be possible to retreat into virtual utopias if the political, social and technical institutions in the real world are stable enough to allow this to happen. Ironically, it seems like the very technological revolutions that make virtual utopias possible are also destabilising these institutions. I think we are beginning to see this in the politics of fear and resentment that is sweeping through the world at this very moment. Much of that fear and resentment is being dredged to the surface by globalisation and the changing nature of work. So even if you think the argument I outlined in this talk is sensible, you will have to find some way to make the virtual and the real work together. You cannot do that through total disengagement.


* To be clear, what they say may be factually wrong in a number of ways too. Zimbardo's claims, in particular, seem to be deeply problematic. I don't really care whether they are right or wrong for the purposes of this talk. I use their view as a scratching post for defending an alternative point of view.
