There is a famous story about an encounter between Henry Ford II (CEO of the Ford Motor Company) and Walter Reuther (head of the United Automobile Workers union). Ford was showing Reuther around his factory, proudly displaying all the new automating technologies he had introduced to replace human workers. Ford gloated, asking Reuther, ‘How are you going to get those robots to pay union dues?’ Reuther responded with equal glee: ‘Henry, how are you going to get them to buy your cars?’
The story is probably apocryphal, but it’s too good a tale to let truth get in the way. The story reveals a common fear about technology and the impact it will have on human society. The fear is something I call the ‘unsustainability problem’. The idea is that if certain trends in automation continue, and humans are pushed off more and more productive/decision-making loops, the original rationale for those ‘loops’ will disappear and the whole system will start to unravel. Is this a plausible fear? Is it something we should take seriously?
I want to investigate those questions over the remainder of this post. I do so by first identifying the structure of the problem and outlining three examples. I then set out the argument from unsustainability that seems to follow from those examples. I close by considering potential objections and replies to that argument. My goal is not to defend any particular point of view. Instead — and as part of my ongoing work — I want to identify and catalogue a popular objection/concern to the development of technology and highlight its similarities to other popular objections.
[Note: This is very much an idea or notion that I thought might be interesting. After writing it up, I'm not sure that it is. In particular, I'm not sure that the examples used are sufficiently similar to be analysed in the same terms. But maybe they are. Feedback is welcome]
1. Pushing Humans off the Loop
Let’s start with some abstraction. Many human social systems are characterised by reciprocal relationships between groups of agents occupying different roles. Take the relationship between producers (or suppliers) and consumers. This is the relationship at the heart of the dispute between Ford and Reuther. Producers make or supply goods and services to consumers; consumers purchase and make use of the goods and services provided by the producers. The one cannot exist without the other. The whole rationale behind the production and supply of goods and services is that there is a ready and willing cadre of consumers who want those goods and services. That’s the only way that the producers will make money. But it’s not just that the producers need the consumers to survive: the consumers also need the producers. Or, rather, they need themselves to be involved in production, even if only indirectly, in order to earn an income that enables them to be consumers. I have tried to illustrate this in the diagram below.
The problem alluded to in the story about Ford and Reuther is that this loop is not sustainable if there is too much automation. If the entire productive half of the loop is taken over by robots, then where will the consumers get the income they need to keep the system going? (Hold off on any answers you might have for now — I’ll get to some possibilities later)
When most people think about the unsustainability problem, the production-consumption relationship is the one they usually have in mind. And when they think about that relationship, they usually only focus on the automation of the productive half of the relationship. But this is to ignore another interesting trend in automation: the trend towards automating the entire loop, i.e. production and consumption. How is this happening? The answer lies in the growth of the internet of things and the rise of ‘ambient payments’. Smart devices are capable of communicating and transacting with one another. The refrigerator in your home could make a purchase from the robot personal shopper in your local store. You might be the ultimate beneficiary of the transaction but you have been pushed off the primary economic loop: you are neither the direct producer nor the direct consumer.
It’s my contention that it is this trend towards total automation that is the really interesting phenomenon. And it’s not just happening in the production-consumption loop either. It is happening in other loops as well. Let me give just two examples: the automation of language production and interpretation in the speaker-listener loop, and the automation of governance in the governor-governed loop.
The production and interpretation of language takes place in a loop. The ‘speaker’ produces language with which he or she wishes to cause some effect in the mind of the ‘listener’ — without the presumption of a listener there is very little point to the act. Likewise, the ‘listener’ interprets the language based on the presumption that there is a speaker who wishes to be understood, and based on what they have learned about the meaning of language from living in a community of other speakers and listeners. Language lives and breathes in a vibrant and interconnected community of speakers and listeners, with individuals often flitting back and forth between the roles. So there is, once again, a symbiotic relationship between the two sides of the loop.
Could the production and interpretation of language be automated? It is already happening in the digital advertising economy. This is a thesis that Pip Thornton (the research assistant on the Algocracy and Transhumanism Project that I am running) has developed in her work. It is well known that Google makes its money from advertising. What is perhaps less well-known is that Google does this by commodifying language. Google auctions keywords to advertisers. Different words are assigned different values based on how likely people are to search for them in a given advertising area (space and time). The more popular the word in the search engine, the higher the auction value. Advertisers pay Google for the right to use the popular words in their adverts and have them displayed alongside user searches for those terms.
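The auction mechanism described above can be sketched in code. The sketch below is a simple second-price auction over a keyword; the advertiser names, bids, and keyword are invented for illustration, and Google's real ad auction is considerably more complex (it factors in ad quality scores, among other things).

```python
# Toy sketch of a keyword auction. All data here is hypothetical;
# real search-advertising auctions also weigh ad quality and relevance.

def second_price_auction(bids):
    """Highest bidder wins the keyword but pays the runner-up's bid:
    a simple second-price rule."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price_paid = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price_paid

# Hypothetical advertisers bidding (in cents) for the keyword "insurance".
bids = {"AdCo": 250, "BrandX": 310, "CheapAds": 120}
winner, price = second_price_auction(bids)
print(winner, price)  # BrandX wins the word, but pays AdCo's bid of 250
```

The second-price rule is worth noting because it is what makes a word's auction value track its popularity: each advertiser's best strategy is simply to bid what the word is worth to them.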
This might sound relatively innocuous and uninteresting at first glance. Language has always been commodified and advertisers have always, to some extent, paid for ‘good copy’. The only difference in this instance is that it is Google’s PageRank algorithm that determines what counts as ‘good copy’.
Where the phenomenon gets interesting is when you start to realise that this has resulted in an entire linguistic economy where both the production and interpretation of language is slowly being taken over by algorithms. The PageRank algorithm functions as the ultimate interpreter. Humans adjust their use of language to match the incentives set by that algorithm. But humans don’t do this quickly enough. An array of bots are currently at work stuffing webpages with algorithmically produced language and clicking on links in the hope that it will trick the ranking system. In very many instances neither the producers nor interpreters of advertising copy are humans. The internet is filled with oddly produced, barely comprehensible webpages whose linguistic content has been tailored to the preferences of machines. Human web-surfers often find themselves in the role of archaeologists stumbling upon these odd linguistic tombs.
Automation is also taking place in the governor-governed relationship. This is the relationship that interests me most and is the centrepiece of the project I’m currently running. I define a governance system as any system that tries to nudge, manipulate, push, pull, incentivise (etc.) human behaviour. This is a broad definition and could technically subsume the two relationships previously described. More narrowly, I am interested in state-run governance systems, such as systems of democratic or bureaucratic control. In these systems, one group of agents (the governors) set down rules and regulations that must be followed by the others (the governed). It’s less easy to describe this as a reciprocal relationship. In many historical cases, the governors are rigidly separated from the governed and by necessity have significant power over them. But there is still something reciprocal about it. No one — not even the most brutal dictator — can govern for long without the acquiescence of the governed. The governed must perceive the system to be legitimate in order for it to work. In modern democratic systems this is often taken to mean that they should play some role in determining the content of the rules by which they are governed.
I have talked to a lot of people about this over the years. To many, it seems like the governor-governed relationship is intrinsically humanistic in nature. It is very difficult for them to imagine a governance system in which either or both roles becomes fully automated. Surely, they say, humans will always retain some input into the rules by which they are governed? And surely humans will always be the beneficiaries of these rules?
Maybe, but even here we see the creeping rise of automation. Already, there are algorithms that collect, mine, classify and make decisions on data produced by us as subjects of governance. This leads to more and more automation on the governor-side of the loop. But the rise of smart devices and machines could also facilitate the automation of the governed side of the loop. The most interesting example of this comes in the shape of blockchain governance systems. The blockchain provides a way for people to create smart contracts. These are automated systems for encoding and enforcing promises/commitments, e.g. the selling of a derivative at some future point in time. The subjects of these smart contracts are not people — at least not directly. Smart contracts are machine-to-machine promises. A signal that is recorded and broadcast from one device is verified via a distributed network of other computing devices. This verification triggers some action via another device (e.g. the release of money or property).
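The verify-then-trigger pattern behind a smart contract can be sketched in a few lines. This is a deliberately minimal caricature, assuming a majority-vote notion of verification: real blockchain systems use consensus protocols and cryptographic signatures, and the verifier nodes and delivery signal below are invented for illustration.

```python
# Minimal sketch of a smart contract's verify-then-trigger logic.
# Illustrative only: real systems (e.g. Ethereum) rely on consensus
# protocols and cryptography, not a simple majority count.

def verified(signal, nodes):
    """A broadcast signal counts as verified if a majority of
    verifier nodes confirm it."""
    confirmations = sum(1 for node in nodes if node(signal))
    return confirmations > len(nodes) / 2

def settle(signal, nodes, release_action):
    """Once verified, the contract triggers its action automatically:
    no human interprets or enforces the promise."""
    if verified(signal, nodes):
        return release_action(signal)
    return None

# Three hypothetical verifier nodes checking a delivery signal.
nodes = [lambda s: s["delivered"], lambda s: s["delivered"], lambda s: True]
result = settle({"delivered": True, "amount": 100}, nodes,
                lambda s: f"released {s['amount']}")
print(result)  # released 100
```

The point the sketch makes concrete is the one in the text: every step from promise to enforcement is machine-to-machine, with humans at most indirect beneficiaries.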
As noted in other recent blog posts, blockchain-based smart contracts could provide the basis for systems of smart property (as more and more pieces of property in the world become ‘smart’ devices) and even systems of smart governance. The apotheosis of the blockchain governance ideal is the hypothetical distributed autonomous organisation (DAO), which is an artificial, self-governing agent, spread out across a distributed network of smart devices. The actions of the DAO may affect the lives of human beings, but the rules by which it operates could be entirely automated in terms of their production and implementation. Again, humans may be indirect beneficiaries of the system, but they are not the primary governors or governed. They are bystanders.
2. The Unsustainability Argument
Where will this process of automation bottom out? Can it continue indefinitely? Does it even make sense for it to continue indefinitely? To some, the trend toward total automation cannot be understood merely in terms of its causes and effects. To them, there is something much more fundamental and disconcerting going on. Total automation is a deeply puzzling phenomenon — something that cannot and should not continue to the point where humans are completely off the loop.
The Ford-Reuther story seems to highlight the problem in the clearest possible way. How can a capitalistic economy survive if there are no human producers and consumers? Surely this is self-defeating? The whole purpose of capitalism is to provide tools for distributing goods and services to the humans that need them. If that’s not what happens, then the capitalistic logic will have swallowed itself whole (yes, I know, this is something that Marxists have always argued).
I call this the unsustainability problem and it can be formulated as an argument:
- (1) If automation trend X continues, then humans will be pushed off the loop.
- (2) The loop is unsustainable* without human participation.
- (3) Therefore, if automation trend X continues we will end up with something that is unsustainable*.
You’ll notice that I put a little asterisk after unsustainable. That’s deliberate. Unsustainable* stands for a number of possible concerns, only one of which matches the colloquial sense of the word. The trend could be literally unsustainable, in the sense that it will eventually lead to some breaking point or crash point. This is common in certain positive feedback loops, for example the positive feedback loop that causes the hyperinflation of currencies. If the value of a currency inflates as it did in Weimar Germany or, more recently, Zimbabwe, then you eventually reach a point where the currency is worthless in economic transactions. People have to rely on another currency or have recourse to barter. Either way, the feedback loop is not sustainable in the long term. But unsustainable* could have more subtle meanings. It may be that the trend is sustainable in the long term (i.e. it could continue indefinitely), but that if it did continue it would radically alter the value or meaning attached to the activities in the loop, so much so that they would seem pointless or no longer worthwhile.
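The hyperinflation example is a positive feedback loop in the strict sense, and a toy simulation makes the runaway structure visible. This is a crude caricature, not a monetary model: the update rule and parameters below are invented purely to show the shape of the dynamic.

```python
# Toy positive feedback loop: rising prices make people spend money
# faster, and faster spending drives prices up further. The numbers
# are invented for illustration; this is not a real monetary model.

def simulate(price=1.0, velocity=1.0, steps=10):
    history = []
    for _ in range(steps):
        price *= 1 + 0.5 * velocity   # inflation scales with spending speed
        velocity *= 1.2               # money is spent faster as it loses value
        history.append(price)
    return history

prices = simulate()
# Each step's price rise is larger than the last: the loop has no
# internal brake, so it runs until the currency becomes unusable.
```

The point of the sketch is that nothing inside the loop dampens it; the only ‘equilibrium’ is the external one, where people abandon the currency altogether.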
To give some examples, the unsustainability argument applied to the producer-consumer case might involve literal unsustainability, i.e. the concern might be that it will lead to the capitalistic system breaking down; or it might be that it will radically alter the value of that system, i.e. it might force a change in the system of private property. In the case of the speaker-listener loop, the argument might be that automation misses the point of what a language is, i.e. that a language is necessarily a form of communication between two (or more) conscious, intentional agents. If there are no conscious, intentional agents involved, then you no longer have a language. You might have some form of machine-to-machine communication, but there is no reason for that to take the form of language.
3. Should the Unsustainability Problem Concern Us?
I want to close with some simple critical reflections on the unsustainability argument. I’ll keep these fairly general.
First, I want to talk a bit more about premise (1). There are various ways in which this may be false. The simple fact that there is automation of the tasks typically associated with a given activity does not mean that humans will be pushed off the loop. As I’ve highlighted on other occasions, the ‘loops’ referred to in debates about technology are complicated and break down into multiple sub-tasks and sub-loops. Take the production side of the producer-consumer relationship. Productive processes can usually be broken down into a series of stages which often have an internal loop-like structure. If I own a business that produces some widgets, I would usually start the productive process by trying to figure out what kinds of widgets are needed in the world; I would then acquire the raw materials needed to make those widgets, develop some productive process, release the widgets to the consumers, and then learn from my mistakes/successes in order to refine and improve the process in the future. When we talk about the automation of production, there is a tendency to ignore these multiple stages. It’s rare for them all to be automated; consequently, it’s likely that humans will retain some input into the loops.
Another way of putting this point is to say that technology doesn’t replace humans; it displaces them, i.e. changes the ecology in which they operate so that they need to do new things to survive. People have been making this point for some time in the debate about technology and unemployment. The introduction of machines onto the factory floors of the Ford Motor Company didn’t obviate the need for human workers; it simply changed what kinds of human workers were needed (skilled machinists etc.). But it is important that this displacement claim is not misunderstood. It doesn’t mean that there is nothing to worry about or that the displacement won’t have profound or important consequences for the sustainability of the relevant phenomenon. The human input into the newly automated productive or consumptive processes might be minimal: very few workers might be needed to maintain production within the factory and there might be limited opportunity for humans to exercise choice or autonomy when it comes to consumer-related decisions. Humans may be involved in the loops but be reduced to relatively passive roles within them. More radically, and possibly more interestingly, the automation trends may subsume humans themselves. In other words, the humans may not be displaced by technology; they may become the technology itself.
This relates to the plausibility of premise (2). This may also be false, particularly if unsustainability is understood in its literal sense. For example, I don’t see any reason to think that the automation of language production and interpretation in online advertising cannot continue. It may prove frustrating for would-be advertisers, and it may seem odd to the humans who stand on the sidelines watching the system unfold, but the desire for advertising space and the scarcity of attention suggest to me that, if anything, there will be a doubling down on this practice in the future. This will certainly alter the activity and rob it of some of its value, but there will still be the hope that you can find someone who is paying attention to the process. The same goes for the other examples. They may prove sustainable with some changed understanding of what makes them worthwhile and how they affect their ultimate beneficiaries. The basic income guarantee, for instance, is sometimes touted as a way to keep capitalism going in the face of alleged unsustainability.
Two other points before I finish up. Everything I have said so far presumes that machines themselves should not be viewed as agents or objects of moral concern — i.e. that they cannot directly benefit from the automation of production and consumption, or governance or language. If they can — and if it is right to view them as beneficiaries — then the analysis changes somewhat. Humans are still pushed off the loop, but it makes more sense for the loops to continue with automated replacements. Finally, as I have elaborated it, the unsustainability problem is very similar to other objections to technology, including ones I have covered in the recent past. It is, in many ways, akin to the outsourcing and competitive cognitive artifacts objections that I covered here and here. All of these objections worry about the dehumanising potential of technology and the future relevance of human beings in the automated world. The differences tend to come in how they frame the concern, not in its ultimate contents.