By Miles Brundage (FHI, Oxford University) and John Danaher (NUI Galway)
(Be sure to check out Miles's other work on his website and over at the Future of Humanity Institute, where he is currently a research fellow. You can also follow him on twitter @Miles_Brundage)
The rise of the robots and the end of work. The superintelligence control problem and the end of humanity. The headlines seem to write themselves. The growth of artificial intelligence undoubtedly brings with it many perils. But it also brings many promises. In this article, we focus on the promise of widely distributed assistive artificial intelligences (i.e. AI assistants). We argue that the wide availability of such AI assistants could help to address one aspect of our growing inequality problem. We make this argument by adopting Mullainathan and Shafir’s framework for thinking about the psychological effects of scarcity. We offer our argument as a counterbalance to the many claims made about the inequality-boosting powers of automation and AI.
1. The Double Effect of Income Scarcity
Achieving some degree of distributive justice is a central goal of contemporary societies. In very abstract terms, this requires a just distribution of the benefits and burdens of social life. If some people earn a lot of money, we might argue that they should be taxed at a higher rate to ensure that the less well off receive some compensating measure of well-being. Tax revenues could then be used to provide social benefits to those who lack them through no fault of their own. Admittedly, some societies pay little more than lip service to the ideals of distributive justice; but in many cases it is a genuine, if elusive, goal. When pressed, many would say that they are committed to the idea that there should be equal opportunities and a fair distribution of benefits and burdens for all. They simply differ in their understanding of equality and fairness.
Various forms of inequality impede our ability to achieve distributive justice. Income inequality is one of them, and a major concern right now. The gap between the rich and the poor seems to be growing (Atkinson 2015; Piketty 2014). And this is, in part, exacerbated by advances in automation. Whether automation is causing long-term structural unemployment is a matter of some controversy. Several authors have argued that it is, or that it soon will (Brynjolfsson and McAfee 2014; Ford 2015; Chace 2016). Others are more sceptical. But even the sceptics sometimes agree that automation is having a polarising effect on the job market and on the income associated with the jobs still available to humans. For example, David Autor argues that advances in automation are having a disproportionate impact on routine work (typically middle-income, middle-skill work): the routine nature of such work makes it amenable to computer programs (whether built with traditional 'top down' programming methods or with bottom up machine learning methods) performing the associated tasks. This forces workers into two other categories of work: non-routine abstract work and non-routine manual work. Abstract work is creative, problem-solving work; it requires high levels of education and is usually well-rewarded. Manual work is skilled, dexterous physical work; it usually does not require high levels of education and is typically poorly-paid and highly precarious (i.e. short-term, contract-based work). The problem is that there are fewer jobs available at the abstract (and high-paid) end of the jobs spectrum. The result is that workers displaced by advances in automation tend to be pushed towards the manual (and lower-paid) end.
If these polarising trends continue, more and more people will suffer from income-related scarcity. They will find it harder to get work that pays well; and the work they do get will tend to be precarious and insecure. This should be troubling to anyone who cares about distributive justice. The critical question becomes: how can we address the problems caused by income-related scarcity in such a way that there is a just distribution of the benefits and burdens of social life?
What is often neglected in debates about this question is the double effect of income-related scarcity. Research suggests that the poor don’t just suffer from all the problems we might expect to ensue from a lack of income (inability to pay bills, shortage of material resources, reduced ability to plan for the future), they also suffer a dramatic cognitive impact. The work of Sendhil Mullainathan and Eldar Shafir is clear on this point (2012; 2014a; 2014b). To put it bluntly, they argue that having an insufficient income doesn’t just make you poor, it makes you stupid, too.
That’s a loaded way of putting it, of course. Their more nuanced view is that income-scarcity puts a tax on your cognitive bandwidth. ‘Bandwidth’ is a general term they use to describe your ability to focus on tasks, solve problems, exercise control, pay attention, remember, plan and so forth. It comes in two main flavours:
Bandwidth 1 - Fluid intelligence, i.e. the ability to use working memory to engage in problem-solving behaviour. This is the kind of cognitive ability that is measured by standard psychological tests like Raven’s Progressive Matrices.
Bandwidth 2 - Executive control, i.e. the ability to pay attention, manage cognitive resources, initiate and inhibit action. This is the kind of ability that is often tested by getting people to delay gratification (e.g. the infamous Marshmallow test).
Mullainathan and Shafir’s main contention, backed up by a series of experimental and field studies, is that being poor detrimentally narrows both kinds of cognitive bandwidth. If you have less money, you tend to be acutely sensitive to stimuli relating to price. This leads to a cognitive tunnelling effect: you become very good at paying attention to anything in your environment relating to money, but correspondingly less sensitive to everything else. The result is reduced fluid intelligence and reduced executive control. The effects can be quite dramatic. In one study, performed in a busy shopping mall in New Jersey, low-income and high-income subjects were primed with a vignette that made them think about raising different sums of money ($150 and $1,500) and were then tested on fluid intelligence and executive control. While higher-income subjects performed equally well in both conditions, those with lower incomes did not: they performed significantly worse when primed to think about raising $1,500. Indeed, the impact on fluid intelligence was as high as 13-14 IQ points.
Mullainathan and Shafir have supported and expanded on these findings in a range of other studies. They argue that the tax on bandwidth doesn’t just hold for income-related scarcity. It holds for other kinds of scarcity, too. People who are hungry are more likely to pay attention to food-related stimuli, with consequent effects on their intelligence and executive control. The same goes for those who are busy and hence time-poor. There is, it seems, a general psychological impact of scarcity. The question we ask here is: Can AI help mitigate that impact?
2. Could AI address the tax on cognitive bandwidth?
To answer this we need to ask another question: What does AI do? There are competing definitions of artificial intelligence. Much of the early debate focused on whether machines could think and act like humans. Nowadays the definition seems to have shifted (at least amongst AI researchers) to whether machines can solve particular tasks or problems, e.g. facial recognition, voice recognition, language translation, pattern matching and classification, playing and winning complex games like chess or Go, planning and plotting routes for cars, driving cars and so on. Many of the tasks performed by modern AIs are cognitive in character. They involve processing and making use of information to achieve some goal state, such as a high chance of winning a game of Go or a correctly labelled set of pictures.
The cognitive character of AI throws up an interesting possibility: Could AI be used to address the tax on cognitive bandwidth that is associated with scarcity? And could this, in turn, help us to edge closer to the ideals of distributive justice?
Mullainathan and Shafir’s research suggests that the tax on bandwidth is a major hurdle to resolving problems of inequality. People who have a scarcity mindset are often lured into accepting short-term solutions to their scarcity-related problems. This is because they suffer from immediate forms of scarcity: not enough money to get through the week, not enough food to get through the day. They will often adopt the quickest and most convenient solutions to those problems. One classic example of this is the tendency for the poor to take out short-term high-interest loans: they borrow heavily from their future to pay for their present. This can create a negative feedback loop, making it even more difficult to help them out of their position.
If this is (at least partially) a function of the tax on cognitive bandwidth, then perhaps the wide distribution of assistive AI could create some cognitive slack, and perhaps this could address some of the problems of inequality. An argument along the following lines suggests itself:
- (1) Poverty (or scarcity more generally) imposes a tax on cognitive bandwidth which has deleterious consequences: the poor are less able to plan for the future effectively; they are more susceptible to short-term fixes to their problems; and their problem-solving and fluid intelligence is negatively impacted. (Mullainathan & Shafir’s thesis)
- (2) These consequences exacerbate problems with income inequality (negative feedback problem).
- (3) Personal AI assistants could address/replace/make-up for the tax on cognitive bandwidth.
- (4) Therefore, personal AI assistants could redress the deleterious consequences of cognitive scarcity.
- (5) Therefore, personal AI assistants could reduce some of the problems associated with income inequality.
This argument is not intended to be formally valid. It is informal and defeasible in nature. In schematic terms, it says that poverty (or scarcity) results in X (the tax on bandwidth), which in turn causes Y (the exacerbation of income inequality); personal AI assistants could prevent X; therefore they should in turn prevent Y. Any first-year philosophy student could tell you that this is formally invalid: just because you prevent X from happening doesn’t mean that Y won’t happen. We are aware of that. Our argument is a more modest one: if you can block off one of the causal contributors to increased income inequality, perhaps you can help to alleviate the problem. Other things could, of course, further compound the problem. But personal AI assistants could help.
How might they do this? This is the central claim of the argument (premise 3). We use the term ‘personal AI assistants’ to refer to any AI system that provides assistance to an individual in performing routine or non-routine cognitive tasks. The assistance could range from basic information search, to decision-support, to fully automated/outsourced cognitive labour. The tasks on which the AI provides assistance could vary enormously (as they already do). They could include elements of personal day-to-day finance, such as budgeting, planning expenditure, shopping lists, advice on personal finance and so forth. Decision-support AI of this sort could help the poor to avoid exacerbating their financial woes due to their reduced cognitive bandwidth. The assistive functions need not be limited to personal finance, of course; we simply use this example as it is particularly pertinent in the present discussion. Support from AI could also help in non-finance-related aspects of an individual’s life. If, as Mullainathan and Shafir argue, the scarcity-induced tax on cognitive bandwidth has negative effects on an individual’s problem-solving capabilities, then it could presumably impact negatively on their work or their ability to find work. Assistive AI could plausibly help to redress these deficits, too.
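To make the finance-related decision support a little more concrete, here is a minimal sketch in Python of the kind of rule such an assistant might apply to the short-term, high-interest loans mentioned earlier: flagging a loan whose fees, compounded by rollovers, exceed some fraction of the principal. The function names, fee structure, and threshold are all hypothetical illustrations, not a real product or API.

```python
# Hypothetical sketch of a decision-support check a budgeting assistant
# might run before a user commits to a short-term, high-interest loan.
# All names and figures are illustrative.

def total_repayment(principal, fee_per_100, rollovers=0):
    """Total repayment on a short-term loan charging a flat fee per $100
    borrowed, with the fee charged again on each rollover."""
    fee = principal * fee_per_100 / 100
    return principal + fee * (1 + rollovers)

def flag_costly_loan(principal, fee_per_100, rollovers=0, threshold=0.25):
    """Warn if total fees exceed a threshold fraction of the principal."""
    cost = total_repayment(principal, fee_per_100, rollovers) - principal
    if cost / principal > threshold:
        return (f"Warning: fees of ${cost:.0f} on a ${principal:.0f} loan "
                f"({cost / principal:.0%} of principal). Consider alternatives.")
    return "Loan cost within threshold."

# A $300 payday loan at $15 per $100, rolled over twice:
print(flag_costly_loan(300, 15, rollovers=2))
```

The point of the sketch is that the check itself is trivial; what the scarcity research suggests is that a cognitively taxed borrower is precisely the person least likely to run it unprompted, which is why having an assistant run it automatically could matter.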
There is nothing outlandish about the possibility of personal AI assistants of this sort. First-generation variants of them are widely available: Apple’s Siri, Google’s Assistant, Microsoft’s Cortana, and Amazon’s Alexa are only the most obvious and ‘personalised’ versions. Most people own devices capable of channelling these technologies into their daily lives. Admittedly, this first generation of AI has its weaknesses. But we can expect the technology to improve. And, if we take the argument being made here seriously, it might be appropriate to act now to encourage makers of this technology to invest in forms of AI that provide assistance with the cognitive functions most impacted by poverty.
A few further remarks can be made about the sorts of AI capabilities that would enable personal AI assistants to contribute more meaningfully to cognitive slack than they do currently, and about what those capabilities might look like. Understanding this, and noting research trends in the direction of such capabilities, gives us additional reason to believe that premise 3 may one day be true. One current problem with AI assistants is that they lack common-sense reasoning abilities. As a result, they may misunderstand queries, or fail to flag potential scheduling conflicts (or opportunities) that are not easily resolvable using their existing core competencies (e.g. noting that a meeting taking place in another location will require travel to that location from wherever the user is now). In addition, affective computing - which aims to develop computational systems that can sensibly respond to, and in turn shape, human emotions - is developing rapidly, but is not yet at a stage where AI assistants can reliably identify cues of emotional distress or comfort. Developing such a capacity nevertheless seems plausible eventually. Finally, natural language understanding is not yet capable of accurately summarising and prioritising emails, text messages, and so forth, but this too seems plausible eventually.
To ground these claims of plausibility, consider a simple argument: humans are, like purported future AIs, computational systems (though perhaps ones making greater and more efficient use of processing power than is available to computers today). Brains compute appropriate responses to one’s environment, and have “wetware” analogues of the sensors and effectors that AI and robotic systems currently use. Humans can be very effective personal assistants, providing cognitive slack for their employers. If scientific progress continues and we eventually have a well-developed account of the physical processes through which such human cognitive assistance occurs, we would expect to be able to replicate that functionality in machines.
This suggests a sort of lower bound on the potential for AI to alleviate the tax on cognitive bandwidth: the level of support provided by the best human personal assistants today. Currently, many rich and powerful people have access to one or more assistants (or, more generally, staff) and are able to process more information, perform more tasks, and otherwise be more effective in their lives thanks to offloading much of their sensing, cognition, and action to others. Given the functional equivalence between the human mind and an advanced AI, we could easily imagine personal assistant AIs of the future that are at least as powerful as the single best personal assistant or staff member, or a team thereof.
One point glossed over in this discussion is the matter of physicality. Human assistants often perform tasks in the real world, and a personal assistant AI on a smartphone wouldn’t necessarily be able to do all the same things, even with the same cognitive abilities. This requires revising the lower bound downward, to the level of assistance a human assistant could provide solely through digital means. But this may not be a radical revision. Indeed, one of the authors (Miles), speaking from prior experience working as someone’s assistant, notes that a lot can be done to alleviate cognitive scarcity for another person simply through reading, analysing, and writing emails; receiving and making calls; assigning and receiving tasks; updating their calendar; and so forth. Such tasks do not require physical effectors.
With always-on access to its user, a personal assistant AI could periodically chime in with important updates, ask focused questions about the relative priorities of tasks, suggest possible drafts (or revisions) of emails, flag likely-important missed messages, and so on. Today’s assistant AIs perform only a small fraction of these tasks, and often perform them poorly. Note that the lower bound of human-level performance is not perfection: an assistant (be they human or machine) cannot necessarily predict in advance every evaluation their principal would make of events, tasks, people, etc., and there are inherent trade-offs between reducing the time spent asking for the principal’s feedback (and possibly annoying them), on the one hand, and getting things done effectively behind the scenes, on the other.
Remember, this is just a rough lower bound on the slack that could be created by a personal AI assistant. Personal AI assistants could also exceed the abilities of humans in various ways. Already, Google and other search engines are vastly better and faster than humans at processing billions and trillions of pieces of information and presenting relevant links (and, often, relevant facts or direct answers to a query), and they do not need to sleep. In other areas, too, AI already vastly exceeds human capabilities. So it is easy to imagine that the scarcity-alleviating effect of AI could eventually be far greater, for every person, than that of a human assistant or team thereof.
3. Conclusion: Weaknesses of the argument?
That’s the bones of the argument we wish to make. Is it any good? There are several issues that would need to be addressed to make it fully persuasive.
For starters, we would need to confirm that AI assistants do actually alleviate the tax on bandwidth. That will require empirical analysis. It is impossible to empirically assess future technologies, but there is probably much to be learned from studying the alleviating effects of existing AI assistants and search engines, and from evaluating the impact of human assistants on cognitive bandwidth. We would also need to weigh any positive results from such studies against the putative negative results identified by other authors. Nicholas Carr, for example, has argued that the use of automating technologies leads to cognitive degeneration (i.e. a disenhancement of cognitive ability). Our argument may provide an important counterpoint to Carr’s thesis, suggesting that alleviating some cognitive burdens can mitigate other negative aspects of inequality; but perhaps there are trade-offs here in terms of cognitive degeneration that would need to be assessed.
In addition to this, there are a range of concerns people might have about our argument even if AI can be shown to provide cognitive slack. AI assistants are currently built by large-scale corporate enterprises and are intended, at least in part, to serve corporate interests. Those interests may not align with the agenda we have outlined in this post. So we need to work hard to ensure that the right kinds of assistive AI are developed and made widely available. There is a danger that if the technology is not widely distributed it will just make the (cognitively) rich even richer. One suggestion here is that we should perhaps view AI assistance as a public good, or a form of social welfare, and support its responsible development and free diffusion as such. Furthermore, there may be unintended consequences of widely available AI assistance that we don’t fully appreciate right now. An obvious one would be a ‘treadmill’ effect: if cognitive slack is created by this technology, people may simply be burdened (taxed) with other cognitive tasks that stretch them to their limits once more.
Despite these concerns, we think the argument we have sketched is provocative and may provide an interesting corrective to the current doomsaying around AI and social inequality. We welcome critical feedback on its key premises and encourage people to articulate further objections in the comments section.