I recently had the privilege of being interviewed for the Austrian website Futurezone about the future of work and the meaning of life. The interview was conducted via email in English and then translated into German. The official published version is available here. I have reproduced the English email responses below. There are some discrepancies between the two versions. I don't really speak German (minimal, direction-seeking competence), so if anyone does and can produce a better re-translation, I'd be interested in it.
Here it is...
1. Do you think that we are headed towards a world of technological unemployment? If so, by when might it reach a critical point?
I'm wary of making predictions about this, particularly since similar historical concerns about technological unemployment have always proved to be unfounded. There are some reasons to think that 'this time it's different'. If we create AI with general, and not simply narrow, intelligence, then I think we are heading for a radically different world. But I'm not sure when this will happen.
I do, however, think that irrespective of achieving general AI, technology is dramatically altering the types of jobs that are done by humans (e.g. Autor's polarisation effect). This is happening already and is having a noticeable impact on job stability and income.
2. What would it mean for your argument if only part of the world would reach a point where machines take over most of the work?
My argument is about the impact on meaning and human flourishing. If only part of the world reaches a point where machines take over most work, then presumably the same issues would arise in that part of the world. The only thing that would disrupt the analysis is what is happening in the rest of the world and how they react. Will they be jealous of those who have achieved technological unemployment or thankful that they still have jobs? Will people try to migrate away from or to those regions? I can see arguments for both sides, and I try to outline them in the paper (i.e. reasons why people would want to have jobs and reasons why they would want to avoid them).
I should also add that the international picture would depend very much on how the income distribution problem is resolved.
3. What would happen to the quest for social status in a world where differences in income and class would presumably be leveled?
People will fight for status in other ways, I suspect. One thing I didn't discuss in the paper in much detail, but which I have discussed elsewhere (see here and here) is the suggestion that games and other leisure activities will become a major outlet for the unemployed. I imagine that success and proficiency in these games could be a significant source of status.
One minor point: I'm not sure that 'class' would be leveled in a world of technological unemployment with income redistribution. It depends on what you mean by 'class', but my interpretation of the term doesn't rest everything on levels of income.
4. Is the concept of “meaning in life” even valid in a world where machines optimally manage societies?
As long as there are humans around to ask questions about what it is all for, then I think the concept has validity. Unless you mean something by 'validity' that I don't quite grasp. To me, it just means whether or not it is sensible to ask the question and look for an answer. I think meaning will always be important to humans because it has been since the birth of civilisation.
5. Do you think these intelligent machines might have their own agenda?
'Agenda' might be the wrong way of putting it. Machines could have goals that are antithetical or inconsistent with our own, and if they have great power and adaptable intelligence this could lead to problems. This is something that is widely discussed in the literature about AI risk and motivates the doomsaying pronouncements of people like Elon Musk and Stephen Hawking.
6. If people can no longer contribute to objectively “good” developments and their activities become restricted to the domains of arts and culture, what happens to those that don’t have talent?
I could probably write many thousands of words about this. It depends on a couple of things. Is talent innate and fixed? If not, then people could use their free time to learn and master new skills in the worlds of art and culture (or games, as I mentioned above). If it is fixed, how exactly is it fixed? Is it genetic or biological in some way? If so, then developments in human enhancement technology might enable people to overcome those limitations. But then we get into questions about whether enhanced artistic or cultural creation is as valuable as unenhanced equivalents. Maybe it is less authentic and meaningful. Contrast the achievements of an athlete who wins based on innate talent with an athlete who uses performance enhancing drugs. Do we value the former more than the latter? My sense is that we do, but I'm not sure that it is a defensible distinction.
7. In how far could eastern philosophic approaches help to find meaning without the need to actively contribute?
I think there is great wisdom in some of the Eastern approaches to enlightenment and self-transcendence, particularly in the Buddhist tradition. This could be a major source of inspiration and meaning for people. My friend and colleague James Hughes (from the Institute of Ethics and Emerging Technologies) explores the intersection between Buddhist thought and transhumanism in his work.
8. Is the ceding of control of our society to computers a bigger limitation to our freedom than compulsory work?
Potentially. I wrote another paper about this that you might be interested in. It's called 'The Threat of Algocracy' and discusses ways in which the takeover of public decision-making by machines could threaten core values in liberal democratic society, specifically values associated with the participation in and comprehension of decision-making processes that interfere with our freedom.
9. Could technological unemployment pose a threat to the future of mankind, by breaking our “spirit”?
This is an interesting topic. If we feel completely inferior to machines and are unable to find another source of meaning in life, then it is possible that we would end up in a state of listless, frustrated boredom. I think that would amount to breaking our spirit.
10. Wouldn’t VR as an escapist solution invalidate the need for technological progress to reach a point where machines do the work?
I'm not sure I understand this question. First, I'm not convinced that VR is an escapist solution. This is something I discuss briefly in the paper (the 'primacy of the real' objection) and in more detail in this podcast. In other words, I'm not convinced that the kinds of experience and meaning found in a VR world are necessarily less meaningful than those in the real world. Second, as long as there is a demand or necessity for people to work in order to secure basic needs, VR couldn't function as an escapist solution.
11. In your paper you assume that we will find a way to ensure egalitarian distribution of the gains from automation. How confident are you in that outcome, especially in light of political willingness, economic motivations, developmental differences and the elites’ willingness to give up their relative advantage?
I have some optimism: several countries in Europe are either experimenting with or seriously considering something like a basic income guarantee, and the cultural conversation about the idea is really taking off. But I doubt that we would ever achieve an egalitarian distribution (if by that you mean parity of income). I think there will always be some relative advantage. In the paper I don't assume egalitarian distributions of the gains; I only assume that people will have access to everything they need, i.e. that they live in a world of relative abundance.