
Wednesday, March 1, 2017

The Carebot Dystopia: An Analysis




The world is ageing. A demographic shift is underway. According to some figures (Suzman et al 2015), the proportion of the worldwide population aged 65 or older will exceed the proportion aged under 5 by the year 2020. And the shift is faster in some countries. Japan is a striking example. Demographers refer to it as a ‘super-ageing’ society. By 2030, they estimate that one in three Japanese people will be aged 65 or over. One in five will be over 75.

Old age, in its current form (i.e. absent any breakthroughs in anti-ageing therapies), is a state of declining health and increasing dependency. The care burden facing our ageing societies is, consequently, a matter of some concern. Some people are turning to robots for the answer, advocating that they be used as companions and carers for the elderly, both to shift the burden off the shrinking youth population and to cut costs.

Is this the right way to go? Robert and Linda Sparrow think not. In their 2006 article “In the hands of machines? The future of aged care”, they paint a dystopian picture. They describe a future in which robots dominate the care industry and the elderly are increasingly isolated from and forgotten by their younger human peers:

The Carebot Dystopia: “We imagine a future aged-care facility where robots reign supreme. In this facility people are washed by robots, fed by robots, monitored by robots, cared for and entertained by robots. Except for their family or community service workers, those within this facility never need to deal or talk with a human being who is not also a resident. It is clear that this scenario represents a dystopia…” 
(Sparrow and Sparrow 2006, 152)

But is it really so clear? I want to try to answer that question in this post. I do so with the help of Mark Coeckelbergh’s recent article “Artificial agents, good care and modernity.” Coeckelbergh’s article performs two important functions. First, it tries to develop an account of good care that can be used to evaluate the rise of the carebots. And second, it explains how the use of carebots is tied into the broader project of modernity in healthcare. He criticises this project on the grounds that it tries to mechanise not just healthcare but human beings themselves.

I’ll explain both parts of his critique in this post. As will become clear, my own personal view is less dystopian than that of Coeckelbergh (and Sparrow & Sparrow), but I think it is important to understand why he (and they) find the rise of the carebots so disturbing.


1. Ten Features of Good Care
Coeckelbergh starts his article by identifying ten features of good care. There is little to be done here apart from listing the ten features and explaining the rationale behind each. I’m going to label them F1, F2 (and so on) because I’ll be referring to them in a later argument and the abbreviations make that easier:

F1 - ‘Good care attempts to restore, maintain and improve the health of persons’. This feature speaks for itself really. It seems obvious enough that care should aim at maintaining and restoring health and well-being, but Coeckelbergh notes that this excludes certain controversial practices from the realm of care (e.g. euthanasia). That’s not because he objects to those practices, but because he wants to develop an account of care that is relatively uncontroversial.

F2 - ‘Good care operates according to bioethical principles and professional standards and codes of ethics’. Again, this feature is pretty straightforward. Ethical principles and codes of practice are widespread in medicine nowadays (with most contemporary codes being developed, initially, as a response to the abuses of power by Nazi doctors during WWII).

F3 - ‘Good care involves a significant amount of human contact’. This feature will obviously be controversial in the context of a debate about carebots since it seems to automatically exclude them. But there is clearly a powerful intuition out there — shared by many — which holds that human contact is an important part of therapy and well-being. That said, the intuition is probably best explained in terms of other, more specific properties or characteristics of human-human contact (such as those that follow).

F4 - ‘Good care is not just physical but also psychological (emotional and relational)’. This cashes out what is important about human-human contact in terms of empathy, sympathy and other emotional relations. This is a key idea in many theories of care. It requires that we take a ‘holistic’ approach to care: we don’t just fix the broken body, we also attend to the mind (of course, holding to some rigid distinction between body and mind is problematic, but we’ll ignore that for the time being).

F5 - ‘Good care is not only professional but should involve relatives, friends and loved ones’. Again, this cashes out what is important about human-human contact in terms of the specific relationships we have with people we also love and care about. It’s important that we don’t feel abandoned by them to purely professional care.

F6 - ‘Good care is not experienced solely as a burden but also (at least sometimes) as meaningful and valuable’. This one speaks for itself really. What is interesting about it is how it switches focus from the cared-for to the carer. The claim is that care is better when the carer gets something out of it too.

F7 - ‘Good care involves skilled know-how next to skilled know-that’. This might require some explanation. The idea is that good care depends not just on propositional or declarative knowledge dispensed by some professional expert (like a doctor) but also on more tacit and implicit forms of manual and psychological knowledge. The suggestion is that care is a craft and that the carer is a craftsman/woman.

F8 - ‘Good care requires an organisational context in which there are limits to the division of labour so as not to make the previous criteria impossible to meet’. This feature points to problems that arise from excessive specialisation (assembly-line style) in healthcare. If the care task is divided up into too many discrete stages and distributed among too many individuals, it will be impossible to develop the rich, empathetic, craftsman-style relationship that good care requires.

F9 - ‘Good care requires an organisational context in which financial considerations are not the only or main criterion of organisation’. This feature is related to the previous one. It suggests that a capitalistic, profit-maximising logic is antithetical to good care.

F10 - ‘Good care requires the patient to accept some degree of vulnerability and dependency’. This feature brings the focus back to the cared-for and suggests that they have to shoulder some of the burden/responsibility for ensuring that the care process goes well. They cannot resist their status as someone who is dependent on others. They need to embrace this status (to at least some extent).

There is probably much to be said about each of these ten features. Some could be disputed (as, indeed, I have already disputed F3) and others may need to be finessed in light of criticisms, but we will set these complications to the side for now and consider how these ten features of good care can be used to criticise the rise of the carebots.




2. The Case Against Carebots
There is a simple argument to be made against carebots:


  • (1) Good care requires features F1…F10.
  • (2) If we use carebots, or, rather, if their use becomes widespread, it will not be possible to satisfy all of the required features (F1…F10) of good care.
  • (3) Therefore, the rise of the carebots is contrary to good care.


This simple argument leads to some complex questions. Obviously, premise (2) is too vague in its current form. It prompts at least two further questions: (i) which features of good care, exactly, are blocked by the use of carebots? And (ii) why does the use of carebots block those features?

The first of these questions is important because some of the features clearly have nothing to do with carebots and are unlikely to be undermined by their use. For example, the attitude of the cared-for, the adherence to professional ethical codes, the organisational context, and the ability to maintain, restore and improve health would seem to be relatively unaffected by the use of carebots. There could certainly be design and functionality issues when it comes to the deployment of carebots — it could be that they are not great at maintaining health and well-being — but these are contingent and technical problems, not ‘in principle’ problems. Once the technology improves, these problems could be overcome. The deeper question is whether there are certain limitations that the technology could not (or is highly unlikely to) overcome.

That’s where features F3…F7 become important. They are the real focus when it comes to opposition to carebots. As I said previously, F3 (the need for human contact) is unhelpful in the present context because it stacks the deck against the deployment of carebots. So let’s leave that to the side. The more important features are F4…F7, which cash out in more detail why human-human contact is important. There is serious concern that carebots would block the satisfaction of those features of good care.

This brings us to the second question: why? What is it about robots that prevents those features from being satisfied? The arguments are predictable. The claim, broadly speaking, is that robots won’t be able to provide the empathy and sympathy needed, they won’t be able to develop the skills needed for care-craftsmanship, they cannot be our loved ones, and they cannot experience the care-giving task as a meaningful one. Why not? Because robots are not (and are unlikely to be) persons. They may give the illusion of being persons, but they will lack the rich, phenomenological, inner mental life of a person. They may provide a facade or pretense of personhood, nothing more.

This is my way of putting it. Coeckelbergh is more subtle. He acknowledges that carebots may actually help satisfy the conditions of good care if they are mere tools, i.e. if they merely assist human caregivers in certain tasks. The danger is that they won’t be mere tools. They will be artificial agents that take over certain tasks. Of course, it is not clear what it means to say that an artificial agent will ‘take over’ a task — the caregiving task is multifaceted and consists of many sub-tasks. But here Coeckelbergh focuses on the perception and experience of the humans involved in care. He is a proponent of an experiential approach to robot ethics — one that prioritises the felt experiences of humans over any supposed objective reality.

So he argues that carebots will undermine good care “if and insofar as [they] appear as agents that take over care tasks” (2015, 273). And these appearances matter, in turn, because robots that appear as agents will be unable to satisfy the features of good care:

“insofar as the machine is perceived as ‘taking over’ the task of care and as taking on the role of the human care agent, then, if the ideal of care articulated above is assumed, it seems that something would be expected from the machine that the machine cannot give: the machine cannot take up this role and responsibility, cannot care in the ways defined above. It may appear to have emotions, but be unable to fulfil what is needed for care as articulated above.”
(Coeckelbergh 2015, 273)


Is it a good argument? I’ve voiced my opposition to this kind of thing before. I have three major objections. The first is that robots could be persons and have a rich inner mental life. To my mind, there is no good ‘in principle’ objection to this. That said, this is just a conceptual possibility, not an immediate practical reality. The second objection is that I am a performativist/behaviourist when it comes to the ethics of our interactions with others (including robots and human beings). I think we never have access to another person’s inner mental life. We may infer it from their outer behaviour, but this outer behaviour is ultimately all we have to go on. If robots are performatively equivalent to humans in this respect, they will be able to fulfil the same caregiving roles as human agents. Indeed, I’m surprised Coeckelbergh, with his preference for the experiential approach, doesn’t endorse something similar. In this respect I find the ‘experiential’ framing of his objection to carebots a little odd. His preoccupation with appearances does not run very deep: his objection is ultimately metaphysical in nature. The appearances only matter if the robots do not, in fact, have the requisite capacities for empathy, sympathy and so on. That said, I accept that carebots are unlikely to be performatively equivalent to human beings in the near future. So I fall back on my third objection, which is that in many instances carebots will be able to complement, rather than undermine, human-to-human relationships.

This final objection, however, is challenged by Coeckelbergh’s other argument about modernisation in healthcare. Let’s look at that now.


3. Healthcare, Modernity and the Machine
The argument in the previous section was about carebots blocking the route to good care because of what they are and how they interact with humans. As such, it was focused on the robots themselves. This next argument shifts focus from the robots to the general socio-economic context in which they are deployed. The idea underlying it is that robots are a specific manifestation of a much more general problem.

That problem is one of modernisation in healthcare. It is a problem that goes to the heart of the capitalistic and ‘neoliberal’ model of organisation. The idea is that capitalistic modes of production and service provision are mechanistic at an organisational level. Think of Henry Ford’s assembly-line. The goal of that model of production was to break the task of building a car up into many discrete, specialised tasks, so as to maximise the productivity of labour power. The production process was thus treated as a machine. The machine was built out of capital and labour power.

This has bad consequences for the humans that are part of that production machine. The individual workers in the assembly-line are dehumanised and automatised. They are reduced to mere cogs in the capitalistic machine. They are alienated from their labour and the products of their labour.

Coeckelbergh uses this Marxist line of thought to build his critique of carebots. His claim is that modern healthcare has been subjected to the same mechanical organisational forces. I’ll let him describe the problem:

…All kinds of practices become shaped by this kind of thinking and this way of organizing work, even if they do not literally resemble industrial production processes or assembly lines. For health care work, it means that under modern conditions, care work has become ‘labour’, which (1) is wage labour (care is something one does for money) and (2) involves modern employment relations (with professionalization, disciplining, formalization of the work, management, etc.) and (3) involves relations between care giver and care receiver in which the receiver is in danger of appearing to the care giver as an object…in which the care is made into a commodity, a product or a service. 
(Coeckelbergh 2015, 275)

The problem with carebots is that they simply reinforce and accelerate this process of mechanisation. They contribute to the project of modernity and that project is (or should be) disturbing:

Here the worry is that the machine is used to automate health care as part of its further modernization, and that this has the alienation effects mentioned. This is not about the machine ‘taking over’; it is about humans becoming machines. 
(Coeckelbergh 2015, 275)

To set all this out a little more formally:


  • (4) The mechanisation of service provision is bad (because of its alienating and dehumanising effects) and so anything that contributes to the process of mechanisation is bad/not to be welcomed.
  • (5) The use of carebots contributes to the mechanisation of service provision in health care.
  • (6) Therefore, the use of carebots is bad.


This is an interesting argument. It involves a much more totalising critique: a critique of modern society and how it treats its citizens and subjects. Robots are challenged because they are a particular manifestation of this more general social trend.

Is the argument any good? I have some concerns. Because it is part of this more totalising critique, its persuasiveness is largely contingent on how much you buy into that larger critique. If you are not a strong Marxist, if you don’t accept the Marxist concept of alienation, or if you embrace an essentially mechanical and materialist view of humanity, then the criticisms have much less bite.

Furthermore, even if you do buy into those larger critiques, there is at least some reason to doubt that the use of carebots is all that bad. There are good reasons to object to the mechanisation of service provision because of how it treats human service providers: they are treated as cogs in a machine, not as fully autonomous beings in themselves. Replacing them with machine labour might be thought to free them from this dehumanising process. Thus automation might be deemed a net benefit because of its potential to liberate humans from certain capitalistic forces. This is an argument that I have made on other occasions, and it is embraced by some on the academic left. That said, this argument only focuses on the workers and service providers, not on the people to whom the service is provided. There may be a dehumanising effect on them. But that’s really what the first of Coeckelbergh’s arguments was about.

Anyway, that’s it for now. To briefly recap, Coeckelbergh has provided two arguments against carebots. The first focuses on the conditions of good care and suggests that robots are unable to satisfy those conditions. The second focuses on the project of modernity and its mechanising effects. It worries about carebots to the extent that they contribute to that project. Both arguments have their merits, but it’s unclear whether they truly support the ‘dystopian’ concerns outlined at the start of this post.
