Monday, April 25, 2022

Criticisms and Developments of Ethical Behaviourism


A few years ago, I developed a position I called 'ethical behaviourism' and applied it to debates about the moral status of artificial beings. Roughly, ethical behaviourism is a moral equivalent of the Turing test for artificial intelligence. It states that if an entity looks and acts like another entity with moral status, then you should act as if it has that status. More strongly, it states that the best evidence we have for knowing that another entity has moral status is behavioural. No other form of evidence (mechanical, ontological, historical) trumps the behavioural evidence.

My longest defence of this theory is my original article "Welcoming Robots into the Moral Community: A Defence of Ethical Behaviourism" (official; open access), but, in many ways, I prefer the subsequent defence that I wrote up for a lecture in 2019 (available here). The latter piece clarifies certain points from the original article and responds to additional objections.

I have never claimed that ethical behaviourism is particularly original or insightful. Very similar positions have been developed and defended by others in the past. Nevertheless, for whatever reason, it has piqued the curiosity of other researchers. The original paper has been cited nearly 80 times, though most of those citations are 'by the way'. More significantly, there are now several interesting and substantive critiques and developments of it in the literature. I thought it would be worthwhile to link to some of the more notable ones here. I link to open access versions wherever possible.

If you know of other substantive engagements with the theory, please let me know.


  • "The ethics of interaction with neurorobotic agents: a case study with BabyX" by Knott, Sagar and Takac - This is possibly the most interesting paper engaging with the idea of ethical behaviourism. It is a case study of an actual artificial agent/entity. Ultimately, the authors argue that my theory does not account for the experience of people interacting with this agent, and suggest that artificial agents that mimic certain biological mechanisms are more likely to warrant the ascription of moral patiency.

  • 'Is it time for rights for robots? Moral status in artificial entities' by Vincent Müller - A critique of all proponents of moral status for robots, including a somewhat ill-tempered treatment of my theory. Müller admits he is offering a 'nasty reconstruction' (something akin to a 'reductio ad absurdum') of his opponents' views. I think he misrepresents my theory on certain key points. I have corresponded with him about it, but I won't list my objections here.

  • 'Social Good Versus Robot Well-Being: On the Principle of Procreative Beneficence and Robot Gendering' by Ryan Blake Jackson and Tom Williams - One of the throwaway claims I made in my original paper on ethical behaviourism was that, if the theory is correct, robot designers may have 'procreative' duties toward robots. Specifically, they may be obliged to follow the principle of procreative beneficence (make the best robots it is possible to make). The authors of this paper take up, and ultimately dismiss, this idea. Unlike Müller's paper, this one is a good-natured critique of my views.

  • 'How Could We Know When a Robot was a Moral Patient?' by Henry Shevlin - A useful assessment of the different criteria we could use to determine the moral patiency of a robot. Broadly sympathetic to my position but suggests that it needs to be modified to include cognitive equivalency and not just behavioural equivalency.

Another honourable mention here would be my blog post on ethical behaviourism in human-robot relationships. It summarises the core theory and applies it to a novel context.

