Monday, May 17, 2021

What Matters for Moral Status: Behavioural or Cognitive Equivalence?


Here's a new paper. This one is forthcoming in the July issue of the Cambridge Quarterly of Healthcare Ethics, as part of a special issue dedicated to the topic of other minds. The paper deals with the standards for determining whether an artificial being has moral status. Contrary to Henry Shevlin, I argue that behavioural equivalence matters more than cognitive equivalence. This paper gives me the opportunity to refine some of my previously expressed thoughts on 'ethical behaviourism' and to reply to some recent criticisms of that view. You can access a preprint copy at the links below.


Title: What Matters for Moral Status: Behavioural or Cognitive Equivalence?

Links: Official (to be added); Philpapers; Researchgate; Academia

Abstract: Henry Shevlin’s paper, “How could we know when a robot was a moral patient?”, argues that we should recognize robots and artificial intelligence (AI) as psychological moral patients if they are cognitively equivalent to other beings that we already recognize as psychological moral patients (i.e., humans and at least some animals). In defending this cognitive equivalence strategy, Shevlin draws inspiration from the “behavioral equivalence” strategy that I have defended in previous work but argues that it is flawed in crucial respects. Unfortunately, and I guess this is hardly surprising, I cannot bring myself to agree that the cognitive equivalence strategy is the superior one. In this article, I try to explain why in three steps. First, I clarify the nature of the question that I take both myself and Shevlin to be answering. Second, I clear up some potential confusions about the behavioral equivalence strategy, addressing some other recent criticisms of it. Third, I explain why I still favor the behavioral equivalence strategy over the cognitive equivalence one.


