I have a new paper, forthcoming later this year in the Cambridge Quarterly of Healthcare Ethics. It's about what we ought to do (or believe) when we are unsure whether another entity has a mind. While many have looked at this topic before, I argue that a proper accounting of the false positive and false negative risks of over- and under-ascribing mindedness to other entities is needed in order to decide what to do. I look at AI as a particular case study, but the argument has broader significance. I have posted a preprint for the time being; the final version will be available in open access format.
Title: Moral Uncertainty and Our Relationships with Unknown Minds
Journal: Cambridge Quarterly of Healthcare Ethics
Links: Official; PhilPapers; ResearchGate
Abstract: We are sometimes unsure of the moral status of our relationships with other entities. Recent case studies in this uncertainty include our relationships with artificial agents (robots, assistant AI, etc.), animals, and patients with ‘locked-in’ syndrome. Do these entities have basic moral standing? Could they count as true friends or intimate partners? What should we do when we do not know the answers to these questions? An influential line of reasoning suggests that, in such cases of moral uncertainty, we need meta-moral decision rules that allow us either to minimise the risks of moral wrongdoing or to improve the choiceworthiness of our actions. One particular argument adopted in this literature is the ‘risk asymmetry argument’, which claims that the risks associated with accepting or rejecting some moral facts may be sufficiently asymmetrical as to warrant favouring a particular practical resolution of this uncertainty. Focusing on the case study of artificial beings, this paper argues that taking potential risk asymmetries seriously can help to resolve disputes about the status of human-AI relationships, at least in practical terms (philosophical debates will, no doubt, continue). However, the resolution depends on a proper, empirically grounded assessment of the risks involved. Being sceptical about basic moral status, but more open to the possibility of meaningful relationships with such entities, may be the most sensible approach to take, though this in turn creates a tension in our moral views that requires additional resolution.