It's now possible, with the right set of training data, for anyone to create a digital copy of anyone. Some people have already done this as part of research projects, and employers are proposing to do it for employees. What are the ethics of this practice? Should you ever consent to having a digital copy made? What are the benefits and harms of doing so? In a new paper with Sven Nyholm, we propose a minimally viable permissibility principle for the creation and use of digital duplicates. Overall, we think there are significant risks associated with the creation of digital duplicates and that it is hard to mitigate them appropriately. The full paper is available open access here.
Here's the abstract.
Abstract: With recent technological advances, it is possible to create personalised digital duplicates. These are partial, at least semi-autonomous, recreations of real people in digital form. Should such duplicates be created? When can they be used? This article develops a general framework for thinking about the ethics of digital duplicates. It starts by clarifying the object of inquiry (digital duplicates themselves): defining them, giving examples, and justifying the focus on them rather than on other kinds of artificial being. It then identifies a set of generic harms and benefits associated with digital duplicates and uses this as the basis for formulating a minimally viable permissibility principle (MVPP) that stipulates widely agreeable conditions that should be met in order for the creation and use of digital duplicates to be ethically permissible. It concludes by assessing whether those conditions can be met in practice, and whether the use of digital duplicates can be more or less permissible.
And here's the minimally viable permissibility principle that we propose in the text:
Minimally viable permissibility principle (MVPP) = If, in a given context, there is informed consent to the creation and ongoing use of a digital duplicate, at least some minimal positive value is realised by its creation and use, interactions between the duplicate and third parties are transparent, harms and risks are appropriately mitigated, and there is no reason to think that the context is one in which real, authentic presence is required, then the creation and use of the duplicate is permissible.
Hmmm... IS there an ethic to this? If so, how so? Sorry, I do not get it. Seems like people want/think/believe they can live forever, through the wonders of science? I don't think so. As I have claimed, people and things blow up, break down, fall apart and wear out. If we invent an ethic surmounting this, it is only an invention... another example of contextual reality. Ergo, no matter how it is dressed, it is word salad: a bowl full of speculative nothingness. Coming back to what I claimed: AI stuff also blows up, breaks down, wears out and falls apart. No ethics there, either... just sayin'.
Upon removing my previous remarks, I re-read and tried to decipher the abstract for this post. I'm still recovering from personal loss. Would I want a personalized digital duplicate of her? I can't imagine why I might want that, or what meaning it might have. Life is short, compared with that of a Galapagos tortoise. Doing the best we can with what we have and know is pretty good potato salad... with a few, not many, capers.
Privacy is difficult to assess and evaluate now. When people walk outside, bellowing into their smartphones twenty-four hours a day, one must wonder what privacy even means. They are certainly oblivious to any privacy desired by those trying to sleep at three a.m., as they walk by windows, fuming, farting and venting their anger over someone who did them wrong. The camel says: hmmmmph.