What happens if an autonomous machine does something wrong? Who, if anyone, should be held responsible for the machine's actions? That's the topic I discuss in this episode with Daniel Tigard. Daniel is a Senior Research Associate at the Institute for History & Ethics of Medicine at the Technical University of Munich. His current work addresses issues of moral responsibility in emerging technology. He is the author of several papers on moral distress and responsibility in medical ethics and, more recently, papers on moral responsibility and autonomous systems.
You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).
Show Notes
- What is responsibility? Why is it so complex?
- The three faces of responsibility: attribution, accountability and answerability
- Why are people so worried about responsibility gaps for autonomous systems?
- What are some of the alleged solutions to the "gap" problem?
- Who are the techno-pessimists and who are the techno-optimists?
- Why does Daniel think that there is no techno-responsibility gap?
- Is our application of responsibility concepts to machines overly metaphorical?
Relevant Links
- "There is no Techno-Responsibility Gap" by Daniel
- "Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability" by Mark Coeckelbergh
- "Technologically Blurred Accountability?" by Köhler, Roughley and Sauer