|Thomas Sinclair (left), Ben Kenward (right)|
Lots of people are worried about the ethics of AI. One particular area of concern is whether we should program machines to follow existing normative/moral principles when making decisions. But social moral values change over time. Shouldn't machines be designed to allow for such changes? If machines are programmed to follow our current values, will they impede moral progress? In this episode, I talk to Ben Kenward and Thomas Sinclair about this issue. Ben is a Senior Lecturer in Psychology at Oxford Brookes University in the UK. His research focuses on ecological psychology, mainly examining environmental activism such as the Extinction Rebellion movement, of which he is a part. Thomas is a Fellow and Tutor in Philosophy at Wadham College, Oxford, and an Associate Professor of Philosophy at Oxford's Faculty of Philosophy. His research and teaching focus on questions in moral and political philosophy.
- What is a moral value?
- What is a moral machine?
- What is moral progress?
- Has society progressed, morally speaking, in the past?
- How can we design moral machines?
- What's the problem with getting machines to follow our current moral consensus?
- Will people over-defer to machines? Will they outsource their moral reasoning to machines?
- Why is a lack of moral progress such a problem right now?