In this episode I talk to Reuben Binns. Reuben is a post-doctoral researcher at the Department of Computer Science at the University of Oxford. His research focuses on the technical, ethical and legal aspects of privacy, machine learning and algorithmic decision-making. We have a detailed and informative discussion (for me at any rate!) about algorithmic bias and discrimination, and how recent debates on those topics could be informed by the philosophy of egalitarianism.
You can download the episode here or listen below. You can also subscribe on Stitcher and iTunes (the RSS feed is here).
- 0:00 - Introduction
- 1:46 - What is algorithmic decision-making?
- 4:20 - Isn't all decision-making algorithmic?
- 6:10 - Examples of unfairness in algorithmic decision-making: The COMPAS debate
- 12:02 - Limitations of the COMPAS debate
- 15:22 - Other examples of unfairness in algorithmic decision-making
- 17:00 - What is discrimination in decision-making?
- 19:45 - The mental state theory of discrimination
- 25:20 - Statistical discrimination and the problem of generalisation
- 29:10 - Defending algorithmic decision-making from the charge of statistical discrimination
- 34:40 - Algorithmic typecasting: Could we all end up like William Shatner?
- 39:02 - Egalitarianism and algorithmic decision-making
- 43:07 - The role that luck and desert play in our understanding of fairness
- 49:38 - Deontic justice and historical discrimination in algorithmic decision-making
- 53:36 - Fair distribution vs Fair recognition
- 59:03 - Should we be enthusiastic about the fairness of future algorithmic decision-making?
- 'Fairness in Machine Learning: Lessons from Political Philosophy' by Reuben Binns
- 'Algorithmic Accountability and Public Reason' by Reuben Binns
- 'It's Reducing a Human Being to a Percentage: Perceptions of Justice in Algorithmic Decision-Making' by Binns et al.
- 'Machine Bias' - the ProPublica story on unfairness in the COMPAS recidivism algorithm
- 'Inherent Tradeoffs in the Fair Determination of Risk Scores' by Kleinberg et al -- an impossibility proof showing that a risk score cannot be well-calibrated for two populations and, at the same time, equalise false positive and false negative rates across them (except in the special cases where the base rates of the two populations are the same or prediction is perfect)
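The tension the Kleinberg et al paper proves can be seen with a toy calculation. The numbers below are hypothetical (not drawn from COMPAS): two groups are scored by a perfectly calibrated risk model, meaning a score of s really corresponds to an s probability of the predicted outcome. Because the groups have different base rates, thresholding the same calibrated scores yields different false positive rates for each group.

```python
def group_stats(buckets, threshold=0.5):
    """buckets: list of (score, n_people) pairs for one group.
    Calibration means each bucket contains score * n_people true positives.
    Returns (base_rate, false_positive_rate) when everyone at or above
    the threshold is flagged as high risk."""
    positives = sum(score * n for score, n in buckets)
    total = sum(n for _, n in buckets)
    negatives = total - positives
    # False positives: negatives sitting in flagged (high-score) buckets.
    false_positives = sum((1 - score) * n
                          for score, n in buckets if score >= threshold)
    return positives / total, false_positives / negatives

# Same two calibrated score values (0.2 and 0.8) in both groups,
# but distributed differently, so the base rates differ.
base_a, fpr_a = group_stats([(0.2, 50), (0.8, 50)])  # base rate 0.50
base_b, fpr_b = group_stats([(0.2, 80), (0.8, 20)])  # base rate 0.32

print(f"Group A: base rate {base_a:.2f}, FPR {fpr_a:.3f}")  # FPR 0.200
print(f"Group B: base rate {base_b:.2f}, FPR {fpr_b:.3f}")  # FPR 0.059
```

The score means the same thing in both groups (it is calibrated), yet the lower-base-rate group faces a much lower false positive rate, which is essentially the pattern at the heart of the COMPAS debate discussed in the episode.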