Lots of algorithmic tools are now used to support decision-making in the criminal justice system, and many of them are criticised for being biased. What should be done about this? In this episode, I talk to Chelsea Barabas about this very question. Chelsea is a PhD candidate at MIT, where she examines the spread of algorithmic decision-making tools in the US criminal legal system. She works with interdisciplinary researchers, government officials and community organizers to unpack and transform mainstream narratives around criminal justice reform and data-driven decision-making. She is currently a Technology Fellow at the Carr Center for Human Rights Policy at the Harvard Kennedy School of Government. Formerly, she was a research scientist for the AI Ethics and Governance Initiative at the MIT Media Lab.
You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).
Show notes
Topics covered in this show include:
- The history of algorithmic decision-making in criminal justice
- Modern AI tools in criminal justice
- The problem of biased decision-making
- Examples of bias in practice
- The FAT (Fairness, Accountability and Transparency) approach to bias
- Can we de-bias algorithms using formal, technical rules?
- Can we de-bias algorithms through proper review and oversight?
- Should we be more critical of the data used to build these systems?
- Problems with pre-trial risk assessment measures
- The abolitionist perspective on criminal justice reform
Relevant Links
- "Studying up: reorienting the study of algorithmic fairness around issues of power." by Chelsea and ors