Tuesday, October 20, 2020

84 - Social Media, COVID-19 and Value Change



Do our values change over time? What role do emotions and technology play in altering our values? In this episode I talk to Steffen Steinert (PhD) about these issues. Steffen is a postdoctoral researcher on the Value Change project at TU Delft. His research focuses on the philosophy of technology, ethics of technology, emotions, and aesthetics. He has published papers on roboethics, art and technology, and philosophy of science. In his previous research he also explored philosophical issues related to humor and amusement.

You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes

Topics discussed include:

  • What is a value?
  • Descriptive vs normative theories of value
  • Psychological theories of personal values
  • The nature of emotions
  • The connection between emotions and values
  • Emotional contagion
  • Emotional climates vs emotional atmospheres
  • The role of social media in causing emotional contagion
  • Is the coronavirus promoting a negative emotional climate?
  • Will this affect our political preferences and policies?
  • General lessons for technology and value change


Relevant Links


Saturday, October 10, 2020

83 - Privacy is Power


Are you being watched, tracked and traced every minute of the day? Probably. The digital world thrives on surveillance. What should we do about this? My guest today is Carissa Véliz. Carissa is an Associate Professor at the Faculty of Philosophy and the Institute for Ethics in AI at Oxford University. She is also a Tutorial Fellow at Hertford College, Oxford. She works on privacy, technology, moral and political philosophy and public policy. She has also been a guest on this podcast on two previous occasions. Today, we’ll be talking about her recently published book Privacy is Power.

You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes

Topics discussed in this show include:

  • The most surprising examples of digital surveillance
  • The nature of privacy
  • Is privacy dead?
  • Privacy as an intrinsic and instrumental value
  • The relationship between privacy and autonomy
  • Does surveillance help with security and health?
  • The problem with mass surveillance
  • The phenomenon of toxic data
  • How surveillance undermines democracy and freedom
  • Are we willing to trade privacy for convenient services?
  • And much more

Relevant Links

Friday, October 9, 2020

Artificial Intelligence and Legal Disruption: A New Model for Analysis



Along with a team of amazing co-authors, I recently published an article examining the ways in which AI can disrupt legal norms and practices. It's a long article, with lots of detail on debates in law and technology, but I think it contains a really important and interesting model for mapping out the different forms of AI-mediated disruption of the legal system. Furthermore, since the legal system is, in effect, just a system of norms and AI is just a type of technology, the model developed also helps us to understand how technology can disrupt any normative system.

More details below.


Title: Artificial Intelligence and Legal Disruption: A New Model for Analysis

Authors: Hin-Yan Liu, Matthijs Maas, John Danaher, Luisa Scarcella, Michaela Lexer, Leonard Van Rompaey

Links: Official; Philpapers; Researchgate; Academia

Abstract: Artificial intelligence (AI) is increasingly expected to disrupt the ordinary functioning of society. From how we fight wars or govern society, to how we work and play, and from how we create to how we teach and learn, there is almost no field of human activity which is believed to be entirely immune from the impact of this emerging technology. This poses a multifaceted problem when it comes to designing and understanding regulatory responses to AI. This article aims to: (i) defend the need for a novel conceptual model for understanding the systemic legal disruption caused by new technologies such as AI; (ii) situate this model in relation to preceding debates about the interaction of regulation with new technologies (particularly the ‘cyberlaw’ and ‘robolaw’ debates); and (iii) set out a detailed model for understanding the legal disruption precipitated by AI, examining both the pathways stemming from new affordances that can give rise to a regulatory ‘disruptive moment’ and the Legal Development, Displacement or Destruction that can ensue. The article proposes that this model of legal disruption is broadly generalisable to understanding the legal effects and challenges of other emerging technologies.


Tuesday, October 6, 2020

In Defence of the Hivemind Society (New Paper)



Steve Petersen and I have just published a new paper in Neuroethics. It is a bit out of left field. Following an email conversation I had with Steve a few years ago, we decided to see if we could make a positive case for wanting to merge your mind with the minds of other people. We think a positive argument can be made to this effect and we offer it up for discussion, criticism (or even ridicule) among the wider philosophical community. More details below.

Title: In Defence of the Hivemind Society

Links: Official; Philpapers; Researchgate; Academia

Abstract: The idea that humans should abandon their individuality and use technology to bind themselves together into hivemind societies seems both farfetched and frightening – something that is redolent of the worst dystopias from science fiction. In this article, we argue that these common reactions to the ideal of a hivemind society are mistaken. The idea that humans could form hiveminds is sufficiently plausible for its axiological consequences to be taken seriously. Furthermore, far from being a dystopian nightmare, the hivemind society could be desirable and could enable a form of sentient flourishing. Consequently, we should not be so quick to dismiss it. We provide two arguments in support of this claim – the axiological openness argument and the desirability argument – and then defend it against three major objections.



Wednesday, September 23, 2020

82 - What should we do about facial recognition technology?


Brenda Leong

Facial recognition technology has seen its fair share of both media and popular attention in the past 12 months. The coverage runs the gamut from controversial uses by governments and police forces to coordinated campaigns to ban or limit its use. What should we do about it? In this episode, I talk to Brenda Leong about this issue. Brenda is Senior Counsel and Director of Artificial Intelligence and Ethics at the Future of Privacy Forum (FPF). She manages the FPF portfolio on biometrics, particularly facial recognition. She authored the FPF Privacy Expert’s Guide to AI, and co-authored the paper “Beyond Explainability: A Practical Guide to Managing Risk in Machine Learning Models.” Prior to working at FPF, Brenda served in the U.S. Air Force.

You can listen to the episode below or download here. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).


Show notes


Topics discussed include:
  • What is facial recognition anyway?
  • Are there multiple forms that are confused and conflated?
  • What's the history of facial recognition? What has changed recently?
  • How is the technology used?
  • What are the benefits of facial recognition?
  • What's bad about it? What are the privacy and other risks?
  • Is there something unique about the face that should make us more worried about facial biometrics when compared to other forms?
  • What can we do to address the risks? Should we regulate or ban?

Relevant Links


Friday, September 18, 2020

81 - Consumer Credit, Big Tech and AI Crime


In today's episode, I talk to Nikita Aggarwal about the legal and regulatory aspects of AI and algorithmic governance. We focus, in particular, on three topics: (i) algorithmic credit scoring; (ii) the problem of 'too big to fail' tech platforms; and (iii) AI crime. Nikita is a DPhil (PhD) candidate at the Faculty of Law at Oxford, as well as a Research Associate at the Oxford Internet Institute's Digital Ethics Lab. Her research examines the legal and ethical challenges posed by emerging, data-driven technologies, with a particular focus on machine learning in consumer lending. Prior to entering academia, she was an attorney in the legal department of the International Monetary Fund, where she advised on financial sector law reform in the euro area.

You can listen to the episode below or download here. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).



Show Notes

Topics discussed include:

  • The digitisation, datafication and disintermediation of consumer credit markets
  • Algorithmic credit scoring
  • The problems of risk and bias in credit scoring
  • How law and regulation can address these problems
  • Tech platforms that are too big to fail
  • What should we do if Facebook fails?
  • The forms of AI crime
  • How to address the problem of AI crime


Relevant Links


Thursday, August 13, 2020

80 - Bias, Algorithms and Criminal Justice


Lots of algorithmic tools are now used to support decision-making in the criminal justice system. Many of them are criticised for being biased. What should be done about this? In this episode, I talk to Chelsea Barabas about this very question. Chelsea is a PhD candidate at MIT, where she examines the spread of algorithmic decision-making tools in the US criminal legal system. She works with interdisciplinary researchers, government officials and community organizers to unpack and transform mainstream narratives around criminal justice reform and data-driven decision-making. She is currently a Technology Fellow at the Carr Center for Human Rights Policy at the Harvard Kennedy School of Government. Formerly, she was a research scientist for the AI Ethics and Governance Initiative at the MIT Media Lab.

You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).



Show notes

Topics covered in this show include:

  • The history of algorithmic decision-making in criminal justice
  • Modern AI tools in criminal justice
  • The problem of biased decision-making
  • Examples of bias in practice
  • The FAT (Fairness, Accountability and Transparency) approach to bias
  • Can we de-bias algorithms using formal, technical rules?
  • Can we de-bias algorithms through proper review and oversight?
  • Should we be more critical of the data used to build these systems?
  • Problems with pre-trial risk assessment measures
  • The abolitionist perspective on criminal justice reform

Relevant Links