Tuesday, October 27, 2020

85 - The Internet and the Tyranny of Perceived Opinion

Henrik Skaug Sætra

Are we losing our liberty as a result of digital technologies and algorithmic power? In particular, might algorithmically curated filter bubbles be creating a world that encourages both increased polarisation and increased conformity at the same time? In today’s podcast, I discuss these issues with Henrik Skaug Sætra. Henrik is a political scientist working in the Faculty of Business, Languages and Social Science at Østfold University College in Norway. He has a particular interest in political theory and philosophy, and has worked extensively on Thomas Hobbes and social contract theory, environmental ethics and game theory. At the moment his work focuses mainly on issues involving the dynamics between human individuals, society and technology. 

You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

 

Show Notes

Topics discussed include:
  • Selective Exposure and Confirmation Bias
  • How algorithms curate our informational ecology
  • Filter Bubbles
  • Echo Chambers
  • How the internet is creating more internally conformist but externally polarised groups
  • The nature of political freedom
  • Tocqueville and the tyranny of the majority
  • Mill and the importance of individuality
  • How algorithmic curation of speech is undermining our liberty
  • What can be done about this problem?

Relevant Links



Tuesday, October 20, 2020

84 - Social Media, COVID-19 and Value Change



Do our values change over time? What role do emotions and technology play in altering our values? In this episode I talk to Steffen Steinert (PhD) about these issues. Steffen is a postdoctoral researcher on the Value Change project at TU Delft. His research focuses on the philosophy of technology, ethics of technology, emotions, and aesthetics. He has published papers on roboethics, art and technology, and philosophy of science. In his previous research he also explored philosophical issues related to humor and amusement.

You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes

Topics discussed include:

  • What is a value?
  • Descriptive vs normative theories of value
  • Psychological theories of personal values
  • The nature of emotions
  • The connection between emotions and values
  • Emotional contagion
  • Emotional climates vs emotional atmospheres
  • The role of social media in causing emotional contagion
  • Is the coronavirus promoting a negative emotional climate?
  • Will this affect our political preferences and policies?
  • General lessons for technology and value change


Relevant Links


Saturday, October 10, 2020

83 - Privacy is Power


Are you being watched, tracked and traced every minute of the day? Probably. The digital world thrives on surveillance. What should we do about this? My guest today is Carissa Véliz. Carissa is an Associate Professor at the Faculty of Philosophy and the Institute of Ethics in AI at Oxford University. She is also a Tutorial Fellow at Hertford College Oxford. She works on privacy, technology, moral and political philosophy and public policy. She has also been a guest on this podcast on two previous occasions. Today, we’ll be talking about her recently published book Privacy is Power.

You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes

Topics discussed in this show include:

  • The most surprising examples of digital surveillance
  • The nature of privacy
  • Is privacy dead?
  • Privacy as an intrinsic and instrumental value
  • The relationship between privacy and autonomy
  • Does surveillance help with security and health?
  • The problem with mass surveillance
  • The phenomenon of toxic data
  • How surveillance undermines democracy and freedom
  • Are we willing to trade privacy for convenient services?
  • And much more

Relevant Links

Friday, October 9, 2020

Artificial Intelligence and Legal Disruption: A New Model for Analysis



Along with a team of amazing co-authors, I recently published an article examining the ways in which AI can disrupt legal norms and practices. It's a long article, with lots of detail on debates in law and technology, but I think it contains a really important and interesting model for mapping out the different forms of AI-mediated disruption of the legal system. Furthermore, since the legal system is, in effect, just a system of norms and AI is just a type of technology, the model developed also helps us to understand how technology can disrupt any normative system.

More details below.


Title: Artificial Intelligence and Legal Disruption: A New Model for Analysis

Authors: Hin-Yan Liu, Matthijs Maas, John Danaher, Luisa Scarcella, Michaela Lexer, Leonard Van Rompaey

Links: Official; Philpapers; Researchgate; Academia

Abstract: Artificial intelligence (AI) is increasingly expected to disrupt the ordinary functioning of society. From how we fight wars or govern society, to how we work and play, and from how we create to how we teach and learn, there is almost no field of human activity which is believed to be entirely immune from the impact of this emerging technology. This poses a multifaceted problem when it comes to designing and understanding regulatory responses to AI. This article aims to: (i) defend the need for a novel conceptual model for understanding the systemic legal disruption caused by new technologies such as AI; (ii) to situate this model in relation to preceding debates about the interaction of regulation with new technologies (particularly the ‘cyberlaw’ and ‘robolaw’ debates); and (iii) to set out a detailed model for understanding the legal disruption precipitated by AI, examining both pathways stemming from new affordances that can give rise to a regulatory ‘disruptive moment’, as well as the Legal Development, Displacement or Destruction that can ensue. The article proposes that this model of legal disruption can be broadly generalisable to understanding the legal effects and challenges of other emerging technologies. 


Tuesday, October 6, 2020

In Defence of the Hivemind Society (New Paper)



Steve Petersen and I have just published a new paper in Neuroethics. It is a bit out of left field. Following a conversation I had with Steve a few years ago (via email), we decided to see if we could make a positive case for wanting to merge your mind with the minds of other people. We think a positive argument can be made to this effect and offer it up for discussion, criticism (or even ridicule) among the wider philosophical community. More details below.

Title: In Defence of the Hivemind Society

Links: Official; Philpapers; Researchgate; Academia

Abstract: The idea that humans should abandon their individuality and use technology to bind themselves together into hivemind societies seems both farfetched and frightening – something that is redolent of the worst dystopias from science fiction. In this article, we argue that these common reactions to the ideal of a hivemind society are mistaken. The idea that humans could form hiveminds is sufficiently plausible for its axiological consequences to be taken seriously. Furthermore, far from being a dystopian nightmare, the hivemind society could be desirable and could enable a form of sentient flourishing. Consequently, we should not be so quick to deny it. We provide two arguments in support of this claim – the axiological openness argument and the desirability argument – and then defend it against three major objections.