Wednesday, April 19, 2023

107 - Will Large Language Models disrupt healthcare?



In this episode of the podcast I chat to Jess Morley. Jess is currently a DPhil candidate at the Oxford Internet Institute. Her research focuses on the use of data in healthcare, oftentimes on the impact of big data and AI, but, as she puts it herself, usually on 'less whizzy' things. Sadly, our conversation focuses on the whizzy things, in particular the recent hype about large language models and their potential to disrupt the way in which healthcare is managed and delivered. Jess is sceptical about the immediate potential for disruption but thinks it is worth exploring, carefully, the use of this technology in healthcare.

You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon, or whatever your preferred service might be.

Relevant Links

Wednesday, April 12, 2023

The GPT Podcast Sequence: What should we think about LLMs?

Few technologies have generated as much buzz (or is it hype?) in recent years as large language models. OpenAI's various iterations of GPT are currently leading the charge, though others are catching up. To help make sense of this technology, and to sort the signal from the noise, I have been doing a series of podcasts on the social, economic, ethical and philosophical implications of LLMs. Here's every episode to date. I'll keep this updated as I add more episodes.



Tuesday, April 11, 2023

106 - Why GPT and other LLMs (probably) aren't sentient

In this episode, I chat to Robert Long about AI sentience. Robert is a philosopher who works on issues related to the philosophy of mind, cognitive science and AI ethics. He is currently a philosophy fellow at the Center for AI Safety in San Francisco. He completed his PhD at New York University. We do a deep dive on the concept of sentience, why it is important, and how we can tell whether an animal or AI is sentient. We also discuss whether it is worth taking the topic of AI sentience seriously.

You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon, or whatever your preferred service might be.


Relevant Links

Sunday, April 2, 2023

105 - GPT: Higher Education's Jurassic Park Moment?


In this episode of the podcast, I talk to Thore Husfeldt about the impact of GPT on education. Thore is a Professor of Computer Science at the IT University of Copenhagen, where he specialises in pretty technical algorithm-related research. He is also affiliated with Lund University in Sweden. Beyond his technical work, Thore is interested in ideas at the intersection of computer science, philosophy and educational theory. In our conversation, Thore outlines four models of what a university education is for, and considers how GPT disrupts these models. We then talk, in particular, about the 'signalling' theory of higher education and how technologies like GPT undercut the value of certain signals, and thereby undermine some forms of assessment. Since I am an educator, I really enjoyed this conversation, but I firmly believe there is food for thought in it for everyone.

You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon, or whatever your preferred service might be.

Saturday, April 1, 2023

Moral Uncertainty and Our Relationships with Unknown Minds



I have a new paper, forthcoming later this year, in the Cambridge Quarterly of Healthcare Ethics. It's about what we ought to do (or believe) when we are unsure whether another entity has a mind. While many have looked at this topic before, I argue that a proper accounting of the false positive and false negative risks of over- and under-ascribing mindedness to other entities is needed in order to decide what to do. I look at AI as a particular case study of this, but the argument has broader significance. I have posted a preprint for the time being. The final version will be available in open access format.


Title: Moral Uncertainty and Our Relationships with Unknown Minds

Journal: Cambridge Quarterly of Healthcare Ethics

Links: Official; Philpapers; Researchgate

Abstract: We are sometimes unsure of the moral status of our relationships with other entities. Recent case studies in this uncertainty include our relationships with artificial agents (robots, assistant AI etc), animals, and patients with ‘locked in’ syndrome. Do these entities have basic moral standing? Could they count as true friends or intimate partners? What should we do when we do not know the answer to these questions? An influential line of reasoning suggests that, in such cases of moral uncertainty, we need meta-moral decision rules that allow us to either minimise the risks of moral wrongdoing or improve the choiceworthiness of our actions. One particular argument adopted in this literature is the ‘risk asymmetry argument’, which claims that the risks associated with accepting or rejecting some moral facts may be sufficiently asymmetrical as to warrant favouring a particular practical resolution of this uncertainty. Focusing on the case study of artificial beings, this paper argues that taking potential risk asymmetries seriously can help to resolve disputes about the status of human-AI relationships, at least in practical terms (philosophical debates will, no doubt, continue). However, the resolution depends on a proper, empirically grounded assessment of the risks involved. Being sceptical about basic moral status, but more open to the possibility of meaningful relationships with such entities, may be the most sensible approach to take, though this in turn creates a tension in our moral views that requires additional resolution.