In this episode of the podcast I chat to Atoosa Kasirzadeh. Atoosa is an Assistant Professor and Chancellor's Fellow at the University of Edinburgh. She is also the Director of Research at the Centre for Technomoral Futures at Edinburgh. We chat about the alignment problem in AI development, roughly: how do we ensure that AI acts in a way that is consistent with human values? We focus, in particular, on the alignment problem for language models such as ChatGPT, Bard and Claude, and on how some old ideas from the philosophy of language could help us to address this problem.
You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.
Relevant Links
- Atoosa's paper (with Iason Gabriel) 'In Conversation with AI: Aligning Language Models with Human Values'