Saturday, September 5, 2015

Interview about Superintelligence, the Orthogonality Thesis and AI Doomsday Scenarios



Adam Ford interviewed me this morning about some of the issues arising from AI and existential risk. We covered the arguments from Nick Bostrom's book Superintelligence, focusing in particular on his orthogonality thesis and his argument for AI doom, as well as some of my criticisms of his argumentative framework. We also took some interesting deviations from these topics.

Viewing notes: Adam's connection was lost at around the 33 min mark, so you should skip from there to roughly the 38 min mark. Also, I am aware that I fluffed Hume's example about the destruction of the earth and the scratching of one's finger. I realised it at the time, but hopefully the basic gist of the idea got through. I also didn't quite do justice to normative theories of rationality and how they feed into criticisms of the orthogonality thesis.

If you want to read more about these topics, my conversation with Adam was based on the following blog posts and papers:


For all my other writings on intelligence explosions and related concerns, see here.
