I have a new paper coming out in Minds and Machines. It deals with the debate about AI risk, taking a particular look at the arguments presented in Nick Bostrom's recent book Superintelligence. Full details are given below. The official version won't be out for a few weeks, but you can access the preprint via the links below.
Title: Why AI Doomsayers are Like Sceptical Theists and Why it Matters
Journal: Minds and Machines
Links: (Official; Academia; Philpapers)
Abstract: An advanced artificial intelligence (a “superintelligence”) could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks, and there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate about the existence of God. And while this analogy is interesting in its own right, what is more interesting is its potential implication. It has been repeatedly argued that sceptical theism has devastating effects on our beliefs and practices. Could it be that AI-doomsaying has similar effects? I argue that it could. Specifically, and somewhat paradoxically, I argue that it could lead either to a reductio of the doomsayers’ position, or to an important additional reason to join their cause. I use this paradox to suggest that the modal standards for argument in the superintelligence debate need to be addressed.