HAL, from 2001: A Space Odyssey
Advances in robotics and artificial intelligence are going to play an increasingly important role in human society. Over the past two years, I've written several posts about this topic. The majority of them focus on machine ethics and the potential risks of an intelligence explosion; others look at how we might interact with and have duties toward robots.
Anyway, for your benefit (and for my own), I thought it might be worth providing links to all of these posts. I will keep this list updated as I write more.
- The Singularity: Overview and Framework: This was my first attempt to provide a general overview and framework for understanding the debate about the technological singularity. I suggested that the debate could be organised around three main theses: (i) the explosion thesis -- which claims that there will be an intelligence explosion; (ii) the unfriendliness thesis -- which claims that an advanced artificial intelligence is likely to be "unfriendly"; and (iii) the inevitability thesis -- which claims that the creation of an unfriendly AI will be difficult to avoid, if not inevitable.
- The Singularity: Overview and Framework Redux: This was my second attempt to provide a general overview and framework for understanding the debate about the technological singularity. I tried to reduce the framework down to two main theses: (i) the explosion thesis and (ii) the unfriendliness thesis.
- The Golem Genie and Unfriendly AI (Part One, Part Two): This two-parter summarises what I think is the best argument for the unfriendliness thesis. The argument was originally presented by Muehlhauser and Helm, but I try to simplify its main components.
- AIs and the Decisive Advantage Thesis: Many people claim that an advanced artificial intelligence would have decisive advantages over human intelligences. Is this right? In this post, I look at Kaj Sotala's argument to that effect.
- Is there a case for robot slaves? - If robots can be persons -- in the morally thick sense of "person" -- then surely it would be wrong to make them cater to our every whim? Or would it? Steve Petersen argues that the creation of robot slaves might be morally permissible. In this post, I look at what he has to say.
- The Ethics of Robot Sex: A reasonably self-explanatory title. This post looks at the ethical issues that might arise from the creation of sex robots.
- Will sex workers be replaced by robots? A Precis: A short summary of a longer article examining the possibility of sex workers being replaced by robots. Contrary to the work of others, I suggest that sex work might be resilient to the phenomenon of technological unemployment.
- Bostrom on Superintelligence (1) The Orthogonality Thesis: The first part in my series on Nick Bostrom's book Superintelligence. This one covers Bostrom's orthogonality thesis, according to which there is no necessary relationship between intelligence and benevolence.
- Bostrom on Superintelligence (2) The Instrumental Convergence Thesis: The second part in my series on Bostrom's book. This one examines the instrumental convergence thesis, according to which an intelligent agent, no matter what its final goals may be, is likely to converge upon certain instrumental goals that are unfriendly to human beings.
- Bostrom on Superintelligence (3) Doom and the Treacherous Turn: The third part in my series on Bostrom's book. This time I finally get to look at Bostrom's argument for the AI doomsday scenario, and for why it may be difficult to avoid.
- Bostrom on Superintelligence (4) Malignant Failure Modes: The fourth part in my series on Bostrom's book. This one explains why Bostrom thinks it would be difficult to simply program the AI with the right set of values.
- Bostrom on Superintelligence (5) Limiting an AI's Capabilities: The fifth part in my series on Bostrom's book. This one looks at the possibility of hampering or restricting an AI's capabilities, and whether that could help to avoid the doomsday scenario.
- Bostrom on Superintelligence (6) Motivation Selection Methods: The sixth (and for now final) part in my series on Bostrom's book. This one considers the advantages and disadvantages of different methods for selecting the motivations of an advanced AI.
- The Legal Challenges of Robotics (Part One and Part Two): This two-part series looks at Ryan Calo's argument for moderate robotic exceptionalism, i.e. for the view that advances in robotics may force us to make systematic changes to the current legal system.
- Are AI Doomsayers like Skeptical Theists? A Precis of the Argument: This is a summary of my longer academic paper analysing and critiquing Nick Bostrom's argument for an AI doomsday scenario.
- Is effective regulation of AI possible? Eight regulatory challenges: The first of two posts looking at Matt Scherer's article on AI regulation. This one outlines eight regulatory problems, arranged into three main categories (definitional; ex post; and ex ante).
- Is anyone competent to regulate AI? - Second post looking at Matt Scherer's work. This one looks at the three main regulatory bodies in any state (the legislature; specific regulatory agencies; and the courts) and examines their competencies. It ends with a brief evaluation of Scherer's proposed regulatory model.
- A Framework for Understanding our Ethical Relationships with Technology - An attempt to map out the various ways in which we relate to, control, and are controlled by technology. Includes some discussion of robotics.
- Polanyi's Paradox: Will Humans Maintain Any Advantage over Machines? - A critical appraisal of the arguments of the economist David Autor, who thinks that humans will maintain an advantage over machines. Although I respect his work, I argue that Autor underestimates the potential of technology.
- Interview about Superintelligence, the Orthogonality Thesis and AI Doomsday Scenarios - Video of a long-form interview I did on the arguments in Nick Bostrom's book Superintelligence.