
Sunday, December 27, 2015

Academic Papers 2015


It's that time of year when all good bloggers indulge in some end-of-year review. Here's the first of several from me. This one lists all the academic papers I had accepted for publication in 2015. I've included abstracts and links below:


  • Why AI Doomsayers are Like Sceptical Theists and Why it Matters, Minds and Machines 25(3): 231-246 - An advanced artificial intelligence could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks. And there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate about the existence of God. And while this analogy is interesting in its own right, what is more interesting are its potential implications. It has been repeatedly argued that sceptical theism has devastating effects on our beliefs and practices. Could it be that AI-doomsaying has similar effects? I argue that it could. Specifically, and somewhat paradoxically, I argue that it could amount to either a reductio of the doomsayers’ position, or an important and additional reason to join their cause. I use this paradox to suggest that the modal standards for argument in the superintelligence debate need to be addressed.

  • Common Knowledge, Pragmatic Enrichment and Thin Originalism, Jurisprudence, forthcoming DOI: 10.1080/20403313.2015.1065644 - The meaning of an utterance is often enriched by the pragmatic context in which it is uttered. This is because in ordinary conversations we routinely and uncontroversially compress what we say, safe in the knowledge that those interpreting us will “add in” the content we intend to communicate. Does the same thing hold true in the case of legal utterances like “This constitution protects the personal rights of the citizen” or “the parliament shall have the power to lay and collect taxes”? This article addresses this question from the perspective of the constitutional originalist — the person who holds that the meaning of a constitutional text is fixed at some historical moment. In doing so, it advances four theses. First, it argues that every originalist theory is committed to some degree of pragmatic enrichment; the debate is about how much (enrichment thesis). Second, that in determining which content gets “added in”, originalists typically hold to a common knowledge standard for enrichment, protestations to the contrary notwithstanding (common knowledge thesis). Third, that the common knowledge standard for enrichment is deeply flawed (anti-CK thesis). And fourth, that all of this leads us to a thin theory of original constitutional meaning — similar to that defended by Jack Balkin and Ronald Dworkin — not for moral reasons but for strictly semantic ones (thinness thesis). Although some of the theses are extant in the literature, this article tries to defend them in a novel and perspicuous way. (Official; Philpapers; Academia.edu)

  • Human Enhancement, Social Solidarity and the Distribution of Responsibility Ethical Theory and Moral Practice, forthcoming DOI: 10.1007/s10677-015-9624-2 - This paper tries to clarify, strengthen and respond to two prominent objections to the development and use of human enhancement technologies. Both objections express concerns about the link between enhancement and the drive for hyperagency (i.e. the ability to control and manipulate all aspects of one’s agency). The first derives from the work of Sandel and Hauskeller and is concerned with the negative impact of hyperagency on social solidarity. In responding to their objection, I argue that although social solidarity is valuable, there is a danger in overestimating its value and in neglecting some obvious ways in which the enhancement project can be planned so as to avoid its degradation. The second objection, though common to several writers, has been most directly asserted by Saskia Nagel, and is concerned with the impact of hyperagency on the burden and distribution of responsibility. Though this is an intriguing objection, I argue that not enough has been done to explain why such alterations are morally problematic. I try to correct for this flaw before offering a variety of strategies for dealing with the problems raised. (Official; Philpapers; Academia.edu)

  • The Threat of Algocracy: Reality, Resistance and Accommodation, Philosophy and Technology, forthcoming - One of the most noticeable trends in recent years has been the increasing reliance of public decision-making processes (bureaucratic, legislative and legal) on algorithms, i.e. computer-programmed step-by-step instructions for taking a given set of inputs and producing an output. The question raised by this article is whether the rise of such algorithmic governance creates problems for the moral or political legitimacy of our public decision-making processes. Setting aside common concerns with data protection and privacy, it is argued that algorithm-driven decision-making does pose a significant threat to the legitimacy of such processes. Modeling my argument on Estlund’s threat of epistocracy, I call this the ‘threat of algocracy’. The article clarifies the nature of this threat, and addresses two possible solutions (named, respectively, “resistance” and “accommodation”). It is argued that neither solution is likely to be successful, at least not without risking many other things we value about social decision-making. The result is a somewhat pessimistic conclusion in which we confront the possibility that we are creating decision-making processes that constrain and limit opportunities for human participation. (Official; Philpapers; Academia.edu)

