Tuesday, October 30, 2018

What do I believe? A thematic summary of my academic publications




I have published quite a number of academic papers over the past 7-8 years, and it has gotten to the point where I find myself trying to make sense of them all. If you were to read them, what would you learn about me and my beliefs? Are there any coherent themes and patterns running through these papers? I think there are, and this is my attempt to hunt them out. I'm sure this will seem self-indulgent to some of you. I can only apologise. It is a deliberately self-indulgent exercise, but hopefully the thematic organisation is of interest to people other than myself, and some of the arguments may intrigue you or pique your curiosity. I'm going to keep this overview updated.

Reading note: There is some overlap in content between the sections below, since some papers belong to more than one theme. Also, clicking on the title of a paper will take you directly to an open-access version of it.


Theme 1: Human Enhancement, Agency and Meaning

What impact does human enhancement technology have on our agency and our capacity to live meaningful lives? I have written several papers that deal with this theme:


    • Argument: Far from undermining our responsibility, advances in the neuroscience of behaviour may actually increase our responsibility due to enhanced control [not sure if I agree with this anymore: I have become something of a responsibility sceptic since writing this].

    • Argument: Enhancing people's cognitive faculties could increase the democratic legitimacy of the legal system.

    • Argument: Enhancement technologies may turn us into 'hyperagents' (i.e. agents capable of minutely controlling their own beliefs, desires, attitudes and capacities), but this will not undermine the meaning of life.

    • Argument: Enhancement technologies need not undermine social solidarity and need not result in the unfair distribution of responsibility burdens.

    • Argument: Cognitive enhancement drugs may undermine educational assessment but not in the way that is typically thought, and the best way to regulate them may be through the use of commitment contracts.

    • Argument: We should prefer internal methods for enhancing moral conformity (e.g. drugs and brain implants) over external methods (e.g. nudges, AI assistance and automation).

    • Argument: There are strong conservative reasons (associated with agency and individual achievement) for favouring the use of enhancement technologies.

    • Argument: Moral enhancement technologies need not undermine our freedom; furthermore, freedom of choice is not intrinsically valuable but is, rather, an axiological catalyst.


Theme 2: The Ethics and Law of Sex Tech

How does technology enable new forms of sexual intimacy and connection? What are the ethical and legal consequences of these new technologies? Answering these questions has become a major theme of my work.

    • Argument: Contrary to what many people claim, sex work may remain relatively resilient to technological displacement. This is because technological displacement will (in the absence of some radical reform of the welfare system) drive potential workers to industries in which humans have some competitive advantage over machines. Sex work may be one of those industries.

    • Argument: There may be good reasons to criminalise robotic rape and robotic child sexual abuse (or, alternatively, reasons to reject widely-accepted rationales for criminalisation).

    • Argument: Consent apps are a bad idea because they produce distorted and decontextualised signals of consent, and may exacerbate other problems associated with sexual autonomy.

    • Argument: Quantified self technologies could improve the quality of our intimate relationships, but there are some legitimate concerns about the use of these technologies (contains a systematic evaluation of seven objections to the use of these technologies).

    • Argument: Response to the critics of the previous article.


    • Argument: No single argument is defended in this paper. Instead, it presents a framework for thinking about virtual sexual assault and examines the case for criminalising it. It focuses in particular on the distinction between virtual and real-world sexual assault, responsibility for virtual acts, and the problems with consent in virtual worlds.

    • Argument: Makes the case for taking sex robots seriously from an ethical and philosophical perspective.

  • Should we Campaign Against Sex Robots? (with Brian Earp and Anders Sandberg). In Danaher, J. and McArthur, N. (eds), Robot Sex: Social and Ethical Implications. Cambridge, MA: MIT Press, 2017.
    • Argument: A systematic evaluation and critique of the idea that we should campaign against the development of sex robots. 

    • Argument: There may be symbolic harms associated with the creation of sex robots but these are contingent and reformable and subordinate to the consequential harms; the consequential harms are unproven and difficult to prove; and so the best way to approach the development of sex robots is to adopt an experimental model.

    • Argument: The best response to the creation of objectifying and misogynistic sex robots is not to ban them or criminalise them but to build 'better' ones. In this respect, those who are concerned about sex robots can learn from the history of the feminist porn wars.

    • Argument: Humans can have loving intimate relationships with robots; this need not erode or distort our understanding of intimacy.


Theme 3: The Threat of Algocracy

What are the advantages and disadvantages of algorithmic governance in politics, law and everyday life? How does algorithmic governance affect individual choice and freedom? How does it affect the legitimacy of political decision-making? This has been another major theme of my work over the past few years (with several new papers on the way in the next few months).

    • Argument: Algorithmic governance poses a significant threat to the legitimacy of public decision-making and this threat is not easily resisted or accommodated.

    • Argument: Because algorithmic decision-support tools pose a threat to political legitimacy, we should favour the use of internal methods of moral enhancement.

    • Argument: The rise of smart machines to govern and manage our lives threatens to accentuate our moral patiency over our moral agency. This could be problematic because moral agency is central to modern civilisation.

    • Argument: No specific argument. The paper uses a collective intelligence methodology to generate a research agenda for the topic of algorithmic governance. This agenda is a detailed listing of research questions and the methods by which to answer them.

    • Argument: An evaluation of some of the ways in which algorithmic governance technologies could be productively used by two or more people in intimate relationships.

    • Argument: Contrary to some of the popular criticisms, the use of AI assistants in everyday life does not lead to problematic forms of cognitive degeneration, significantly undermine individual autonomy, or erode important interpersonal virtues. Nevertheless, there are risks, and we should develop a set of ethical principles for people who make use of these systems.


Theme 4: Automation, Work and the Meaning of Life

How will the rise of automating technologies affect the future of employment? What will humans do when (or if) they are no longer needed for economic production? I have written quite a number of papers on this theme over the past five years, as well as a long series of blog posts. It is also going to be the subject of a new book that I'm publishing in 2019, provisionally titled Automation and Utopia, with Harvard University Press.

    • Argument: Sex work may remain relatively resilient to technological displacement. This is because technological displacement will (in the absence of some radical reform of the welfare system) drive potential workers to industries in which humans have some competitive advantage over machines. Sex work may be one of those industries.

    • Argument: Technological unemployment does pose a major threat to the meaning of life, but this threat can be mitigated by pursuing an 'integrative' relationship with technology.

    • Argument: Partly an extended review of David Frayne's book The Refusal of Work; partly a defence of the claim that we should be more ashamed of the work that we do.

    • Argument: People who think that there is a major economic 'longevity dividend' to be earned through the pursuit of life extension fail to appropriately consider the impact of technological unemployment. That doesn't mean that life extension is not valuable; it just means the arguments in favour of it need to focus on the possibility of a 'post-work' future.

    • Argument: Does exactly what the title suggests. Argues that paid employment is structurally bad and getting worse. Consequently we should prefer not to work for a living.


Theme 5: Brain-Based Lie Detection and Scientific Evidence

Can brain-based lie detection tests (or concealed information tests) be forensically useful? How should the legal system approach scientific evidence? This was a major theme of my early research and I still occasionally publish on the topic.

    • Argument: Why lawyers need to be better informed about the nature and limitations of scientific evidence, using brain-based lie detection evidence as an illustration.

    • Argument: The use of blinding protocols could improve the quality of scientific evidence in law and overcome the problem of bias in expert testimony.

    • Argument: (a) Reliability tests for scientific evidence need to be more sensitive to the different kinds of error rate associated with that evidence; and (b) there is potential for brain-based lie detection to be used in a legal setting, as long as we move away from classic 'control question' tests to 'concealed information' tests.

    • Argument: The P300 concealed information test could be used to address the problem of innocent people pleading guilty to offences they did not commit.



    • Argument: A defence of a 'legitimacy enhancing test' for the responsible use of brain-based lie detection tests in the law.


Theme 6: God, Morality and the Problem of Evil

The philosophy of religion has been a major focus of this blog, and I have spun this interest into a handful of academic papers too. They all deal with the relationship between God and morality, or with the problem of evil. I retain an interest in the topic and may write more such papers in the future.


    • Argument: Skeptical theism has profound and problematic epistemic consequences. Attempts to resolve or ameliorate those consequences by drawing a distinction between our knowledge of what God permits and our knowledge of the overall value of an event/state of affairs don't work.

  • Necessary Moral Truths and Theistic Metaethics. (2013) SOPHIA, DOI 10.1007/s11841-013-0390-0.
    • Argument: Some theists argue that you need God to explain/ground necessary moral truths. I argue that necessary moral truths need no deeper explanation/grounding.

    • Argument: There is no obligation to worship God; Gwiazda's defence of such an obligation, which relies on a distinction between threshold and non-threshold obligations, doesn't work in the case of God.

    • Argument: An attempt to draw an analogy between the arguments of sceptical theists and the arguments of AI doomsayers like Nick Bostrom. Not really a philosophy of religion paper; more a paper about dubious epistemic strategies in debates about hypothetical beings. 

    • Argument: In order to work, divine command theories must incorporate an epistemic condition (viz. moral obligations do not exist unless they are successfully communicated to their subjects). This is problematic because certain people lack epistemic access to the content of moral obligations. While this argument has been criticised, I argue that it is quite effective.


Theme 7: Moral Standards and Legal Interpretation

Is the interpretation of legal texts a factual/descriptive inquiry, or is it a moral/normative inquiry? I have written a couple of papers arguing that it is more the latter. Both of these papers focus on the 'originalist' theory of constitutional interpretation. 

    • Argument: If we analogise laws to speech acts, as many now do, then we must pay attention to the 'success conditions' associated with those speech acts. This means we necessarily engage in a normative/moral inquiry, not a factual one.

    • Argument: Legal utterances are always enriched by the pragmatic context in which they are uttered. Constitutional originalists try to rely on a common knowledge standard of enrichment; this standard fails, which once again opens the door to a normative/moral approach to legal interpretation.


Theme 8: Random

Papers that don't seem to fit in any particular thematic bucket.

    • Argument: A critical analysis of Matthew Kramer's defence of capital punishment. I argue that Kramer's defence fails the moral test that he himself sets for it.

    • Argument: The widespread deployment of autonomous robots will give rise to a 'retribution gap'. This gap is much harder to plug than the more widely discussed responsibility/liability gaps.

    • Argument: Using Samuel Scheffler's 'collective afterlife' thesis, I argue that we should commit to creating artificial offspring. Doing so might increase the meaning and purpose of our present lives.

    • Argument: Human identity is more of a social construction than a natural fact. This has a significant effect on the plausibility of certain techniques for 'mind-uploading'.


    • Argument: Our conscience is not a product of free will or autonomous choice. This has both analytical and normative implications for how we treat conscientious objectors.






