Wednesday, August 8, 2018

Episode #43 - Elder on Friendship, Robots and Social Media


In this episode I talk to Alexis Elder. Alexis is an Assistant Professor of Philosophy at the University of Minnesota Duluth. Her research focuses on ethics, emerging technologies, social philosophy, metaphysics (especially social ontology), and philosophy of mind. She draws on ancient philosophy - primarily Chinese and Greek - in order to think about current problems. She is the author of a number of articles on the philosophy of friendship, and her book, Friendship, Robots, and Social Media: False Friends and Second Selves, came out in January 2018. We talk about all things to do with friendship, social media and social robots.

You can download the episode here or listen below. You can also subscribe on iTunes or Stitcher (the RSS feed is here).


Show Notes

  • 0:00 - Introduction
  • 1:37 - Aristotle's theory of friendship
  • 5:00 - The idea of virtue/character friendship
  • 10:14 - The enduring appeal of Aristotle's account of friendship
  • 12:30 - Does social media corrode friendship?
  • 16:35 - The Publicity Objection to online friendships
  • 20:40 - The Superficiality Objection to online friendships
  • 25:23 - The Commercialisation/Contamination Objection to online friendships
  • 30:34 - Deception in online friendships
  • 35:18 - Must we physically interact with our friends?
  • 39:25 - Social robots as friends (with a specific focus on elderly populations and those on the autism spectrum)
  • 46:50 - Can you be friends with a robot? The counterfeit currency analogy
  • 50:55 - Does the analogy hold up?
  • 56:13 - Why are robotic friends assumed to be fake?
  • 1:03:50 - Does the 'falseness' of robotic friends depend on the type of friendship we are interested in?
  • 1:06:38 - What about companion animals?
  • 1:08:35 - Where is this debate going?
 





Wednesday, July 25, 2018

Episode #42 - Earp on Psychedelics and Moral Enhancement


In this episode I talk to Brian Earp. Brian is Associate Director of the Yale-Hastings Program in Ethics and Health Policy at Yale University and The Hastings Center, and a Research Fellow in the Uehiro Centre for Practical Ethics at the University of Oxford. Brian has diverse research interests in ethics, psychology, and the philosophy of science. His research has been covered in Nature, Popular Science, The Chronicle of Higher Education, The Atlantic, New Scientist, and other major outlets. We talk about moral enhancement and the potential use of psychedelics as a form of moral enhancement.

You can download the episode here or listen below. You can also subscribe to the podcast on iTunes and Stitcher (the RSS feed is here).


Show Notes

  • 0:00 - Introduction
  • 1:53 - Why psychedelics and moral enhancement?
  • 5:07 - What is moral enhancement anyway? Why are people excited about it?
  • 7:12 - What are the methods of moral enhancement?
  • 10:18 - Why is Brian sceptical about the possibility of moral enhancement?
  • 14:16 - So is it an empty idea?
  • 17:58 - What if we adopt an 'extended' concept of enhancement, i.e. beyond the biomedical?
  • 26:12 - Can we use psychedelics to overcome the dilemma facing the proponent of moral enhancement?
  • 29:07 - What are psychedelic drugs? How do they work on the brain?
  • 34:26 - Are your experiences whilst on psychedelic drugs conditional on your cultural background?
  • 37:39 - Dissolving the ego and the feeling of oneness
  • 41:36 - Are psychedelics the new productivity hack?
  • 43:48 - How can psychedelics enhance moral behaviour?
  • 47:36 - How can a moral philosopher make sense of these effects?
  • 51:12 - The MDMA case study
  • 58:38 - How about MDMA assisted political negotiations?
  • 1:02:11 - Could we achieve the same outcomes without drugs?
  • 1:06:52 - Where should the research go from here?





Thursday, July 12, 2018

Episode #41 - Binns on Fairness in Algorithmic Decision-Making


In this episode I talk to Reuben Binns. Reuben is a post-doctoral researcher in the Department of Computer Science at the University of Oxford. His research focuses on the technical, ethical, and legal aspects of privacy, machine learning and algorithmic decision-making. We have a detailed and informative discussion (for me at any rate!) about recent debates about algorithmic bias and discrimination, and how they could be informed by the philosophy of egalitarianism.

You can download the episode here or listen below. You can also subscribe on Stitcher and iTunes (the RSS feed is here).


 

Show notes

  • 0:00 - Introduction
  • 1:46 - What is algorithmic decision-making?
  • 4:20 - Isn't all decision-making algorithmic?
  • 6:10 - Examples of unfairness in algorithmic decision-making: The COMPAS debate
  • 12:02 - Limitations of the COMPAS debate
  • 15:22 - Other examples of unfairness in algorithmic decision-making
  • 17:00 - What is discrimination in decision-making?
  • 19:45 - The mental state theory of discrimination
  • 25:20 - Statistical discrimination and the problem of generalisation
  • 29:10 - Defending algorithmic decision-making from the charge of statistical discrimination
  • 34:40 - Algorithmic typecasting: Could we all end up like William Shatner?
  • 39:02 - Egalitarianism and algorithmic decision-making
  • 43:07 - The role that luck and desert play in our understanding of fairness
  • 49:38 - Deontic justice and historical discrimination in algorithmic decision-making
  • 53:36 - Fair distribution vs Fair recognition
  • 59:03 - Should we be enthusiastic about the fairness of future algorithmic decision-making?

 

Relevant Links

  • 'Machine Bias' - the ProPublica story on unfairness in the COMPAS recidivism algorithm
  • 'Inherent Tradeoffs in the Fair Determination of Risk Scores' by Kleinberg et al -- an impossibility proof showing that a risk score cannot be well-calibrated within each group while also equalising false positive and false negative rates across two populations (except in the special cases where the base rates of the two populations are the same or prediction is perfect)
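The tradeoff identified by Kleinberg et al can be illustrated with a toy numerical example (the groups, scores and counts below are invented for illustration; this is a sketch of the phenomenon, not their proof). Both groups receive perfectly calibrated risk scores, yet because their base rates differ, thresholding the scores produces different false positive and false negative rates:

```python
from fractions import Fraction

def rates(cells):
    """cells: list of (score, n_positive, n_negative) at that score.
    Predict 'high risk' when score >= 0.5.
    Returns (base rate, false positive rate, false negative rate) as exact fractions."""
    pos = sum(p for _, p, _ in cells)
    neg = sum(n for _, _, n in cells)
    fp = sum(n for s, _, n in cells if s >= 0.5)   # negatives flagged high-risk
    fn = sum(p for s, p, _ in cells if s < 0.5)    # positives flagged low-risk
    return Fraction(pos, pos + neg), Fraction(fp, neg), Fraction(fn, pos)

# Both groups are perfectly calibrated: among people scored 0.8, exactly 80%
# are actual positives; among people scored 0.2, exactly 20% are.
group_a = [(0.8, 80, 20), (0.2, 20, 80)]    # base rate 100/200 = 0.50
group_b = [(0.8, 40, 10), (0.2, 30, 120)]   # base rate  70/200 = 0.35

for name, group in [("A", group_a), ("B", group_b)]:
    base, fpr, fnr = rates(group)
    print(f"Group {name}: base rate={float(base):.2f}  "
          f"FPR={float(fpr):.3f}  FNR={float(fnr):.3f}")
```

Running this shows equal calibration but unequal error rates across the two groups (Group A: FPR 0.200, FNR 0.200; Group B: FPR 0.077, FNR 0.429), which is exactly the kind of disparity at the centre of the COMPAS debate.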



Tuesday, July 10, 2018

Transhumanism as Utopianism: A Critical Analysis




The poem ‘The Land of Cockaygne’ is one of the original works of utopian literature. A satire, written in Ireland in the 14th century, the poem describes a corrupt community of monks living in the mythical land of Cockaigne (different spellings for different dialects). Cockaigne is a medieval idyll. It is the land of plenty, where no one goes without, where no one has to work, and where there is an abundance of food and gluttony.

But there is something quaint about this medieval idyll. As Steven Pinker points out in his book Enlightenment Now, if Cockaigne represents the ideal society, then arguably we’ve managed to create it in Western countries in the 21st century:

Hungry Europeans [once] titillated themselves with food pornography, such as tales of Cockaigne, a country where pancakes grew on trees, the streets were paved with pastry, roasted pigs wandered around with knives in their backs for easy carving, and cooked fish jumped out of the water and landed at one’s feet. Today we live in Cockaigne, and our problem is not too few calories but too many. As the comedian Chris Rock observed, “This is the first society in history where the poor people are fat.” 
(Pinker 2018, 69)
It’s an attractive idea, but I think Pinker is wrong. While we in the West (and elsewhere) may have created a world of relative food abundance, it’s clear that the mythical land of Cockaigne wasn’t just about food. It was also about sex, health, entertainment and politics. Cockaigne was a place where there was no illness or disease, an abundance of sexual pleasure and entertainment, and where the traditional power elites were taken down a peg or two. It was a ‘flat’ society, where everyone had what they wanted when they wanted it. I don’t think we have reached that quite yet.

Whether we should want to is another question. Utopias are, in principle, ideal societies. They radically improve upon our current lot. But what would actually count as a utopia is a matter of some debate. I don’t have any particular interest in whether Cockaigne represents the best that humanity can hope for — I suspect it does not — but I do have an interest in modern day utopian projects. Are they feasible? Are they desirable?

Transhumanism would seem to be the quintessential modern day utopian project. Transhumanists are trying to create a world of technological abundance and perfection. A world where we can control everything (our intelligence, our happiness, our lifespans) through technology. It’s a contemporary Cockaignian fantasy, updated for the technologically advanced age in which we now live.

But transhumanism can be criticised for these utopian leanings. Michael Hauskeller, in his article ‘Reinventing Cockaigne: Utopian themes in Transhumanist Thought’ claims that transhumanist philosophy — for all its sophisticated arguments and principles — is contaminated by its implicit and foundational utopianism. In this post, I want to try to understand Hauskeller’s argument. Is it true that Transhumanism is a utopianist philosophy? Does this really have disastrous consequences? Let’s see.


1. Is Transhumanism Utopian?
Hauskeller presents two main arguments in his article. The first is that there are direct analogues between transhumanism and classic works of utopian literature and, as a result, it is right and proper to refer to transhumanism as a form of utopianism. The second is that this utopianism has a contaminating effect on other transhumanist arguments. I’m far more interested in the second argument. Indeed, I’d be willing to simply concede the first argument just so we can get to the second. But Hauskeller spends far more time on the first argument and he does say some interesting things about the kind of utopianism you can find in the transhumanist literature. Let’s review them briefly.

First, some transhumanist works wear their utopian leanings on their sleeves. Nick Bostrom, for example, has written a ‘Letter from Utopia’, which is an imaginary letter from a resident of a future transhumanist society to those of us living in the early 21st century. In this imagined future there is an end to all suffering, ageing and disease. There is also an abundance of pleasure. Some of the things Bostrom says in this letter could be taken almost directly from the medieval myth of Cockaigne. For example:

Pleasure! A few grains of this magic ingredient are dearer than a king’s treasure, and we have plenty of it here in Utopia. It pervades everything we do and everything we experience. We sprinkle it in our tea. 
(Bostrom 2010)

David Pearce is another transhumanist who shares this Cockaignian outlook. He wants to eliminate suffering and ensure that we can experience sublime happiness all the time. He refers to this project, variously, as ‘paradise engineering’ and the ‘naturalisation of heaven’. The parallels between the work of both authors and the medieval myth of Cockaigne are, as Hauskeller points out, quite striking.

Second, Hauskeller argues that transhumanists share some of the utopian myths that you find among 16th century alchemists. Alchemy is popularly understood as the attempt to convert base metals into gold, but Hauskeller argues that this popular conception only scratches the surface of what the alchemists were trying to do. They were trying to unlock the secrets of the universe and attain a utopian existence. Breaking down the ontological barriers between different substances — base metals and gold — was the way to do this. It would allow them to exercise perfect control over the natural order and attain the Elixir of Life. To do this, alchemists searched for a magical device — the Philosopher’s Stone — that would provide the means to their utopian ends.

Hauskeller argues that there are obvious parallels with the transhumanist project. Transhumanists are also trying to exercise perfect control over nature (specifically their own bodies and brains) and to find the Elixir of Life. They see technology, particularly biotech and nanotech, as the means to do this. Thus, technology takes on a similar role to that of the Philosopher’s Stone:

Biotechnology promises to be the real Philosopher’s Stone, that elusive device that the alchemists so desperately tried to find and which would finally give them the power to reinvent the world so that it would match their desires. 
(Hauskeller 2012, 7)

Finally, Hauskeller argues that transhumanism conceives of the ideal form of existence not as a fixed endpoint but, rather, as a continual upward cycle of improvement. Through technology, we can constantly improve and enhance ourselves and our societies. This is not something that has to be brought to a halt. In other words, transhumanists echo and adopt one of the key shifts in modern day utopian thought away from ‘blueprint’ models of utopia to ‘horizonal’ models. I discussed this distinction previously, but the essence of it is nicely summed up in this quote from HG Wells (which Hauskeller uses):

The Modern Utopia must not be static but kinetic, must shape not as a permanent state but as a hopeful stage leading to a long ascent of stages. 
(Wells, A Modern Utopia)

Parallels and analogies of this sort lead Hauskeller to conclude that transhumanism is utopian to its core. It is a direct descendant of classic utopianism and it carries the torch of utopianism into the future. As I said, I’m happy to concede this point to Hauskeller. The deeper question is: does it matter? Does it undermine the transhumanist project in some way?


2. Does it matter? The Contamination Argument
There are two reasons for thinking that it doesn’t. The first is simply that utopianism is a good thing. It is a good thing that people articulate and defend possible ideal societies. We shouldn’t rest on our laurels and assume that our current way of life is the best. We should be open to the possibility of radical improvement. Transhumanism is a breath of fresh air in this regard. There are plenty of techno-pessimists and morose social critics out there. They all lament the state of humanity. Isn’t it nice to have people defend a more positive and hopeful outlook? So what if transhumanism is laced with utopian language and ideals?

The second reason is possibly more important. Even if it turns out that utopianism is not such a positive thing, there is still the fact that transhumanists have independent arguments for each of their pet projects. In other words, there are specific reasons why they think that, say, cognitive enhancement is a good thing, or that life extension is a good thing, or that happiness engineering is a good thing. Indeed, some of their arguments have become extremely elaborate over the years as they have responded to critics. Those arguments ultimately stand and fall on their own merits. Whether they are undergirded by a generally utopian outlook or leaning is, strictly speaking, irrelevant to their critical assessment. Call this the independence hypothesis:

Independence Hypothesis: The arguments for specific transhumanist projects (cognitive enhancement, genetic engineering, life extension, happiness engineering etc.) stand and fall on their own merits, i.e. they are independent of any underlying utopianism.

Hauskeller rejects both of these reasons. He thinks that utopianism is problematic and that transhumanist arguments are not independent of it. He favours what I would call a ‘contamination argument’ against transhumanism. He doesn’t set it out in formal terms, but I will make an attempt to do so here:


  • (1) If a set of arguments (A1…An) in favour of a set of conclusions (C1…Cn) is (a) motivated by an underlying theory/ideology; (b) that ideology is flawed or problematic and (c) those flaws carry over into or get reflected in the premises of the arguments, then those arguments are contaminated by that theory/ideology.

  • (2) The arguments that transhumanists offer in support of their projects are (a) motivated by an underlying theory/ideology of utopianism, (b) that ideology is flawed and problematic and (c) these flaws get reflected in the premises of the arguments.

  • (3) Therefore, transhumanism is contaminated by utopianism.


I don’t know that Hauskeller would agree with this formalisation, but I think it captures what he is trying to do. Consider the following quote from his article:

Utopian ideas and images do not merely serve as motivational aids to get people to support the radical enhancement agenda, they also affect the very arguments that are proposed in favour of human self-transformation and in particular in support of the claim that it is our moral duty to develop and use technologies that make this happen. As philosophical arguments they appear to be self-contained, but in truth utopian ideas form the fertile soil from which those arguments grow, so that without them they would wither and die. 
(Hauskeller 2012, 11)

That sounds like a contamination argument if ever I saw one. Following my formalisation, for the contamination argument to succeed, Hauskeller will need to show that transhumanist arguments are (a) motivated by utopianism, (b) that utopianism is flawed and problematic, and (c) that these flaws carry over into the premises of transhumanist arguments. He thinks he can do this. Since I have already, effectively, conceded the first of these points, that leaves us with the other two.

As best I can tell, Hauskeller offers three main arguments in favour of (b) and (c). The first is that utopian visions or ideals tend to be incompletely sketched out. So a utopianist will come along and paint a seemingly pleasant picture — pleasure being sprinkled in our tea, lives being extended indefinitely, cognition being enhanced to an extreme — and extoll all the benefits of this utopian existence, but they won’t think it all the way through. They won’t consider all the unintended side effects of realising these utopian aims. What if we spend all day drinking endless cups of pleasure-infused tea, never lifting a finger to do great things? What if in our lust for life extension we become excessively risk averse and never take the risks needed to innovate and make things even better? Thinking things through is important. Utopian projects are laced with uncertainty. We don’t know exactly how things will pan out if we pursue them, and these unintended side effects might be pretty bad (even if they have a low probability of materialising). We cannot make do with the incomplete sketches of the utopian.

The claim, then, is that this incompleteness carries over to the arguments in favour of transhumanist projects. Defenders of these arguments don’t think everything through. Is this true? Hauskeller gives some examples from the transhumanist literature and I think he makes a reasonable case. But I don’t think it is as significant as he lets on. Philosophical arguments are rarely complete in their initial presentation. It is part and parcel of the ordinary scholarly process that objections are formulated and replied to by the original defenders — thrust and parry, argument and objection, example and counterexample. Through the constant iteration of the scholarly back-and-forth the arguments can be refined and strengthened. It was probably fair to say that transhumanist arguments were once guilty of incomplete specification and naive utopianism, but I think if you follow the scholarly conversation through to the present day, you find that they are much less so. At least, that’s my sense of the current state of play. This does mean, however, that the arguments may have lost some of their utopian lustre. They may be more modest as a result of refinement. But that’s not necessarily a bad thing. I have long favoured what I would call a 'modest' form of transhumanism.

Hauskeller’s second argument is that utopianists often present their views with an air of inevitability. Social progression or human evolution is supposedly tending towards their utopian idyll. If we just let the cosmic dance play out to the end we will arrive at the utopian paradise. There is an element of this in Marxism and Hegelianism. There is also an element of it in transhumanist argumentation. Although transhumanists do offer arguments in favour of their projects, they often presuppose within the premises of those arguments the notion that the project is part of humanity’s destiny and/or that resistance is, in some sense, futile. This is problematic because it obscures the fact that we have a choice. Things are not inevitable. We must actively choose to pursue the transhumanist project, not simply sit back and enjoy the ride.

There are, of course, deep metaphysical questions at play here. Maybe there is some ultimate destiny to the universe? Maybe a particular future is inevitable? It would take too long to properly probe the metaphysical depths here. Nevertheless, I am willing to concede to Hauskeller that this tendency toward fatalism is a bit of a problem for transhumanists. It is often a way of avoiding hard argumentative work. The transhumanist will say — and I have been guilty of this myself — that ‘sure, you could object to X, but X is going to happen anyway so you may as well get used to it!’ Unless there are very good reasons for thinking that X is going to happen anyway, I think this move should be avoided. Strong independent reasons for thinking that X is desirable should be articulated.

Hauskeller’s third argument is that utopianism tends to make the better the enemy of the good. In other words, utopians are so busy imagining and planning for some wonderful future that they overlook or ignore what is good about our current form of existence. Indeed, they go further. In advocating for their utopian vision, they often denigrate or criticise what we currently have. They need to get people enthusiastic about the future and one way to do this is to breed dissatisfaction with the present.

Again, I think there is some element of truth to this. For example, I don’t think transhumanists should oversell the idea of life extension or digital ‘immortality’. Although I think it would be, on balance, a good thing if we could radically extend the human lifespan, I suspect that (a) this isn’t going to happen any time soon and (b) we are going to have to embrace death at some point. Fixating on the idea that death is a great evil that could be overcome if we only reprioritised our R&D seems naive to me. We have to live with our mortality. That said, I’m not convinced that the overselling of the future is always present in transhumanist arguments and I have, in some published work, challenged Hauskeller for assuming that transhumanist projects (specifically radical enhancement projects) necessarily entail making the (future) better the enemy of the (present) good. Indeed, I think that certain transhumanist aims are about recognising and conserving what is good about our current existence.


3. Conclusion
In sum, Hauskeller criticises transhumanism for its latent or implicit utopianism. In this post I have conceded that transhumanism may be utopianist in its leanings, but pushed back against the notion that this is a major problem. Although Hauskeller makes some reasonable critiques of transhumanist rhetoric, the more careful, extensively developed, philosophical arguments for transhumanist projects can, I think, escape any charge of contamination.

I want to close with one final point. Even though I conceded the utopian leanings of transhumanism to Hauskeller, there are some well-known transhumanists who resist this idea. Stefan Lorenz Sorgner, for example, has recently written a defence of an anti-utopian transhumanism. I recommend checking it out if you are interested in this debate.




Sunday, July 8, 2018

Building Better Sex Robots: Lessons from Feminist Pornography







Here's another new paper. This one looks at the ever-popular topic of sex robots through the lens of feminist pornography. It is a draft of a book chapter that is set to appear in an edited collection entitled AI Love You: Developments on Human-Robot Intimate Relations, which is edited by Youfang Zhou and Martin Fischer and will be published by Springer in due course. I provide a link to an OA version of the draft below.

Title: Building better sex robots: Lessons from Feminist Pornography
Book: AI Love You: Developments on Human-Robot Intimate Relations
Links: Philpapers
Abstract: How should we react to the development of sexbot technology? Taking their cue from anti-porn feminism, several academic critics lament the development of sexbot technology, arguing that it objectifies and subordinates women, is likely to promote misogynistic attitudes toward sex, and may need to be banned or restricted. In this chapter I argue for an alternative response. Taking my cue from the sex positive ‘feminist porn’ movement, I argue that the best response to the development of ‘bad’ sexbots is to make better ones. This will require changes to the content, process and context of sexbot development. Doing so will acknowledge the valuable role that technology can play in human sexuality, and allow us to challenge gendered norms and assumptions about male and female sexual desire. This will not be a panacea to the social problems that could arise from sexbot development, but it offers a more realistic and hopeful vision for the future of this technology in a pluralistic and progressive society.   




Friday, July 6, 2018

Towards an Ethics of AI Assistants: An Initial Framework




I have a new paper in the journal Philosophy of Technology. It's called 'Towards an Ethics of AI Assistants'. It looks at some of the leading ethical objections to the personal use of AI assistants and tries to develop some principles that could be of use to both the users and designers of this technology. Details and links to OA versions are below.

Title: Towards an Ethics of AI Assistants: an Initial Framework
Journal: Philosophy of Technology
Links: Official; Philpapers; Academia; Researchgate 
Abstract: Personal AI assistants are now nearly ubiquitous. Every leading smartphone operating system comes with a personal AI assistant that promises to help you with basic cognitive tasks: searching, planning, messaging, scheduling, and so on. Usage of such devices is effectively a form of algorithmic outsourcing: getting a ‘smart’ algorithm to do something on your behalf. Many have expressed concerns about this algorithmic outsourcing. They claim that it is dehumanising, leads to cognitive degeneration, and robs us of our freedom and autonomy. Some people have a more subtle view, arguing that it is problematic in those cases where its use may degrade important interpersonal virtues. In this article, I assess these objections to the use of AI assistants. I will argue that the ethics of their use is complex in the sense that there are no quick fixes or knockdown objections to the practice, but there are some legitimate concerns. By carefully analysing and evaluating some of the most typical objections that have been lodged to date, we can begin to articulate an ethics of personal AI use that navigates those concerns. In the process, we can locate some paradoxes in our thinking about outsourcing and technological dependence, and we can think more clearly about what it means to live a good life in the age of smart machines.


This paper has been picked up by a few people already, including Wessel Reijers (a philosopher based at Dublin City University), who kindly shared some positive comments about it.

   






Friday, June 29, 2018

Episode #40: Nyholm on Accident Algorithms and the Ethics of Self-Driving Cars


In this episode I talk to Sven Nyholm about self-driving cars. Sven is an Assistant Professor of Philosophy at TU Eindhoven with an interest in moral philosophy and the ethics of technology. Recently, Sven has been working on the ethics of self-driving cars, focusing in particular on the ethical rules such cars should follow and who should be held responsible for them if something goes wrong. We chat about these issues and more. You can download the podcast here or listen below. You can also subscribe on iTunes and Stitcher (the RSS feed is here).



Show Notes:

  • 0:00 - Introduction
  • 1:22 - What is a self-driving car?
  • 3:00 - Fatal crashes involving self-driving cars
  • 5:10 - Could self-driving cars ever be completely safe?
  • 8:14 - Limitations of the Trolley Problem
  • 11:22 - What kinds of accident scenarios do we need to plan for?
  • 17:18 - Who should decide which ethical rules a self-driving car follows?
  • 23:47 - Why not randomise the ethical rules?
  • 25:18 - Experimental findings on people's preferences with self-driving cars
  • 29:16 - Is this just another typical applied ethical debate?
  • 31:27 - What would a utilitarian self-driving car do?
  • 36:30 - What would a Kantian self-driving car do?
  • 39:33 - A contractualist approach to the ethics of self-driving cars
  • 43:54 - The responsibility gap problem
  • 46:12 - Scepticism of the responsibility gap: can self-driving cars be agents?
  • 53:17 - A collaborative agency approach to self-driving cars
  • 58:18 - So who should we blame if something goes wrong?
  • 1:03:40 - Is there a duty to hand over driving to machines?
  • 1:07:30 - Must self-driving cars be programmed to kill?
