
Friday, February 24, 2017

Are we ready for robot relationships? (Video Debate)



[If you like this blog, consider signing up for the newsletter...]

On the 21st of February 2017, I participated in a British Academy debate on the topic 'Are we ready for robot relationships?'. The debate took place at De Montfort University, Leicester, UK. It featured contributions from Luke Dormehl, Margaret Boden, Kathleen Richardson, Nicole Dewandre and myself. You can watch the video of the debate above.

My opening statement starts at around the 22-minute mark, and in it I make four points:


  • Robot relationships are already happening and are likely to increase in number. Consequently, there is little point debating our readiness for them: they are going to happen whether we are ready or not.

  • It is worth asking whether robot relationships are a good or bad thing, but in doing so we have to be careful. The concept of a 'relationship' is vague. There are many different types of relationship in human society and there are different ethical standards and norms that apply to each. Robots might be appropriate partners in some relationships, but not others.

  • If we limit ourselves to friendships, then we still have the problem that there are many styles of friendship. Robots may not be capable of being our Aristotelian (virtue) friends, but this doesn't really matter. They can still be our utility/pleasure friends.

  • Some people worry that robot friendships will replace or undermine human friendships, but it could also be the case that robot friendships complement and facilitate human friendships.


I was the only participant who defended a broadly positive outlook on robot relationships, but I did this largely for the purposes of balance within the debate. I share some of the concerns articulated by the others.

On the whole, I think the conversation generated by the debate was positive. I would encourage people to watch it and to see what the other participants had to say.




Monday, February 20, 2017

Episode #19 - Andrew Ferguson on Predictive Policing



[If you like this blog, consider signing up for the newsletter...]

In this episode I talk to Andrew Guthrie Ferguson about the past, present and future of predictive policing. Andrew is a Professor at the David A. Clarke School of Law at the University of the District of Columbia. He was formerly a supervising attorney at the Public Defender Service for the District of Columbia. He now teaches and writes in the areas of criminal law, criminal procedure, and evidence. We discuss the ideas and arguments from his recent paper 'Policing Predictive Policing'.

You can download the episode here or listen below. You can also subscribe on Stitcher and iTunes (via RSS).


Show Notes

  • 0:00 - Introduction
  • 2:55 - Why did Andrew start researching this topic?
  • 4:50 - What is predictive policing?
  • 6:25 - Hasn't policing always been predictive? What is the history of prediction in policing?
  • 8:50 - How does predictive policing work? (Understanding Predictive Policing 1.0)
  • 16:18 - Why the interest in this technology post-2009?
  • 18:50 - The shift from place-based to person-based prediction (Predictive Policing 2.0 and 3.0)
  • 24:35 - Are the concerns about person-based prediction overstated?
  • 28:18 - How does predictive policing differ from policies like 'broken windows' policing?
  • 31:40 - Are predictive policing systems racially biased? (Data vulnerabilities)
  • 41:44 - Do predictive policing systems actually work?
  • 52:46 - Are predictive policing systems transparent/accountable?
  • 58:26 - How do these systems change police practice?
  • 1:02:50 - Alternative visions for the use of predictive powers
  • 1:10:22 - What about data security, privacy and data protection?
  • 1:14:15 - Is the future dystopian or utopian?




Saturday, February 18, 2017

Can you be friends with a robot? Aristotelian Friendship and Robotics


Image by Dick Thomas Johnson - flickr

[If you like this blog, consider signing up for the newsletter...]


Let’s talk about Davecat.

Davecat is the pseudonym of a Michigan-based man. He is married and has one mistress. Neither of them is human. They are both dolls — RealDolls to be precise. Davecat is an iDollator; he advocates love with synthetic beings. His wife is called Sidore. They met at a goth club in the year 2000 (according to the story he tells). They later appeared together on the TLC show Guys and Dolls. That’s when Elena saw them (Elena is his mistress). She was in Russia at the time, but moved to the USA to live with Davecat and Sidore. They are happy together.

Now let’s talk about Boomer.

Boomer died on a battlefield in Iraq. He was given a military funeral, complete with a 21-gun salute. He was awarded a Purple Heart and a Bronze Star medallion. The odd thing was that Boomer wasn’t a human being. Boomer was a MARCbot — a bomb disposal robot used by the military. Boomer’s comrades felt they owed him the military send-off. He had developed a personality of his own and he had saved their lives on many occasions. It was the least they could do. Relationships between soldiers and bomb disposal robots are not uncommon. Julie Carpenter details many of them in her book Culture and Human-Robot Interaction in Militarized Spaces.

Both of these stories demonstrate something important. Humans can form powerful emotional attachments to non-living objects, particularly objects that resemble other humans (in the case of RealDolls) or living beings (in the case of the MARCbot). As we now enter the era of social robotics, we can expect the opportunities for forming such relationships to grow. In the not too distant future, we will all be having relationships with robots, whether we like it or not. The question is: what kinds of relationships can we have with them, and is this a good or a bad thing?

Some people are worried. They think human-robot relationships are emotionally shallow and that their proliferation will cut us off from emotionally richer human-human relationships. In this post I want to look at an argument such people might make against robot relationships — based on the concept of an Aristotelian friendship. I will give some critical responses to that argument. My position is that many philosophers overstate the case against robot relationships and that there is something to be said in their favour.


1. The Many Forms of Friendship
I’m going to limit my argument to the concept of friendship. There are, obviously, many kinds of relationships in human social life. Friendship is merely one among them, but it is a relationship style of considerable importance and, depending on how it is conceptualised, it can shed light on other social relationships. I’m going to conceptualise it broadly, which enables such cross-comparison.

I’m going to suggest that there are three main styles of friendship:

Utility friendships: This is a relationship between two or more individuals whose primary value lies in the instrumental gains that can be achieved through the friendship by one or more of those individuals. For instance, you might value your wealthy friends not so much for who they are but because of the gains their wealth can bring to you.

Pleasure friendship: This is a relationship between two or more individuals whose primary value lies in the pleasure that one or more of those individuals derives from their interactions. For instance, you might have a regular tennis partner and derive great pleasure from the matches you play together.

Aristotelian friendship: This is a relationship between two or more individuals whose primary value lies in the mutual sharing of values and interests, and the mutually enriching effect of the interactions they share on the virtues and dispositions of the individuals. (This is also sometimes referred to as a ‘virtue’ friendship).

Utility and pleasure friendships are characterised by self-interest. The value of the friendships lies in the benefits they bestow on the participants. They are not necessarily mutually enriching. In a utility friendship, all the instrumental gains could flow to one of the individuals. Aristotelian friendships are different. They require mutual benefit.

I refer to such relationships as ‘Aristotelian’ because they were first formally identified by Aristotle and they were the type of friendship he valued most. This is a common view. Many philosophers who write about friendship argue that, although there can be value to utility/pleasure friendships, there is something special about Aristotelian friendships. They are a great good: something to which an ideal human life should have access. It would be a shame, they say, if the only kinds of friendships one ever experienced were of the utility or pleasure type. Indeed, some people go so far as to suggest that Aristotelian friendships are ‘true’ friendships and that other types are not.

Aristotelian friendships have been analysed extensively in the philosophical literature. There are many alleged preconditions for such relationships. I won’t go through them all here, but I will mention four of the more popular ones:

Mutuality condition: There must be mutual sharing of values and interests. This is the most obvious condition since it is built into the definition of the friendship.

Honesty/authenticity condition: The participants in the friendship must be honest with each other. They must present themselves to each other as they truly are. They must not be selective, duplicitous or manipulative.

Equality condition: The participants must perceive themselves to be on an equal footing. One party cannot think themselves superior to the other (the idea is that if they did this would block mutuality).

Diversity condition: The participants must interact with one another in a varied and diverse set of circumstances (this facilitates a higher degree of mutuality than you might get in a pleasure friendship between two tennis-playing partners).

Whether all of these conditions are essential or not is a matter of some debate, but their combination certainly makes it easier to enter into an Aristotelian friendship.

It is important to recognise that Aristotelian friendships are an ideal. Not every friendship will live up to that ideal. Many of the friends you have had in your life probably fall well short of it. That doesn’t mean those friendships lacked value; it just means they weren’t as good as they could possibly have been.

Because it is an ideal, the risks entailed by an Aristotelian friendship are greater than those of other friendships. If you think you are in a true Aristotelian friendship with someone else, it is much worse to find that they have been lying to you or manipulating you than it would be if you only thought yourself to be in a pleasure or utility friendship. My tennis-playing partner could be lying to me about his job, his family, and his educational history and it wouldn’t really affect the pleasure of our interactions. It would be different if he was my Aristotelian friend.

That’s enough on the concept of friendship. Let’s look at how this concept can be used to make the case against robot relationships.


2. Robots Cannot be Your Aristotelian Friends
The first, and most obvious, argument you can make against robot relationships is that they can never realise the ideal of Aristotelian friendship. To put it formally:


  • (1) Aristotelian friendships require mutuality (shared interests, values, concerns), authenticity (of self-presentation), equality and diversity.
  • (2) Relationships with robots cannot satisfy all of these conditions.
  • (3) Therefore, relationships with robots can never be Aristotelian friendships.


We are granting premise (1) for the purposes of this discussion. That means premise (2) is the only thing up for grabs. The defender of that premise will claim that robots can never satisfy the mutuality condition because robots can never have inner mental lives: they cannot truly share with us; they do not have their own interests, values and concerns. They will also claim that robots cannot be authentic in their interactions with us. The manufacturers of the robots will trick them out with certain features that suggest the robot cares about us or has some inner mental life (maybe through variations in gesture and the intonation of the robot’s voice). But these are tricks: they mislead us as to the true nature of the robot. They will then argue that we can never be on an equal footing with a robot. The robot is too alien, too different, from us. It will be superior to us in some ways (e.g. in facial recognition and computation) but inferior in others. We will never be able to overcome the feeling of inequality. Finally, they will argue that most robots (for the foreseeable future) will be capable of interacting with us in limited ways. They will not be fully-functioning androids, capable of doing everything a human is capable of doing. Consequently, we will not be able to achieve the diversity of interaction with them that is needed for a true Aristotelian friendship.

Is this a good argument? Should it turn us against robot friendships? There are two major problems. The first, and less important, is that it is possible to push back against the defence of premise (2). There are two ways of doing this. You could take the ‘future possibility’ route and argue that even though robots are not yet capable of satisfying all these conditions, they will be (or may be) capable of doing so in the future. As they develop more sophisticated mental architectures, maybe they will become conscious and develop inner mental lives; maybe they will present authentic versions of themselves; and maybe they will be able to interact with us in more diverse ways (indeed, this last condition seems pretty likely). Alternatively, you could take the ‘performative/behaviourist’ route and argue that it doesn’t really matter if robots are not objectively/metaphysically capable of satisfying those conditions. All that matters is that they perform in such a way that we think they are satisfying those conditions. Thus, if it seems to us as though they share our values and interests, that they have some inner mental life, and that they are, more or less, equal to us, then that’s good enough.

I know some people are appalled by this second suggestion. They insist that the robot must really have an inner mental life; that it cannot simply go through the motions in order for us to form an Aristotelian bond with it. But I’m never convinced by this insistence. It just seems obvious to me that all human-human Aristotelian friendships are founded on a performative/behaviourist satisfaction of the relevant conditions. We don’t have access to someone’s inner mental life; we can never know whether they really share our values and concerns, or whether they are authentically representing themselves (whatever that might mean). All we ever have to go on is their performance. The problem at the moment is that robotic performances just aren’t good enough. If they get good enough, they will be indistinguishable from human performances. Then we’ll be able to form Aristotelian friendships with them.

I know some people will continue to be appalled by that claim. They will argue that it involves some manipulation or deception on the part of the robot manufacturers. But, again, I’m not convinced by this. For example, if a robot really seems like it cares for you or shares your interests, and if all its objective performances confirm this, then how is that deceptive or misleading? And if the robot eventually betrays your trust or, say, acts in ways that benefit its manufacturers and not your relationship with it, how is this any different from the betrayals and manipulations that are common in human-human friendships? Robot relationships might be no better than human relationships, but if they are performatively equivalent, I don’t see that they will be much worse.

That line of thought is a tough sell. Fortunately, you don’t need to accept it to reject the argument. The other problem with it, and by far the more important problem, is that it doesn’t really matter if robot relationships fail to live up to the Aristotelian ideal. There is no reason why we cannot form utility or pleasure friendships with robots. These relationships will have value and don’t require mutuality. They can be unidirectional. Clearly Davecat has formed some such bond with his RealDolls; and clearly the soldiers who worked with Boomer did too. As long as we can keep relationship types separate in our minds, there is no reason to reject a relationship simply because it falls short of the Aristotelian ideal.

The way to resist this is to argue that engaging in robot relationships cuts us off from the great good of Aristotelian friendships. That’s what the next argument tries to do.


3. The Corrosive Impact of Robot Relationships
The second argument you can make against robot relationships will claim that, even if we accept that robot relationships can only ever be of the pleasure/utility type, there is a danger that if we embrace them we will no longer have access to the great good of an Aristotelian friendship. This would be terrible because Aristotelian friendships are a form of human flourishing.

The argument is simple:


  • (4) If pleasure/utility relationships with robots would cut us off from Aristotelian friendships, then robot relationships would be a terrible thing to encourage.
  • (5) Pleasure/utility relationships with robots will cut us off from Aristotelian friendships.
  • (6) Therefore, robot relationships would be a terrible thing to encourage.


Premise (5) needs support and such support can come from two angles:


  • (7) Forced replacement: It is possible that some people will be forced to only interact with robots in the future: their potential human interactions will be eliminated. This will block them from accessing Aristotelian friendships (because robots cannot be our Aristotelian friends).
  • (8) Corrosion problem: If people enter into pleasure/utility relationships with robots they will be motivated to adopt a more shallow, utility and pleasure seeking attitude with their human friends. This means that even though Aristotelian friendships remain an open possibility, they are less likely to be achieved.


The forced replacement argument is often made in relation to the elderly. There is a noticeable drive to use robots in the care of elderly people. The elderly are often socially isolated. If they have no families, the only human contact they have is, sometimes, with their care workers. Now, admittedly, the care relationship is distinguishable from friendship. But the elderly do sometimes enter into friendships with their carers. If all human contacts are replaced by robots, they will no longer have access to the possibility of an Aristotelian friendship.

The corrosion problem has previously been identified in relation to online friendships and the style of interaction they encourage. The kinds of interactions and friendships we can have online are, according to critics, remarkably shallow. They often consist of perfunctory gestures like posting status updates and liking or emoticonning those updates. These interactions can have utility and can be pleasurable (the new likes and retweets give you a jolt of pleasure when you see them), but they are not deep and diversified. Some worry that such shallow interactions carry over to the real world: we become accustomed to the online mode of interaction and perpetuate it in our offline interactions. By analogy you could argue that the same thing will happen if robot relationships become normalised.

Is this argument any good? It’s probably more formidable than the first but I think the fears to which it alludes are overstated. I don’t deny that there is a drive toward the use of robots in certain relationship settings — such as care of the elderly. And (assuming we can’t form Aristotelian friendships with robots) it would be bad if that were the only kind of interaction an elderly person had. But I think the forced replacement idea is fanciful. I don’t think anyone is going to force people to only interact with robots.

What is more likely to happen is that people will ignore the elderly because they find it too unpleasant or uncomfortable to interact with them due to their care requirements. They will prefer to outsource this to professionals and will not wish to engage with loved ones or parents in states of senescence. On top of that, we are in the midst of a significant demographic shift toward ageing populations. This means the care burden in the future will increase. It is probably impossible and unfair to expect the shrinking younger generations to shoulder that burden. Some robotic outsourcing might be necessary.

But, in fact, I think the robots could actually help to facilitate better friendships with those for whom we need to care. Remember the conditions for an Aristotelian friendship. One of them is that participants should be on an equal footing. This is often not possible in a caring relationship. One party sees the other as an inferior: someone in a state of decline or dependency. It is only through the good will of one party that they are enabled to flourish. Giving the more dependent partner some robotic assistance may actually enable a better friendship between the two humans. In this way, robots could complement or promote, rather than corrode and undermine, Aristotelian friendships. A dyadic relationship of inequality between two humans is replaced by a triadic relationship of greater equality between two humans and a robot.

This could be a generalisable point. The comments I made previously about online friendships have been challenged in the philosophical literature. Some people argue that online interactions can be (even if they often aren’t) deep and that the ‘gating/filtering’ features that are often lamented (e.g. the anonymity or selective presentation) can be a boon. In the real world, we frequently interact with one another on unequal terms. If I see you and talk to you I will be able to determine things about your ethnic or socio-cultural background. I might look down on you (sub-consciously or consciously) as a result. But you can hide some of these things online, putting us on a more equal footing. I’m not saying that adding a robot to a dyadic relationship can do the same things as online gating/filtering, but it could have an analogous effect in the real world.

Furthermore, I think there are other reasons to suspect that robot friendships could promote, rather than corrode, Aristotelian friendships. I think in particular of Peter Singer’s arguments about the expanding circle of moral concern. It could be that personifying robots, ascribing to them properties or characteristics of humanity, will train up our empathic concern for others. We would no longer treat them in an objectifying, tool-like way. Some people hate this idea — they say that robots should always be our slaves — but I think there could be benefits from seeing them in a more humanising light. Again, it could encourage us to have more fulfilling interactions with our fellow human beings.


4. Conclusion
To sum up, Aristotelian friendships are held to be a great good - something to which an ideal human life should have access. People might object to robot relationships on the grounds that (a) they can never attain the Aristotelian ideal and/or (b) even if they have other benefits, they cut us off from the Aristotelian ideal.

There are reasons to doubt this. Robots might be able to attain the Aristotelian ideal if they are performatively equivalent to human friends. And even if they can’t, there is reason to suspect that they could complement or promote Aristotelian friendships amongst humans, not corrode or undermine them.

Tuesday, February 14, 2017

The Ethics of Trigger Warnings: A Review of the Arguments




[If you like this blog, consider signing up for the newsletter...]

[Trigger Warning: This post is about trigger warnings]

I have taught a number of controversial topics in my time. I have taught about the ethics of sex work, the criminalisation of incest, the problems of rape and sexual assault, the permissibility of torture, the effectiveness of the death penalty, the problems of racial profiling and bias in the criminal justice system, and the natural law argument against homosexuality (to name but a few). I try to treat these topics with a degree of seriousness and detachment. I tell students that the classroom is a space for exploring these issues in a discursive and reflective manner. I also warn them to be respectful to their classmates when expressing opinions. They may not realise how the issues we discuss affect others in the class.

That said, I’ve never had an explicit policy or practice of issuing trigger warnings. To be honest, I had never even heard the term until about 2013. But from then, until roughly the end of 2015, there seemed to be an explosion of interest in the ethics of trigger warnings. Students started demanding them in a number of US universities. And opinion writers fumed and fulminated about them in newspapers and websites. A lot of heat was generated but little light. Commentators divided into pro and anti camps and became deeply entrenched in their positions.

My general sense is that the debate about trigger warnings has waned since its peak. There is certainly still a vigorous debate about ‘coddling’ on university campuses, and a persistent desire to create ‘safe spaces’ in order to accommodate oppressed minorities, but the specific debate about trigger warnings seems (to me at any rate) to have faded into the background.

So that means now is probably a good time to reflect on its merits and see whether it casts any light on the more general debate about safe spaces and campus ‘coddling’. Fortunately there are some academic resources that help us to do this. Wendy Wyatt’s article “The Ethics of Trigger Warnings” is a particularly useful guide to the topic and I’m going to summarise and evaluate some of its contents over the next two posts.

Wyatt’s article comes in two halves. The first half reviews the arguments of the ‘pro’ and ‘anti’ groups. The second half defends Wyatt’s own view on the ethics of trigger warnings. There is something of a disconnect between the two halves. Although she promises to evaluate and engage with the arguments of the pro and anti sides presented in the first half, on my reading she doesn’t really do this in the second half. She summarises their views and then develops her own. Her position certainly builds upon and refers back to the arguments of the pro and anti sides, but she does not specifically evaluate the merits of their arguments.

I’m going to try to make up for this omission in the remainder of this post. I will review the seven ‘anti’ and four ‘pro’ arguments that she identifies in her article. I will try to formalise them into simple arguments and reveal their hidden assumptions. I will then subject them to some critical evaluation. My evaluation will be light. As you will see, the debate about trigger warnings raises lots of issues in philosophy, psychology and sociology. It would be beyond my knowledge to fully evaluate all these issues. So, instead, I will limit myself to identifying possible weaknesses and areas requiring greater scrutiny.

Note: I write this with considerable trepidation. Dipping your toe into any campus politics debate seems to be a surefire way to attract the ire (or worship) of some. I hope to avoid both fates.


1. Seven Arguments Against Trigger Warnings

As Wyatt notes, most media commentators are sceptical (and maybe even contemptuous) of trigger warnings and the culture from which they emerge. They think that trigger warnings are counterproductive to the aims and ethos of higher education, and possibly harmful to students and society more generally. Seven objections, in particular, have been voiced by the critics.

1.1 - It’s contrary to the purpose of higher education
The first objection is that trigger warnings subvert the point of higher education. Higher education is supposed to challenge students, to push them outside their comfort zones, and to encourage them to confront divisive and contentious issues. Trigger warnings enable students to stay within their comfort zones and avoid divisive topics.

To put this objection in more formal terms:


  • (1) The purpose of higher education is to push students outside their comfort zones, to confront difficult, uncomfortable and divisive topics.
  • (2) Trigger warnings enable students to stay within their comfort zones and avoid difficult, uncomfortable and divisive topics.
  • (3) Therefore, trigger warnings are contrary to the purpose of higher education.


I’m inclined to accept premise (1), with some caveats. The major caveat is that I doubt that this is the primary or sole purpose of higher education. I suspect it is really only a secondary consequence of one of many purposes of higher education. In other words, I’m not convinced that confronting difficult and uncomfortable topics is an end-in-itself for higher education. I think the primary purposes of higher education are plural — teaching students important skills, imparting important knowledge, encouraging them to be critical and self-reflective, preparing them for employment etc. — and that in many instances achieving these purposes will require them to confront difficult or uncomfortable topics. I cannot imagine teaching an ethics course, for instance, without doing this. But some subjects — e.g. physics or advanced mathematics — seem to me like they would not necessarily require students to confront uncomfortable or divisive topics (although I guess certain aspects of physics could be uncomfortable to those with contrary religious beliefs). On top of this, I don’t think anyone wants students to feel uncomfortable just for the sake of it; rather, some discomfort is unavoidable if they are going to deal with certain topics.

The second premise strikes me as being more problematic. Most supporters of trigger warnings will argue that their purpose is not to enable students to avoid uncomfortable topics but rather to enable them to prepare to engage with such topics. Thus, students are not supposed to use trigger warnings as an excuse to avoid learning outcomes, but as a tool to facilitate better participation in the activities that lead to those outcomes. This response strikes me as plausible when it comes to stating the intention behind their use, but I think some caution is needed. There is a danger that trigger warnings foster ‘respect creep’, whereby they start off as a tool for enabling participation but end up as an excuse to avoid participation due to the need to respect the vulnerabilities of the students in question. This ‘slippery slope’ style concern becomes something of a theme in the remainder of this post.



1.2 - They threaten free speech and academic freedom
The second objection to trigger warnings focuses on the value of free speech and academic freedom, and suggests that these warnings undermine those values because they are tantamount to a form of censorship. In argumentative form:


  • (4) Free speech and academic freedom are important values and are undermined by censorship.
  • (5) Trigger warnings censor the expression of certain ideas.
  • (6) Therefore, trigger warnings undermine free speech and academic freedom.


I don’t think this is a good objection to trigger warnings. I’d probably be willing to grant premise (4) for the sake of argument, although I should enter some objections. I am not a free speech absolutist. It seems obvious to me that certain forms of speech are properly regulated and censored (e.g. fraud and coercive threats). The key question is whether there is an appropriate and trustworthy censor. Oftentimes there is not. Brian Leiter’s paper ‘The Case Against Free Speech’ outlines a position on this with which I am sympathetic. That said, I do think free speech and academic freedom are important values in universities.

The bigger issue is with premise (5). Trigger warnings do not seem to me to amount to censorship. Their original intention is not to prevent the expression of ideas. Their intention is to facilitate student engagement with ideas in the classroom: you give the trigger warning and then express the controversial idea or opinion. Furthermore, their relevance is to the classroom environment alone, where special obligations exist between the teacher and his/her students, which may justify some reduction in speech protection. Free speech and academic freedom relate more to the university and academic life as a whole, not to what happens in the classroom. To that extent, other manifestations of campus ‘coddling’ such as ‘no platforming’ or the desire to make the entire university a ‘safe space’ are greater threats to free speech and academic freedom. If trigger warnings are a slippery slope to those actions, then there may be reason to oppose them on these grounds. But if you can block the slippery slope, there is less reason to be concerned.



1.3 - Warnings encourage infantilisation
The third objection focuses on both the character-building function of education and the need to treat students as adults. The idea is that trigger warnings encourage students to see themselves as delicate, vulnerable, infantilised adults — always at the risk of being traumatised by reality — and that this is a bad thing. In argumentative form:


  • (7) It is bad for adults to view themselves as delicate and vulnerable (i.e. to have an infantilised self-conception).
  • (8) Trigger warnings encourage an infantilised self-conception.
  • (9) Therefore, trigger warnings are bad.


As someone who is undoubtedly an infantilised adult, I find it difficult to fully embrace premise (7). Being infant-like in certain respects seems like a good thing (e.g. maintaining a child-like sense of wonder or curiosity). Of course, this argument is specifically targeted at the vulnerable aspects of infant-likeness. I’m more sympathetic to the notion that this is bad. But I often wonder why it is bad. Is it bad for instrumental reasons? If you view yourself as a perpetual potential victim, will you have a tougher time in life? The real world — we are often told — won’t be sympathetic to our vulnerabilities. We need to toughen up. I’m sceptical of this kind of instrumental argument. I’m not convinced that the ‘real world’ is necessarily unsympathetic to individual vulnerabilities. I think we are increasingly building a culture in which vulnerabilities are protected and insulated, well beyond the walls of the university (although it is, admittedly, difficult to see that at this moment in our political history). I think it makes far more sense to argue that viewing oneself as a perpetual potential victim is intrinsically bad (maybe because it denies or suppresses agency/autonomy).

Anyway, if we grant premise (7), we are still left with something of a hurdle to clear in relation to premise (8). The problem is that the causal connection between trigger warnings and an infantilised self-conception is indirect. There is no reason to think being exposed to a trigger warning necessarily causes this. It’s more plausible to suppose that trigger warnings, as part of a general confluence of factors favouring infantilisation, have this effect. But if the causal effect is more subtle and indirect, the specific argument against trigger warnings is weaker.




1.4 - The Impossibility of Anticipation
The fourth objection to trigger warnings is that it is impossible to anticipate someone’s triggers. There may be some relatively clear-cut cases. If I showed a video of an ISIS beheading it would probably be remiss of me not to issue some trigger warning. I can be pretty confident that this type of content would be traumatising. But beyond the paradigmatic cases, there is much more uncertainty. Sufferers of PTSD can be triggered by the strangest and least predictable things. You then end up with an odd result: everything is potentially triggering and so blanket warnings need to be issued. But blanket warnings are unlikely to be effective as they provide no specific guidance and are likely to be ignored or overlooked.

To put this more formally:


  • (10) If it is impossible to anticipate every trigger, then blanket trigger warnings will need to be given for all course material, no matter how innocuous it may seem.
  • (11) It is impossible to anticipate every trigger.
  • (12) Therefore, blanket trigger warnings need to be given.
  • (13) If blanket trigger warnings need to be given, they are unlikely to be effective.
  • (14) Therefore, trigger warnings are unlikely to be effective.


This strikes me as being a better critique of trigger warnings. From what I have read, premise (11) does appear to be true. Victims of PTSD sometimes claim that their triggers are odd and unpredictable. And there is something of a reductio involved in giving blanket trigger warnings. If that’s what you are doing, it would seem that the warning is not so much about helping and respecting students with vulnerabilities but rather about social signalling.

That said, even though I think this is a better critique, I have some concerns about premises (10) and (13). I think someone might resist the slide from the impossibility of perfect anticipation to the need for blanket warnings by limiting themselves to the paradigm cases. I also think blanket warnings might be effective for some, even if they are useless for most.




1.5 - Political Correctness Run Amok
A fifth objection to trigger warnings is that they are yet another symptom of ‘political correctness’ run amok. Wyatt cites several commentators making this claim, often supplemented by some critique of an associated ideological agenda (e.g. social justice, feminism). It’s difficult to know what to make of this objection because it is usually defended in enthymematic form, i.e. with its underlying normative principle implied rather than expressed. If we make that normative principle explicit, problems emerge.

The argument must be something like the following:


  • (15) Trigger warnings are political correctness run amok.
  • (16) Political correctness is bad.
  • (17) Therefore, trigger warnings are bad.


Let’s grant (15) for the sake of argument. That leaves us with (16). My problem with this is simple. ‘Political correctness’ is just way too vague for me to know what to make of this argument. You would need to have a more specific conception of political correctness for the argument to have any weight. But if you make it more specific you probably reduce the debate to one about a specific political agenda (such as feminism and/or social justice - both of which are also vague). It is going to be very difficult to evaluate such an agenda in a comprehensive way and the argument is consequently going to remain highly contested.

For what it is worth, I think certain manifestations of political correctness are morally sensible and probably commendable (e.g. condemning the use of racial slurs), and others are more silly and counterproductive. I suspect the same is true for trigger warnings: some (such as the hypothetical warning one might give before displaying an ISIS beheading video) are morally sensible and others less so.




1.6 - Trigger Warnings are Ineffective and Possibly Harmful
A sixth objection focuses on the origins of trigger warnings in relation to PTSD. The originally conceived purpose for trigger warnings was to help prevent distress amongst those suffering from some sort of post-traumatic stress disorder. But opponents argue that trigger warnings encourage people to avoid triggers and this is contrary to the long-term health and well-being of PTSD sufferers. Gradual exposure to and desensitisation toward triggers is the more effective treatment.

I would state this argument like this:


  • (18) The most effective treatment for PTSD is gradual exposure to (and desensitisation toward) triggers; avoidance of triggers is counterproductive to the long-term health and well-being of the sufferer.
  • (19) Trigger warnings encourage PTSD sufferers to avoid triggers.
  • (20) Therefore, trigger warnings are counterproductive to the long-term health and well being of PTSD sufferers.


A few points about this argument. First, as noted above, most proponents of trigger warnings will resist premise (19). They will argue that the purpose is not to encourage avoidance but to enable participation. But there may well be a gap between intended purpose and effect: trigger warnings may not be intended to encourage avoidance but they may end up working that way. Second, and more importantly, I think this argument does help make a significant point. There is a danger that professors and lecturers think they are helping their students by issuing trigger warnings, and that this is all they need to do to help sufferers of PTSD, but this may not be the case if they are simply fostering avoidance. There is, consequently, a danger that in embracing trigger warnings professors will wash their hands of other pastoral duties they may owe to their students.

Finally, I question the assumption underlying this argument, namely: that trigger warnings are about helping sufferers of PTSD. That might have been true originally but I suspect nowadays that trigger warnings have evolved into being about something else. In particular, I think they have probably evolved into a social signalling tool. They say to students ‘you are welcome here’ or ‘I share a certain set of values and assumptions with you’ and so on. Some might view those signals positively; some might view them negatively.




1.7 - Society at risk argument
The final objection is the most general. It claims that trigger warnings have negative consequences for society at large as they help to breed a culture of hypersensitivity and victimhood.


  • (21) It is bad to have a culture of hypersensitivity and victimhood.
  • (22) Trigger warnings help to foster a culture of hypersensitivity and victimhood.
  • (23) Therefore, trigger warnings are damaging to society.


Premise (21) sounds plausible, but I suspect that is because the terms used within it are deliberately hyperbolic and pejorative. A culture of victimhood probably is a bad thing, but that doesn’t mean that there aren’t genuine victims who need our respect and assistance. Likewise, ‘hyper’-sensitivity is obviously excessive, but surely some degree of sensitivity is a good thing? Proponents of trigger warnings are likely to reframe the societal consequences as a positive. Where a critic sees hypersensitivity and victimhood, they will see tolerance, respect and care. It’s difficult to say who is right in the abstract. I’m certainly concerned about the slide toward hypersensitivity and victimhood, but I struggle sometimes to rationalise my concern.

Premise (22) also seems problematic for reasons stated previously. Trigger warnings are unlikely to cause these negative effects in and of themselves. Their effect, rather, will be part of a general confluence of factors. This makes it difficult to weigh this argument. If there are some benefits to trigger warnings, there may be reason to encourage them even if they contribute to a culture of hypersensitivity. It might be better to target the other factors that contribute to such a culture.




1.8 - Interim Summary
Before we proceed to address the positive arguments, let’s quickly get our bearings. My general sense from this analysis is that some objections to trigger warnings (e.g. the impossibility of anticipation objection) are better than others (e.g. the political correctness objection). I also think it’s clear that many of the objections to trigger warnings work on the assumption that they will not function as originally intended and will have dangerous spillover effects. If they functioned purely as intended — as a way to facilitate rather than discourage classroom participation — they would be relatively innocuous.


2. Four Arguments in Favour of Trigger Warnings

You might think it is unnecessary to go through the arguments in favour of trigger warnings at this stage. After all, some of them were implicit in the analysis of the arguments we just reviewed. Nevertheless, some engagement with the positive arguments is worthwhile, if only because it will change one’s perspective on the debate. Fortunately, we can be briefer in this discussion since much of the relevant territory has already been covered.


2.1 - Appropriate accommodation for students with mental illnesses
The first argument in favour of trigger warnings is that they are an appropriate way to accommodate students with mental health problems. This is part of the original rationale for trigger warnings and seems like the most sensible argument in their favour. After all, we make reasonable accommodations for students with health problems or disabilities all the time.


  • (24) It is right and proper that we provide reasonable accommodation for students with disabilities and other health problems.
  • (25) Trigger warnings are a reasonable accommodation for students with mental health problems.
  • (26) Therefore, it is right and proper to issue trigger warnings.


Premise (24) is relatively unobjectionable. It is a widely accepted principle of equality law in many jurisdictions. The difficulty is in determining what is ‘reasonable’. Anyone who has dealt with a university disability service will have some sense of the difficulties that can arise. For example, I like to assess students’ public speaking in some of my classes but I have frequently been told that students with anxiety disorders cannot be compelled to undertake this form of assignment. I have issues with this since I think developing public speaking skills is important, but I often take the path of least resistance and don’t kick up a fuss (and, in any event, many students with anxiety disorders tell me they are happy to take the tests since they would like to develop these skills themselves).

Premise (25) is the more contentious one. Defenders of trigger warnings will claim that offering a trigger warning is like warning someone with epilepsy that strobe lights will be used in a performance. They are only for those with mental health problems and can be easily ignored by everyone else. But as we saw above, opponents will argue that things are not so simple. Trigger warnings, they say, may encourage all students to think they have mental health problems (that they are more vulnerable than they really are or should be). They may also argue that trigger warnings will not be very effective forms of reasonable accommodation. As per argument 1.6, they may argue that they will be counterproductive for true sufferers of PTSD.




2.2 - Trigger warnings as small acts of empathy that minimise harm
The second argument is somewhat similar to the first. Where the first argument focused specifically on obligations to those with disabilities and mental health problems, the second argument focuses on general principles of decency and humanity. The idea is that some people are genuinely vulnerable and we owe it to them to make the classroom as welcoming a place as possible. Trigger warnings allow us to do that.


  • (27) It is right and proper to treat others (students in particular) with respect, decency and tolerance.
  • (28) Trigger warnings allow us to treat students with respect, decency and tolerance.
  • (29) Therefore, it is right and proper to issue trigger warnings.


The difficulties with this argument will be familiar by now. I don’t think anyone would deny that we ought to treat others with respect and decency. What they might question is whether this is a paramount or primary duty. Maybe, as educators, our primary duty is to the truth, not to tolerance and respect? They may also question whether trigger warnings are the best way to show respect and decency. Many will favour a ‘you have to be cruel to be kind’ mentality, which argues that being overly deferential to a student’s perceived vulnerabilities will be counterproductive to their long-term success and well-being.



2.3 - Trigger warnings ensure transparency
The third argument in favour of trigger warnings is different from the preceding two. Where they focused on our duties to the more vulnerable students, this one focuses on making the classroom better for all, irrespective of their underlying psychological makeup. The idea is that giving trigger warnings facilitates transparency and choice: it allows students to know what they are going to face on a given course and allows them to choose when to face potentially disturbing or traumatic material.


  • (30) It is good to facilitate transparency and choice in education.
  • (31) Trigger warnings help to facilitate transparency and choice.
  • (32) Therefore, trigger warnings are good for education.


Premise (30) is bound to raise the hackles of some academics. Transparency and choice may sound unobjectionable at first glance — there are many ways in which universities are keen to promote both — but I also know many professors and lecturers who think there are limits to this. They will argue that education is, ultimately, premised on an asymmetrical relationship: the teacher should know more (and better) than the students about some things, particularly about the content students ought to be taught. Students cannot determine everything about their education. There are legitimate compulsory college classes — subjects deemed important for all students taking a particular course — and there is a widespread belief that teachers should get to determine what is pedagogically appropriate material for their classes. So premise (30) is unlikely to be embraced in an unqualified form.

Fortunately, that may not matter. It may be possible for premise (31) to work with a more qualified form of premise (30). After all, it’s not like trigger warnings need to facilitate complete transparency and choice. Indeed, if we are reduced to issuing blanket trigger warnings there may be very little transparency involved. And if trigger warnings are issued in course catalogues or module descriptions — and not for individual lectures or module topics — then choice could be facilitated at the point of entry into a class without compromising an individual lecturer's ability to determine course content.

On top of this, the kinds of concerns I list here assume that students are being enabled to avoid or drop out of classes or topics they find objectionable. If trigger warnings function as most of their proponents intend — as ways to facilitate rather than discourage participation — these concerns may not be well-founded.




2.4 - Trigger warnings foster more authentic and honest discussion
The final argument in favour of trigger warnings turns the typical objection to them on its head. Many people associate trigger warnings with political correctness, censorship and the suppression of authentic and honest discussion. Professors are allegedly unable to confront important issues because they have to kowtow to the vulnerabilities of their students. But what if the opposite is true? What if trigger warnings actually enable a more honest confrontation with the truth?

The argument might work like this:


  • (33) Authentic/honest discussion of controversial subject matter is only possible if people use precise terminology and avoid euphemisms.
  • (34) Trigger warnings enable people to use precise terminology and avoid euphemisms.
  • (35) Therefore, trigger warnings encourage authentic/honest discussion of controversial subject matter.


I think this is a pretty interesting argument. It again derives its force from the intended purpose of trigger warnings. If they don’t simply function as an excuse for people to avoid difficult subject matter and instead facilitate participation in classroom discussions, they may also help to foster a more honest dialogue. If you give the trigger warnings up front, you now have some justification for using exact terms in your discussions of violence, sexual abuse and racism, instead of hiding behind euphemisms or ignoring the topics entirely. People have been properly warned that they may encounter disturbing material so why not use this to actually discuss disturbing material?

Whether this argument works, of course, depends on the indirect and unintended consequences of trigger warnings. As noted several times already, they may not facilitate participation as intended; they may simply facilitate avoidance. And they may form part of a general confluence of factors that limits honest discussion on campus.




3. Conclusion
That brings us to the end of this post. Hopefully this review of the arguments has been useful. I haven't come down decisively in favour of one point of view here. I'm somewhat conflicted myself. I'm probably constitutionally or dispositionally inclined toward the anti-view, but I think many of the arguments in favour of that view are less persuasive than they first appear. They do not focus on the primary intended effects of trigger warnings. Instead they worry about things that are far more difficult to assess, like the long-term or downstream consequences of a trigger warning culture. That said, the arguments in favour of trigger warnings have many weak spots too. They may have good intentions lying behind them but there isn't strong evidence to suggest that they work as intended (there are anecdotes of course) and they may encourage a degree of complacency that is counterproductive to their intended aims.

As I said at the outset, this review of arguments only covers the first half of Wyatt's article. Her main goal was not to evaluate each of these arguments but to defend her own take on the use of trigger warnings. I'll look at that in a future post.

Thursday, February 9, 2017

Pornography and the Philosophy of Fiction




[If you like this blog, consider signing up for the newsletter...]

Pornography is now ubiquitous. If you have an internet connection, you have access to a virtually inexhaustible supply of the stuff. Debates rage over whether this is a good or bad thing. There are long-standing research programmes in psychology and philosophy that focus on the ethical and social consequences of exposure to pornography. These debates often raise important questions about human sexuality, gender equality, sexual aggression and violence. They also often touch upon (esoteric) aspects of the philosophy of speech acts and freedom of expression. Noticeably neglected in the debate is any discussion of the fictional nature of pornography and how it affects its social reception.

That, at any rate, is the claim made by Shen-yi Liao and Sara Protasi in their article ‘The Fictional Character of Pornography’. In it, they draw upon a number of ideas in the philosophy of aesthetics in an effort to refine the arguments made by participants in the pornography debate. Their thesis is simple but has several parts. I’ll call it the “genre is important” thesis:

“Genre is important” thesis: When debating the social consequences of pornography, we ought to pay more attention to the genre of the pornography under consideration. Some pornographic representations are ‘response realistic’ and hence persuade users to respond to real-world sexual interactions in the same way that they respond to the fictional ones. Some are ‘response unrealistic’ and hence do not persuade users to respond to real-world sexual interactions in the same way that they respond to fictional ones.

Over the remainder of this post, I want to look at the argument they develop in support of this thesis. As I hope becomes clear, I think Liao and Protasi make some valid and persuasive points, but I do wonder about the practical significance of what they have to say.


1. Pornography as Fiction
First I need to clarify some of the terminology Liao and Protasi use in their argument. The most important is how they understand the term ‘fiction’:

Fiction = Any representation that prompts imaginings.

This definition comes from the work of Kendall Walton. It has some subtleties. Under this definition, a ‘fiction’ need not be fictional. A fictional representation may be based on real-world events or depict real actions (as in the case of a documentary film). It may also, of course, be based on fake or invented events and actions (as in a drama or satire). This is important because pornography usually does represent real-world events and actions (to state the obvious: the people represented often really are having sex). This does not prevent such representations from being fictions in the defined sense.

The more important part of the definition concerns the prompting of imagination. Liao and Protasi have a longish argument in their paper as to why sexual desire (as an appetite) involves imagination and hence why pornographic representations often prompt imaginings. That argument is interesting, but I’m going to skip over the details here. The important point is that in satisfying our sexual appetites we often engage the imagination (imagining certain roles or actions). Indeed, the sexual appetite might be unique among appetites as being the one that can be satisfied purely through the imagination. Furthermore, the typical user of pornography will often engage their imaginations when using it. They will imagine themselves being involved (directly or indirectly) in the represented sexual acts.

Why is it important to understand pornography as fiction? It is important because some people who argue against pornography appeal to its fictional character. The most noteworthy example of this is Anne Eaton. Back in 2007 she published an article called ‘A Sensible Antiporn Feminism’, which is required reading for anyone with an interest in this issue (followed by a ‘response to critics’ in 2008). In that article she presents a very careful analysis of the claim that pornography causes harm to women.* She distinguishes various forms that this claim could take and ends up endorsing one version of it.

She argues that inegalitarian pornography may cause harm to women because of the way in which it engages our imagination. She defines inegalitarian pornography as any representation that eroticises relationships of inequality between the genders. Most mainstream hardcore pornography would count as inegalitarian under this definition, though so too would BDSM pornography. She then argues that the problem with such pornography is that it encourages us to export our responses to the inegalitarian representations in the fictional realm out into our real-world sexual interactions:

In so far as inegalitarian pornography succeeds in rendering inegalitarian sex — in all its forms — sexy, it convinces its users that inegalitarian sex is in fact desirable, i.e. worthy of desire…[this results in] the deformation of our emotional capacities and the resulting taste for inegalitarian sex of differing varieties and strengths. 
(Eaton 2008, 4)

The idea is that when you view inegalitarian pornography you are sexually aroused by its representations — that is the intention behind its creation. You are then encouraged to export that arousal out into the real world. The backbone of Eaton’s argument is, then, a transference principle:

Transference Principle: Inegalitarian pornographic representations encourage us to transfer our attitudes towards the representations out into the real world.

Liao and Protasi’s argument takes aim at this transference principle. They argue that not all inegalitarian pornography encourages such transference. It depends on its genre.


2. Response Realistic Fiction vs Response Unrealistic Fiction
Liao and Protasi’s argument hinges on the distinction between response realistic and response unrealistic fictions. Roughly, the distinction is as follows:

Response Realistic Fiction: Makes some normative claims on real-world attitudes and responses. In other words, it encourages reactions to fictional representations that are similar to reactions to analogous real-world events.
Response Unrealistic Fiction: Does not make normative claims on real world attitudes and responses. In other words, it does not encourage reactions to fictional representations that are similar to reactions to analogous real-world events.

Whether fiction is response-realistic or not depends on its genre. It is easiest to explain this with an example. Liao and Protasi contrast dramas and satires in their paper. Take the TV show The Wire. This is a highly realistic, oftentimes gritty, look at the drug war. Its fictional representations (the characters and events in the show) try to be as faithful to real-world analogues as possible. This is very clear when you watch the show. It is, thus, a response realistic fiction. It encourages reactions to the fiction that transfer out to the real world.

Contrast that with the film Dr Strangelove. This is a highly satirical, occasionally ridiculous, look at nuclear deterrence. Its fictional representations are amped-up and hyperbolic. They are not intended to be faithful to their real-world equivalents. But that doesn’t mean that it is not meant to be taken seriously. Stanley Kubrick obviously wanted us to laugh at the movie, but he also wanted us to realise how catastrophic the logic of mutually assured destruction can be. He wanted us to take the threat of real-world nuclear war seriously. The film was, consequently, a response unrealistic fiction. It did not encourage reactions to the fiction that transferred out to the real world. Where we laughed in response to the film we should probably cry (or get angry) in the real world.

Hopefully this explains the distinction between the two kinds of fiction. Before I proceed to Liao and Protasi’s critique of Eaton, I need to clarify one further detail. In the definition of the two types of fiction, I referred to the ‘normative claims’ they make on real world responses. What does this mean? Liao and Protasi cash this out in terms of a ‘responsibility for’ relationship. The idea is that fiction, depending on its nature and genre, takes responsibility for certain real world reactions. Consequently, it can be normatively criticised or blamed for certain real world responses and not for others. This is quite distinct from saying that the fictional representation causes a real world response. Somebody who watches Dr. Strangelove might end up laughing at real world events associated with nuclear deterrence and consequently not take them seriously. But the fiction does not normatively endorse that response. It doesn’t take responsibility for it.

This focus on what the fiction is responsible for ends up being quite important.


3. The Importance of Genre in the Porn Debate
All the pieces of the puzzle are now in place. We can proceed to Liao and Protasi’s main argument. You can probably guess how it goes. As I mentioned earlier, they take issue with Eaton’s defence of the transference thesis and they do so by arguing that different genres of porn normatively endorse different real world responses.

The problem is that Eaton’s definition of inegalitarian pornography is too all-encompassing. It includes all eroticised depictions of unequal sexual relations. It doesn’t pay attention to the different genres of porn that include such depictions. In particular, it doesn’t pay attention to the distinction between “mainstream” hardcore pornography and BDSM pornography. Both genres eroticise relationships of inequality, but only the former is response realistic. BDSM pornography is intended to be normatively quarantined from the real world. Just because women (say) in BDSM representations find pain to be sexually pleasurable does not mean that women in the real world will too. The ordinary consumer of such pornography will, according to Liao and Protasi, recognise these genre conventions.

This gives us the following argument against Eaton’s transference principle:


  • (1) Fictional representations only normatively warrant (are only ‘responsible for’) the transference of fictional responses to the real world when they are response realistic.
  • (2) Not all inegalitarian pornography is response realistic: mainstream, hardcore pornography might be, but BDSM pornography is not.
  • (3) Eaton’s transference principle assumes that all inegalitarian pornography normatively warrants the transference of fictional responses to the real world.
  • (4) Therefore, Eaton’s transference principle is false.


What should we make of this argument? I find much to admire in it. I think Liao and Protasi have presented a fascinating and in many instances plausible account of how different fictional representations function in our imaginations. That said, I have two major concerns about the argument and its practical significance.

First, I think that Liao and Protasi subtly shift the goalposts in the course of their discussion. When I read Eaton’s paper, I interpreted her argument as being about the harms that are caused by exposure to inegalitarian pornography, not about the harms for which such porn is responsible. Liao and Protasi seem to shift from the former to the latter in their analysis. I grant that, given its genre, BDSM porn might not be normatively responsible (whatever that might mean) for the transference of fictional responses to the real world, but that does not mean that it is not causally responsible for such transference. What’s more, I think most of the debates about pornography are properly concerned with causation, not responsibility (except, perhaps, for certain debates about legal remedies for the real-world harms of pornography, if any). What might be true — and maybe this is all that Liao and Protasi intended to show — is that genre conventions and norms may impact causation. Thus, if BDSM pornography does not normatively warrant real world responses it may be less likely to cause such responses. That sounds plausible and may be empirically supported.

Second, I worry about the sustainability of any claim to the effect that a given pornographic representation belongs to a particular genre. As Liao and Protasi themselves point out, fictions can belong to multiple genres. A given episode of The Wire might include some satirical or comedic elements alongside its realistic ones. Furthermore, I’m not sure what conditions justify or determine genre membership. Is it the intentions of the creator? The beliefs of the general community? Or the beliefs of the interpreter? Each of these has its problems and could undermine the argument outlined above. For instance, if the beliefs of the interpreter are what determine genre membership, then I can’t see why a consumer of mainstream, hardcore pornography could not view it as an unrealistic, fantastical representation of (say) female sexual desire. In other words, I can’t see why an individual user could not quarantine their responses to the fictional representations from their real world responses. This would make it effectively equivalent, in their imaginations, to BDSM porn (in Liao and Protasi’s analysis).

Those are just some quick disagreements. I’ll close on a point of agreement. Liao and Protasi end their paper by suggesting that those who are interested in the effects of pornography could learn a lot by looking at the psychology of fiction and the research that has been done on our reactions to fictional representations. They also think that future research into the effects of pornography could benefit from paying closer attention to its multiplicity of genres. I think this is sensible advice.


* This is distinct from the claim that pornography results from harm or constitutes harm.

Monday, February 6, 2017

Understanding the Algorithmic Self (Videos)



[If you like this blog, consider signing up for the newsletter...]

On Friday 27th of January, I hosted a workshop focused on self-tracking and quantification at NUI Galway. The workshop dealt with two main questions:

  • How do self-tracking and quantification affect our self-understanding?

  • How do self-tracking and quantification affect how we are governed (by ourselves, our lovers, our friends and our employers)?

To answer these questions, I brought together four speakers to deal with the practice of self-tracking in four different domains of personal life. You can watch the talks below.

1. Why the Algorithmic Self? (Introduction)



This was an introductory talk by me (John Danaher) explaining the purposes behind this particular workshop.


2. The Algorithmic Self at Play (Jane Walsh - NUIG)



Dr Jane Walsh from the mHealth research cluster at NUIG spoke about the rise of self-tracking and wearables in health and fitness. She looked at the explosion in this technology in the recent past, the lack of evidence for its effectiveness in changing behaviour, and the potential risks that arise from its use.

3. The Algorithmic Self at Work (Phoebe Moore - MDX)



Dr Phoebe Moore from Middlesex University spoke about her research on quantified self practices in the workplace. She explained how contemporary practices tie into the history of scientific management techniques and how they connect with trends toward agility and precarity in the modern workforce. She also presented data from a British Academy/Leverhulme research project she did (with Lukasz Piwek and Ian Roper) with a Dutch company.

4. The Algorithmic Self in Love (John Danaher - NUIG)



I spoke about self-tracking in intimate relationships, presenting some of the main arguments from a paper I am writing with Sven Nyholm (TUE) and Brian Earp (Yale/Oxford) entitled 'The Quantified Relationship'. The paper deals with eight objections to the practice of intimate quantification and makes the case for cautious optimism about this technology.

5. The Algorithmic Self as Citizen (John Morison - QUB)



Professor John Morison from Queen's University Belfast spoke about the practices of tracking and surveillance in the political sphere. His talk focused in particular on the consequences of algorithmic governance for citizenship and politics, arguing that it could spell the death of the liberal democratic subject.

Sunday, February 5, 2017

Symbols and Consequences in the Sex Robot Debate (TEDxWHU)


Onstage at TEDxWHU

[If you like this blog, consider signing up for the newsletter...]


[Note: This is (roughly) the text of a talk I delivered at TEDxWHU on the 4th February 2017. A video of the talk should be available within a few weeks.]

There is a cave about 350km from here, in the Swabian Jura. It is called the Hohle Fels (this picture is the entrance to it). Archaeologists have been excavating it since the late 1800s and have discovered a number of important artifacts from the Upper Paleolithic era. In June 2005, they announced an interesting discovery. They had unearthed an unusually shaped object. It was 20 cm long and 3 cm wide, and made from highly polished stone. It was estimated to be 28,000 years old. Its intended shape and function were, according to Professor Nicholas Conard of the dig team, ‘clearly recognisable’. I'm going to put a picture of it on screen now and ask if you agree: are its shape and function clearly recognisable? I won't state the obvious, but artifacts of this sort have been discovered at archaeological dig sites around the world, many dating back thousands of years. Most were probably used in religious rituals or ceremonies, but some members of the archaeological team at Hohle Fels speculated that, due to its reasonably lifelike size and shape, this one may have been used for sexual stimulation.

Why am I telling you about this? I am telling you because it illustrates the long history that human beings have had with the creation and use of artifacts for sexual stimulation and gratification. This is a history that stretches from the Hohle Fels artifact all the way through to the first mechanical and electronic vibrators in the mid-to-late 1800s, to the dazzling diversity of sex toys available on the market today. In January 2010 we got a glimpse of where the future of this industry may lie. At the Adult Video Network expo in Las Vegas, Nevada, Douglas Hines, founder of the company TrueCompanion, unveiled Roxxxy, the world’s ‘first sex robot’. Though sex robots have been a long-standing trope in science fiction, it now seems like they might become a reality. And indeed since 2010 several other companies have entered the market and started to develop prototype models.

I am a legal academic/ethicist and I have an interest in the social, legal and ethical implications of this technology. I want to focus on some of those implications in this talk. Now, the official title of my talk is ‘Symbols and their Consequences in the Sex Robot Debate’, and this probably prompts several questions, one of which is: is there really a debate going on about this? When I mention to my colleagues that I have an interest in this topic, they usually respond with a mix of derision and bemusement. But I’m here to tell you that there is indeed an academic debate about this technology, admittedly small and niche at the moment, but growing in size every day. And within that debate some people take the topic very seriously indeed. The clearest example of this is Kathleen Richardson, an anthropologist from De Montfort University in the UK who, in September 2015, started the Campaign Against Sex Robots, modelling it on the longer-standing Campaign Against Killer Robots (which tries to preemptively ban autonomous weapons). The campaign argues for an organised effort against the development of sex robots. What I want to ask in this talk is whether this attitude of resistance is warranted. I do so by considering what I take to be one of the most common arguments against the development of this technology: an argument that focuses on symbolic meaning and social consequences. I’ll try to convince you that while this argument is worth taking seriously, it is ultimately unlikely to justify a campaign against the development of sex robots.

So what’s the argument? It’s best explained by way of an example. I don’t know if anyone here has seen the Channel 4 television series Humans, but for those who haven’t it is a provocative and sometimes insightful drama about social robots. It depicts a near-future society in which realistic humanoid robots have become commonplace, acting as workers, home helpers, carers and sexual playthings for their human creators. The majority of the robots are less-than-human in their intelligence and ability, and apparently lack consciousness and awareness (although the main plotline concerns a group of these robots that has achieved human-level consciousness and intelligence).

In one episode, a group of (human) teenagers are having a house party. At the house party there is a robot serving drinks and catering to the attendees’ needs. The robot looks like a human female. Some of the young men hurl abuse at her. One of them switches her off and then tells his friends that he is going to drag her upstairs to have sex with her. He is goaded on by his friends. At this point one of the main (human) female characters intervenes, telling her male peers to stop. When asked why, she responds by asking them whether it would be okay for them to knock out a real human female and have sex with her in similar conditions. They abandon their plan.

The writers of the show do not pause at this point and have the female protagonist expand on her objection. Like all good fiction writers they have learned to ‘show not tell’. But I’m interested in the telling: I want to know why her objection had the effect it did. Presumably the objection had nothing to do with the potential harm to the robot. The robots within the show are after all — apart from the core group — deemed to be devoid of moral status, lacking the requisite consciousness and intelligence to be moral victims. What’s more, the assumption within the current debate about sex robots is that this is likely to be true for some time. We may create fully conscious and self-aware robots some day, but they are a long way off and for the time being the current incarnation of this technology will consist of machines that are not capable of being morally harmed. So there is something of a puzzle: if the robot cannot be harmed, why is it wrong for the young men to have sex with it? The answer, I suggest, must lie elsewhere: in the symbolic meaning of the act (the passive, switched off robot, standing in for real women), and the consequences that might ensue from its performance (how the young men will relate to real women).

This combined concern for symbolic meaning and its consequences is shared by several of the leading opponents of sex robots. Their concern can be expressed as a formal philosophical argument (and since I am an academic and dilettante philosopher this is the style of expression I favour). Let me introduce you to that argument now. It works like this:


  • (1) Sex robots (or the act of having sex with them) symbolically represents ethically problematic sexual norms (i.e. it says something negative about us and our attitude toward sexual interactions). (Symbolic Claim).
  • (2) If sex robots (or the act of having sex with them) will symbolically represent ethically problematic sexual norms, then their development and/or use will have negative consequences. (Consequential Claim).
  • (3) Therefore, the development and/or use of sex robots will have negative consequences and we should probably do something about this. (Warning Call Conclusion).


This argument is abstract (and not formally valid, in case any logicians are reading) — more like a template that can be filled in with particular examples of problematic symbolic meaning and negative consequences. Different opponents of sex robots fill out the template in different ways. Let me give two examples from the academic literature.

The first comes from an article by Sinziana Gutiu — a Canadian lawyer — entitled ’Sex Robots and the Roboticization of Consent’. In her article, Gutiu worries explicitly about the symbolism of sex robots, in particular the way in which they represent women. (Brief aside: although it is possible (and one presumes likely one day) that we will create male or transgender sex robots, there is little point in denying that, at the moment, they tend to take the female form and to be marketed at heterosexual men.) Gutiu worries that the robots that are and will be created will embody stereotypical norms of beauty and will represent women as passive sexual objects. To quote from her article:

[Robots like] Aiko, Actroid DER and F, as well as Repliee Q2 are representations of young, thin, attractive oriental women, with high-pitched, feminine voices and movements. Actroid DER has been demoed wearing a tight hello kitty shirt with a short jean skirt, and Repliee Q2 has been displayed wearing blue and white short leather dress and high-heeled boots
…sex robot[s will] look and feel like…real [women] who [are] programmed into submission and [who] function as a tool for sexual purposes. The sex robot [will be] an ever-consenting sexual partner and the user has full control of the robot and the sexual interaction.

She then worries about the broader social consequences of this. She worries that the encouragement of sex robots will undermine female sexual autonomy by perpetuating false beliefs about female sexuality and sexual consent. This will reinforce gender inequalities and may also have a negative effect on the men who use these robots: they will either treat women as sexual objects or withdraw from society and become increasingly isolated and misanthropic in their lifestyles.

The aforementioned Kathleen Richardson — founder of the Campaign Against Sex Robots — is another example of someone who is concerned with the symbolism and consequences of sex robots.  The major objection to sex robots in Richardson’s work stems from what she perceives to be the analogy between human-sexbot interactions and human-prostitute interactions. She argues that the goal of the designers and engineers of sex robots is to create an interactive experience between the robot and the human user that is roughly equivalent to the interaction between a sex worker and their client.

This is problematic for two reasons. First, human-sex worker interactions are themselves ethically problematic. They are based on asymmetries of power. The client’s will and interests dominate over those of the sex worker. There is no concern for the inner mental life, wants or needs of the worker. The sex worker is thus objectified and instrumentalised. By symbolically mimicking such interactions, sex robots express approval for this style of interaction. Second, in doing so, sex robots will encourage their users to perpetuate negative attitudes towards women. In her paper she takes aim, in particular, at the work of David Levy, author of the 2007 book Love and Sex with Robots, which makes the case for a future of intimate relationships with robots. Here’s a quote from Richardson that gives a sense of her concerns:

David Levy proposes a future of human-robot relations based on the kinds of exchanges that take place in the prostitution industry. Levy explicitly creates ‘parallels between paying human prostitutes and purchasing sex robots’… by drawing on prostitution as the model for human-robot sexual relations, Levy shows that the sellers of sex are seen by the buyers of sex as things and not recognised as human subjects. This legitimates a dangerous mode of existence where humans can move about in relations with other humans but not recognise them as human subjects in their own right.

So that’s how the symbolic-consequences argument works in practice. Is it any good? There is certainly something to be said for it. There probably is something symbolically questionable about robots that are used for sexual purposes (I didn’t even mention the most obvious example where the robot looks or acts like a child). And the consequences to which Gutiu and Richardson point have a degree of prima facie plausibility. But I want to close by defending two propositions that I think render the argument less persuasive than it first appears.

The first proposition calls into question the symbolic claims made against this technology. It is that:

Proposition 1: The problematic symbolism of sex robots is contingent in two important ways: it is removable and reformable.

This is a general point. We often think that the symbolic meaning of a representation or practice is fixed and that this should colour our ethical attitude toward that practice (for instance, people think it is obviously offensive to eat the bodies of the dead or to pay people for bodily services). But the symbolic meaning of a representation or practice is usually highly culturally contingent and, in the right circumstances, capable of being changed. One of the most famous examples of cultural contingency concerns our symbolic treatment of the dead. In Herodotus’s Histories there is a famous passage comparing the burial practices of the Greeks and the Callatians. The story goes that King Darius of Persia once asked the Greeks if they would eat the bodies of their dead relatives as a mark of respect. The Greeks were appalled by the notion, arguing that the way to show respect was to burn the bodies on a funeral pyre. Darius then asked the Callatians if they would burn the bodies of their dead relatives as a mark of respect. The Callatians were appalled by the notion, arguing that this was to treat the bodies as trash. The proper way to show respect was to eat them. Both the Greeks and the Callatians agreed on the need to show respect. But they had very different views about the symbolic act that best communicated this respect.

As the philosophers Jason Brennan and Peter Jaworski point out, this cultural contingency has important ethical ramifications. It means that, under certain conditions, we should actively try to reform the symbolic meaning of a practice. They give the example of the Fore tribe from Papua New Guinea. The Fore tribe used to eat the brains of their dead relatives, believing this carried important meaning. Then they found out that doing so may be the cause of prion disease. As a result they changed the cultural meaning of the practice and encouraged people to stop doing it.

What this means is that the consequences of sex robots are all-important in determining our ethical attitude toward them. If the consequences are good (if, contra Gutiu and Richardson, using them reduces harm to women and has positive social-psychological effects), then perhaps we should change the symbolic meaning that attaches to them. If the consequences are bad (if Gutiu and Richardson are right), then we may have some reason for concern. But this brings me to the second proposition that I wish to defend:

Proposition 2: The social consequences of sex robots are likely to be highly contentious and/or uncertain.

How do I know this? Well, obviously I don’t: the consequences are unknown right now because this technology is in its infancy. But there are parallel debates that might prove instructive. The obvious one is the debate about the social consequences of exposure to pornography, which has been raging now for over forty years. This debate has produced a large number of empirical studies, but very little agreement on the actual effects. Some studies suggest that it has a negative effect; some that it has no discernible effect; some that it has a positive effect. Many researchers lament the disorganised and oftentimes low quality nature of the research. This is unsurprising given that there are significant ideological agendas at stake in the debate. I suspect we will be landed in a very similar position when it comes to understanding the consequences of sex robots.

So where does that leave us? It leaves us with an uncertain future and I think this means we ought to fall back on some fundamental value commitments. Either we commit to liberty and freedom and allow this technology to develop; or we embrace uncertainty and encourage experimentation; or we are risk averse and commit to close regulation or prohibition. In any event, I would suggest that focusing on the symbolic meaning of these artifacts and their consequences won’t give us the answer.

Thank you for your attention.