
Wednesday, September 30, 2015

Automation and Income Inequality: Understanding the Polarisation Effect





Inequality is now a major topic of public debate. Only those with their heads firmly buried in the sand could have failed to notice the rising chorus of concern about wealth inequality over the past couple of years. From the economic tomes of Thomas Piketty and Tony Atkinson, to the battle-cries of the 99%, and on to the political successes of Jeremy Corbyn in the UK and Bernie Sanders in the US, the notion that inequality is a serious social and political problem seems to have captured the popular imagination.

In the midst of all this, a standard narrative has emerged. We were all fooled by the triumphs of capitalism in the 20th century. The middle part of the 20th century — from roughly the end of WWII to 1980 — saw significant economic growth and noticeable reductions in inequality. We thought this could last forever: that growth and equality could go hand in hand. But this was an aberration. Since 1980 the trend has reversed. We are now returning to levels of inequality not seen since the late 19th century. The top 1% (and, within it, the 1% of the 1%) is gaining an increasing share of the wealth.

What role does technology have to play in this standard narrative? No doubt, there are lots of potential explanations of the recent trend, but many economists agree that technology has played a crucial role. This is true even of economists who are sceptical of the more alarmist claims about robots and unemployment. David Autor is one such economist. As I noted in my previous entry, Autor is sceptical of authors like Brynjolfsson and McAfee who predict an increase in automation-induced structural unemployment. But he is not sceptical about the dramatic effects of automation on employment patterns and income distribution.

In fact, Autor argues that automating technologies have led to a polarisation effect — actually, two polarisation effects. These can be characterised in the following manner:

Occupational Polarisation Effect: Growth in automating technologies has facilitated the polarisation of the labour market, such that people are increasingly being split between two main categories of work: (i) manual and (ii) abstract.

Wage Polarisation Effect: For a variety of reasons, and contrary to some theoretical predictions, this occupational polarisation effect has also led to an increase in wage inequality.

I want to look at Autor’s arguments for both effects in the remainder of this post.


1. Is there an occupational polarisation effect?
The evidence for an occupational polarisation effect is reasonably compelling. To appreciate it, and to understand why it has happened, we need to consider the different types of work that people engage in, and the major technological changes over the past 30 years.

Work is a complex and multifaceted phenomenon. Any attempt to reduce it to a few simple categories will do violence to the complexity of the real world. But we have to engage in some simplifying categorisations to make sense of things. To that end, Autor thinks we can distinguish between three main categories of work in modern industrial societies:


Routine Work: This consists in tasks that can be codified and reduced to a series of step-by-step rules or procedures. Such tasks are ‘characteristic of many middle-skilled cognitive and manual activities: for example, the mathematical calculations involved in simple bookkeeping; the retrieving, sorting and storing of structured information typical of clerical work; and the precise executing of a repetitive physical operation in an unchanging environment as in repetitive production tasks’ (Autor 2015, 11).

Abstract Work: This consists in tasks that ‘require problem-solving capabilities, intuition, creativity and persuasion’. Such tasks are characteristic of ‘professional, technical, and managerial occupations’ which ‘employ workers with high levels of education and analytical capability’ placing ‘a premium on inductive reasoning, communications ability, and expert mastery’ (Autor 2015, 12).

Manual Work: This consists in tasks ‘requiring situational adaptability, visual and language recognition, and in-person interactions’. Such tasks are characteristic of ‘food preparation and serving jobs, cleaning and janitorial work, grounds cleaning and maintenance, in-person health assistance by home health aides, and numerous jobs in security and protective services.’ These jobs employ people ‘who are physically adept, and, in some cases, able to communicate fluently in spoken language’ but would generally be classified as ‘low-skilled’ (Autor 2015, 12).


This threefold division makes sense. I certainly find it instructive to classify myself along these lines. I may be wrong, but I think it would be fair to classify myself (an academic) as an abstract worker, insofar as the primary tasks within my job (research and teaching) require problem-solving ability, creativity and persuasion, though there are certainly aspects of my job that involve routine and manual tasks too. But this simply helps to underscore one of Autor’s other points: most work processes are made up of multiple, often complementary, inputs, even when one particular class of inputs tends to dominate.

This threefold division helps to shine light on the polarising effect of technology over the past thirty years. The major growth area in technology over that period has been in computerisation and information technology. Indeed, the growth in that sector has been truly astounding (exponential in certain respects). We would expect such astronomical growth to have some effect on employment patterns, but that effect depends on the answer to a critical question: what is it that computers are good at?

The answer, of course, is that computers are good at performing routine tasks. Computerised systems run on algorithms, which are encoded step-by-step instructions for taking an input and producing an output. Growth in the sophistication of such systems, and reductions in their cost, create huge incentives for businesses to use computerised systems to replace routine workers. Since those workers (e.g. manufacturing, clerical and administrative staff) traditionally represented the middle-skill level of the labour market, the net result has been a polarisation effect. People are forced into either manual (low-skill) or abstract (high-skill) work. Now, the big question is whether automation will eventually displace workers in those categories too, but to date manual and abstract work have remained difficult to automate, hence the polarisation.
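To make the notion of a ‘routine’ task concrete, here is a toy sketch of a simple bookkeeping rule of the kind a computer can trivially codify. This is my own illustration, not an example from Autor:

```python
# Toy illustration of a 'routine' task: a bookkeeping rule reduced to
# step-by-step instructions. Once codified like this, the task no longer
# requires a middle-skill clerical worker to perform it.

def reconcile_ledger(entries):
    """Sum debits and credits and flag any imbalance."""
    debits = sum(e["amount"] for e in entries if e["type"] == "debit")
    credits = sum(e["amount"] for e in entries if e["type"] == "credit")
    return {"debits": debits, "credits": credits, "balanced": debits == credits}

ledger = [
    {"type": "debit", "amount": 120.00},
    {"type": "credit", "amount": 120.00},
]
print(reconcile_ledger(ledger))
# {'debits': 120.0, 'credits': 120.0, 'balanced': True}
```

Abstract tasks (persuading a client) and manual tasks (clearing a table) resist this kind of codification, which is exactly why they have so far resisted automation.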

As I said at the outset, the evidence for this occupational polarisation effect is reasonably compelling. The diagram below, taken directly from Autor’s article, illustrates the effect in the US labour market from the late 1970s up to 2012. It depicts the percentage change in employment across ten different categories of work. The three categories on the left represent manual work, the three in the middle represent routine work, and the four on the right represent abstract work. As you can see, growth in routine work has either been minimal (bearing in mind the population increase) or negative, whereas growth in abstract and manual work has been much higher (though there have been some recent reversals, probably due to the Great Recession, and maybe due to other recent advances in automating technologies, though this is less certain).


(Source: Autor 2015, 13)



Similar evidence is available for a polarisation effect in EU countries, but I’ll leave you to read Autor’s article for that.


2. Has this led to increased wage inequality?
Increasing polarisation with respect to the types of work that we do need not lead to an increase in wage inequality. Indeed, certain theoretical assumptions might lead us to predict otherwise. As discussed in a previous post, increased levels of automation can sometimes give rise to a complementarity effect. This happens when the gains from automation in one type of work process also translate into gains for workers engaged in complementary types of work. So, for instance, automation of manufacturing processes might increase demand for skilled maintenance workers, which should, in theory, increase the price they can obtain for their labour. This means that even if the labour force has bifurcated into two main categories of work — one of which is traditionally classed as low-skill and the other of which is traditionally classed as high-skill — it does not follow that we would necessarily see an increase in income inequality. On the contrary, both categories of workers might be expected to see an increase in income.

But this theoretical argument depends on a crucial ‘all else being equal’ clause. In this respect it is in good company: many economic arguments depend on such clauses. The reality is that all else is not equal. Abstract and manual workers have not seen complementary gains in income. On the contrary: the evidence we have suggests that abstract workers have seen consistent increases in income, while manual workers have not. The evidence here is more nuanced. Consider the diagram below.


(Source: Autor 2015, 18)


This diagram requires a lot of interpretation. It is fully explained in Autor’s article; I can only offer a quick summary. Roughly, it depicts the changes in mean wages among US workers between 1979 and 2012, relative to their occupational skill level. The four curves represent different periods of time: 1979-1989, 1989-1999 and so on. The horizontal axis represents the skill level. The vertical axis represents the changes in mean wages. And the baseline (0) is set by reference to mean wages in 1979. What the diagram tells us is that mean wages have, in effect, consistently increased for high-skill workers (i.e. those in abstract jobs). We know this because the right-hand portion of each curve trends upwards (excepting the period covering the Great Recession). It also tells us that low-skill workers (the left-hand portions of the curves) saw increases in the 1980s and 1990s, followed by low-to-negative changes in the 2000s. This is despite the fact that the number of workers in those categories increased quite dramatically in the 2000s (the earlier diagram illustrates this).

As I said, the evidence here is more nuanced, but it does point to a wage polarisation effect. It is worth understanding why this has happened. Autor suggests that three factors have contributed to it:

Complementarity effects of information technology benefit abstract workers more than manual workers: As defined above, abstract work is analytical, problem-solving, creative and persuasive. Most abstract workers rely heavily on ‘large bodies of constantly evolving expertise: for example, medical knowledge, legal precedents, sales data, financial analysis’ and so on (Autor 2015, 15). Computerisation greatly facilitates our ability to access such bodies of knowledge. Consequently, the dramatic advances in computerisation have strongly complemented the tasks being performed by abstract workers (though I would note it has also forced abstract workers to perform more and more of their own routine administrative tasks).

Demand for the outputs of abstract workers seems to be relatively elastic: Elasticity is a measure of how responsive some economic variable (demand/supply) is to changes in other variables (e.g. price). If demand for abstract work were inelastic, then we would not expect advances in computerisation to fuel significant increases in the numbers of abstract workers. But in fact we see the opposite. Demand for such workers has gone up. Autor suggests that healthcare workers are the best example of this: demand for healthcare workers has increased despite significant advances in healthcare-related technologies. (A toy numerical illustration of elasticity follows the third factor below.)

There are greater barriers to entry into the labour market for abstract work: This is an obvious one, but worth stressing. Most abstract work requires high levels of education, training and credentialing (for both good and bad reasons). It is not all that easy for displaced workers to transition into those types of work. Conversely, manual work tends not to require high levels of education and training. It is relatively easy for displaced workers to transition to these types of work. The result is an over-supply of manual labour, which depresses wages.
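Since elasticity does some heavy lifting in the second factor above (and will again in a later post), here is a toy numerical sketch of the concept. This is my own illustration with made-up numbers, not Autor’s data:

```python
# Toy illustration of (arc) elasticity of demand. My own sketch with
# invented numbers, not Autor's data. |elasticity| > 1 means demand is
# elastic: a fall in price produces a proportionally larger rise in
# quantity demanded, so total spending on the output can actually grow.

def arc_elasticity(q1, q2, p1, p2):
    """Percentage change in quantity over percentage change in price,
    computed with midpoint averages so the result is direction-independent."""
    pct_change_q = (q2 - q1) / ((q1 + q2) / 2)
    pct_change_p = (p2 - p1) / ((p1 + p2) / 2)
    return pct_change_q / pct_change_p

# Suppose computerisation cuts the effective price of some abstract-work
# output by 20%, and the quantity demanded rises by 50%:
print(arc_elasticity(q1=100, q2=150, p1=10, p2=8))  # ≈ -1.8, i.e. elastic
```

If demand were instead inelastic (|elasticity| < 1), the same productivity gains would shrink, rather than grow, total employment in the sector.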

The bottom line is this: abstract workers have tended to benefit from the displacement of routine work with higher wages; manual workers have not. The net result is a wage polarisation effect.


3. Conclusion
I don’t have too much to say about this except to stress its importance. There has been a lot of hype and media interest in the ‘rise of the robots’, often conveyed through alarmist headlines like ‘the robots are coming for our jobs’ and so on. While that question is worthy of scrutiny, it is not the only interesting or important one. Even if technology does not lead to a long-term reduction in the number of jobs, it may nevertheless have a significant impact on employment patterns and income distribution. The evidence presented by Autor bears this out.

One final point before I wrap up. It is worth bearing in mind that the polarisation effects described in this post are only concerned with types of work and wage inequalities affected by technology. Wage and wealth inequality are much broader phenomena and have been exacerbated by other factors. I would recommend reading Piketty or Atkinson for more information about these broader phenomena.

Monday, September 21, 2015

Technological Unemployment and the Value of Work (Series Index)




Machines have long been displacing human labour, from the wheelbarrow and plough to the smartphone and self-driving car. In the past, this has had dramatic effects on how society is organised and how people spend their days, but it has never really led to long-term structural unemployment. Humans have always found other economically productive ways to spend their time.

But several economists and futurists think that this time it is different. The type, scope and speed of technological change is, they argue, threatening to put us out of work for good. This raises two important questions. The first is factual and has to do with whether these economists and futurists are right. Is it really different this time round? Are we all soon to be out of work? The second is axiological and has to do with the implications of such long-term unemployment for human society. Will it be a good thing if we are all unemployed? Will this make for better or worse lives?

I've explored the answers to these two questions across a number of blog posts over the past two years. I thought it might be worth assembling them together into this handy index. As is fairly typical for this blog, I focus more on the axiological issues, but I will be writing more about the factual question soon so you can expect that section to grow over the coming months.


1. Will there be technological unemployment?



  • Why haven't robots taken our jobs? The Complementarity Effect - This was a more sceptical look at the argument for technological unemployment, drawing upon the work of David Autor. Although I think there is much wisdom to what Autor says, I'm not sure that it really defeats the argument for technological unemployment.





2. Should we welcome technological unemployment?


  • Should there be a right not to work? - This post presents a Rawlsian argument for a right not to work. It is based on the notion that an appropriately just state should be neutral with respect to its citizens' conceptions of the good life and that a life of leisure/idleness is a particular conception of the good life.

  • Should libertarians hate the internet? A Nozickian Argument against Social Networks - This post may be slightly out of place here since it is not directly about technological unemployment. Rather, it is about the 'free labour' being provided by users of social media sites to the owners of those sites. It asks whether such provision runs contrary to the principles of Nozickian justice. It argues that it probably doesn't.

  • Should we abolish work? - This is a slightly more comprehensive compendium and assessment of anti-work arguments. I divide them into two broad classes -- 'work is bad' arguments and 'opportunity cost' arguments -- and subject both to considerable critical scrutiny.

  • Does work undermine our freedom? - This post looks at Julia Maskivker's argument against compulsory work. 'Compulsory' work is a feature of the current economic-political reality, but this reality could be altered in an era of technological unemployment.

  • The Automation Loop and its Negative Consequences - The first of three posts dealing with the arguments in Nicholas Carr's book The Glass Cage. This one looks at the phenomenon of automation and two problematic assumptions people make about the substitution of machine for human labour.




  • The Philosophy of Games and the Postwork Utopia - If automating technologies take over, what will we do with our time? Could we spend it playing games? Some people argue that this would be the ideal life. This post looks at their arguments.




  • The Shame of Work - A review of David Frayne's excellent book The Refusal of Work, which explores the theory of antiwork and its practical reality.



Saturday, September 12, 2015

Sexual Assault, Consent Apps and Technological Solutionism




Sexual assault and rape are significant social problems. According to some sources, one in five American women will be victims of sexual assault or rape at some point during their university education. Though this stat is somewhat controversial (see here for a good overview), similar (sometimes higher) figures are reported in other countries. For example, in Ireland one estimate is that 31% of women experience some form of 'contact abuse' in their lifetime. The figure for men is lower, but higher than you might suppose, with abuse more likely to occur during childhood.

Clearly we should do something to prevent this from happening. Obvious (and attempted) solutions include reform of legal standards and processes, and challenging prevailing social attitudes and biases. These things are hard to change. But the modern age is also noteworthy for its faith in the power of technology. Many are smitten by technology’s ability to solve our problems, from trivial things like counting calories, contacting friends and navigating around an unfamiliar city, to more complex problems like food production and disease prevention. No problem seems immune to the pervasive reach of technology. Could the problems of sexual assault and rape be the same?

That is certainly the belief of some. In what seems like an almost farcical apotheosis of the ‘is there an app for that?’-trend, two companies have launched sexual consent apps in the past year: (i) the (short-lived) Good2Go app; and (ii) the more recent We-Consent app. Both are (or were) designed to ensure that the partners to any potential sexual encounter validly consented to that encounter. The rationale behind both is that the presence or absence of consent (and/or reasonable belief in consent) is critical to determining whether a sexual assault took place.

Now I’m all for technology, but in both instances these apps seem spectacularly misjudged. Criticisms have already proliferated. In this post, I want to take a more detailed look at the philosophical and ethical problems associated with these apps. In doing so, I will suggest that both are indicative of a misplaced belief in the power of technology to solve social problems.


1. What problems need to be solved?
What gives rise to the problem of sexual assault and rape? There are many answers to that question. Part of the problem lies in pervasive and pernicious social attitudes; part lies in existing legal standards; and part has to do with the procedures used to investigate and adjudicate upon sexual assault cases (be they criminal or civil). It is not possible to do justice to the full suite of problems here, and it is not necessary either since the apps with which I’m concerned are only intended to address a particular aspect of the issue.

The aspect in question concerns the role of consent in sexual encounters. Most legal standards stipulate that the presence or absence of consent is what makes the crucial difference: it’s what turns a mutually enjoyable activity into a criminal one. For instance, the crime of rape (in England) is defined as the intentional penile penetration of the vagina, anus or mouth of another when (a) that other does not consent and (b) the perpetrator does not have reasonable belief in consent (in England, ‘rape’ is a gender-specific crime and can only be perpetrated by a man; there is a gender-neutral crime called ‘assault by penetration’). Consent is thus critical to what we call the ‘actus reus’ and the ‘mens rea’ of the crime.

Consent is primarily a subjective attitude — a willingness to engage in an activity with another — but it is signalled and evidenced through objective conduct (e.g. through saying ‘yes’ to the activity). Ideally, we would like for people to rely upon common knowledge signals of consent, that is: signals that are known (and known to be known etc) to indicate consent by both parties to the activity. But one of the major problems in sexual assault and rape cases is the widespread disagreement as to what counts as an objective signal of consent. Many people infer consent from dubious things like dress, past sexual behaviour, body language, silence, lack of resistance and so on. Oftentimes people are unwilling to have open and explicit conversations about their sexual desires, fearing rejection and social awkwardness. They rely upon indirect speech acts that allow them some plausible deniability. Furthermore, there are a range of factors (intoxication, coercion, deception) that might cast doubt on an otherwise objective signal of consent. The result is that many sexual assault and rape cases break down into (so-called) he-said/she-said contests, with both parties highlighting different potential signals of consent or non-consent, or different interpretations of those signals.

In short, there are significant epistemic problems associated with inferring consent. For present purposes, these problems can be said to break down into two major types:

Social Bias/Awkwardness Problems: These are what prevent people from having open and honest conversations about sexual desires/willingness to engage in sex, and lead them to rely on more dubious indirect signals. These problems occur prior to the sexual encounter itself (i.e. they are ex ante problems).

Evidential Problems: These are what give rise to the he-said/she-said dynamic of many sexual assault and rape trials. Most sexual encounters occur in private. Only the participants to the encounter are present. We rely on their testimony to tell us what happened. But they may disagree about which signals were present or how they ought to be interpreted. Hence, we may lack good, uncontested evidence of what took place (these are ex post problems).



What’s interesting about the two consent apps under consideration here is that their creators claim they are directed at solving the first set of problems, but they function in a way that is clearly designed to address the second set. Indeed, despite the protestations of their creators, the second set of problems is where they are most likely to have an impact, and that impact does not seem to be positive. To see this, we need to consider how they work.


2. How the Apps Work
I am going to focus on two sexual consent apps in this piece. I am not aware of any others, though I haven’t performed an exhaustive search. The first is the Good2Go app, which was released in September 2014, only to be scrapped in October 2014. The creator now promises a re-launch in November 2015, with the focus being exclusively on consent-related education. The second is the We-Consent app, which is actually one of three apps, each designed to address issues surrounding the giving and withdrawing of consent. As far as I am aware, the We-Consent app is still in existence and available for download.

What’s interesting about both apps is how the creators explicitly state that their goal is to address the bias/awkwardness problem. The apps, we are told, are designed to facilitate open and honest conversations about sexual consent. Consider the marketing blurb on the frontpage of the We-Consent website. It tells us that:

Affirmative consent begins with you… talk about “yes means yes” before acting…show respect, ALWAYS DISCUSS mutual consent.

The company’s mission statement says that:

We are the affirmative consent member division of isce.edu — a group devoted to changing the societal conversations around sexual interactions… the We-Consent Mobile App [is designed] to encourage discussion about affirmative consent between intended partners. 
(Note: the ISCE is the Institute for the Study of Coherence and Emergence)

And the focus on ‘starting the conversation’ is confirmed by the company’s founder Michael Lissack (who also happens to be the executive director of the ISCE) in an interview with the Chronicle of Higher Education:

So what’s the main purpose? The main purpose is to change the conversation. If these apps work the way they should, in a year or two if people go to a frat party, instead of the base assumption being everyone in attendance is available for hooking up, the base assumption will be, if you wish to hook up, talk about it first.

This attitude seems to be shared by Lee Ann Allman, the creator of the Good2Go app. Most of the materials associated with this app have been taken offline after Apple withdrew its approval. But some of the underlying philosophy can be pieced together from media discussions. For example, in a discussion on the Guardian, Allman is quoted as saying that the app should ‘help alleviate the culture of confusion, fear and abuse on campus’. It is also apparent from Allman’s desire to re-launch the product with an exclusively educational purpose. On the webpage we are told that:

Good2Go, a product of Sandton Technologies, will now focus on developing educational materials for college and university students, administrators, and faculty member to help them understand consent...

In many ways, this is laudable stuff. If these apps really could facilitate open and honest conversations about consent and sexual desire, then they might help prevent some incidents of sexual assault and rape. But in terms of their functionality, both apps are also clearly designed to address the evidential problems. They do so by encouraging the potential participants to a sexual encounter to use their smartphones as devices for signalling consent. The signals are then recorded, encrypted, and stored on a database where they can be retrieved and used as evidence in a civil or criminal investigation into sexual assault or rape. This helps to circumvent the he-said/she-said dynamic alluded to earlier on.

The apps perform this function in slightly different ways. Good2Go, in its original form, was a text-based communication app. If you wished to have sex with someone, you would send them a message asking them ‘Are we good2go?’. They would then be given three optional responses: (i) ‘No thanks’ (which would be accompanied by the message ‘Remember! No means No! Only Yes means Yes, but can be changed to NO at anytime!’); (ii) ‘Yes, but…we need to talk’; and (iii) ‘I’m Good2Go’. If the third option was chosen, the app asked the person to gauge their sobriety level, using four options: sober, mildly intoxicated, intoxicated but Good2Go, or pretty wasted. If ‘pretty wasted’ was chosen, the app would not permit consent to be given. Otherwise, everything would be ‘Good2Go’. A record of the interaction would be stored, verifying the identity of the partners by using their phone numbers.
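Based purely on the description above, the app’s consent logic amounts to a very short decision procedure. Here is a minimal reconstruction in code (my own sketch, not Good2Go’s actual implementation), which makes plain how few conversational states the app permitted:

```python
# Minimal reconstruction of the Good2Go flow described above.
# This is my own sketch, not the app's actual code.

RESPONSES = ("No thanks", "Yes, but...we need to talk", "I'm Good2Go")
SOBRIETY_LEVELS = ("sober", "mildly intoxicated",
                   "intoxicated but Good2Go", "pretty wasted")

def good2go(response, sobriety=None):
    """Return the app's outcome for a given response (and sobriety level)."""
    if response == "No thanks":
        return "Remember! No means No! Only Yes means Yes, but can be changed to NO at anytime!"
    if response == "Yes, but...we need to talk":
        return "Talk first; no consent recorded."
    if response == "I'm Good2Go":
        if sobriety == "pretty wasted":
            return "Consent cannot be given."  # the app's only hard stop
        # Otherwise the interaction is logged against both phone numbers.
        return "Good2Go: consent recorded."
    raise ValueError(f"Unknown response: {response!r}")

print(good2go("I'm Good2Go", sobriety="mildly intoxicated"))
```

Seen this way, the criticisms developed below are easy to anticipate: the entire context of an encounter is compressed into one of three canned responses and a four-point sobriety scale.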




We-Consent is different in that it adopts a video-messaging format, assisted by some pre-recorded voice messages. If you wish to have sex with someone, you open the app and record a message stating your name and the name of your intended partner. You then hand your phone to your partner (or point the video camera at them) and get them to record a response. If they confirm consent through a clear ‘yes’ the app delivers a pre-recorded response stating that the sexual encounter is permissible. The videos are recorded and stored in a double-encrypted form for retrieval at a later date. The functionality here is slightly more straightforward, but conscious of the need to facilitate the withdrawal of consent, the app-makers have created two additional apps, ‘What-about-no’ and ‘Changed-mind’, which allow people to communicate messages of themselves withdrawing consent at a later time. Again, the record of this ‘no’ is recorded and stored on a database for later retrieval. You can watch videos demonstrating how the We-Consent and What-about-no apps work on the company’s webpage.



So, in short, although the creators maintain that the apps are designed primarily to address the bias/awkwardness problems, their functionality is also clearly designed to solve the evidential problems by creating an independently verifiable record of the consent (or non-consent).


3. Why these apps make things comparatively worse
Criticisms of these apps have proliferated online. I share this critical perspective: I think both apps are highly questionable. But I want to conduct a more comprehensive evaluation than has been done to date. I think any evaluation of these apps must do two things. First, it must evaluate them as potential solutions to both sets of problems (i.e. bias/awkwardness and evidential). Second, it must evaluate them using a contrastive methodology. That is to say, it should look to whether these apps improve things relative to the current status quo. That status quo is one that may be characterised by pernicious beliefs surrounding the meaning of different consent signals and significant evidential problems, and in which other proposed solutions to those problems typically involve reforming legal standards (e.g. making it slightly easier to prove non-consent) and improving consent-related education.

Let’s look at the consent apps and the evidential problem first. In a simple sense, these apps do ‘solve’ some of the evidential problems. An encrypted and independently verifiable record of what was signalled between the parties would be a boon to law enforcement. It would represent an evidential advantage relative to the current status quo in which such evidence is not available. But this is obviously a naive way of looking at it. There are at least three significant problems created by these apps that may serve to negate that evidential advantage.

The first is that the apps create decontextualised records of consent signals. I know that is a hideously academic way of putting it, but it captures an important truth. The meaning of a particular signal is always relative to its context (to use some technical terms, the meaning is a function of both pragmatics and semantics). The Good2Go app strips away that context by limiting the record to a series of text messages; the We-Consent app strips away the context by relying on short video recordings of the faces of the potential sexual partners. This is important because there are contextual factors that could render seemingly clear signals of consent practically worthless. The obvious one is coercion. If I record a video message (or tap a button) stating my willingness to consent at gunpoint (with the gun fortuitously invisible to the recording), my signal is worthless. The gun is an extreme example; the same is true if I signal while surrounded by a threatening group of frat boys, or if my friend is being threatened and so on. Other contextual factors that are stripped away by these apps might include degrees of intoxication (though the Good2Go app tried to address this problem) and deception. Eyewitness testimony certainly has its problems, but at least it tends to include contextual information. This facilitates more appropriate interpretations of the signals. The danger with the consent apps is that their verifiable but decontextualised record would be seen to trump this more contextualised eyewitness testimony.

A second problem with the apps concerns the withdrawal of consent. If there is a prior record of you signalling consent (stored on a database and capable of being retrieved at a later time) then the only way to withdraw consent in a legally secure manner is to record another signal of withdrawal. As far as I am aware, the Good2Go app did not even attempt to facilitate such a recording (apart from including the reminder that consent could be withdrawn at any time). The We-consent app does attempt to do so through its companion apps What-about-no and Changed-mind, but both require that the person retrieve their phone in the midst of a sexual encounter and use it to record their withdrawal of consent. Not only is this unlikely to happen, it may be impossible if the other party prevents it (or if the phone is simply too far away).

This brings me to the third problem. By creating a record, both apps may add an air of menace and coercion to sexual encounters that would otherwise be lacking. This could be detrimental to both negative and positive sexual autonomy (i.e. the ability to avoid sex if you don’t want it, and to have sex if you do). If you know that there is a prior record of positive consent, you may be reluctant or unwilling to withdraw consent, even if that is your true preference. Consequently, you might be pressured into continuing with something you would rather bring to an end. Likewise, these apps may have an impact on positive sexual autonomy by making people less likely to initiate sexual encounters they would prefer to have, for fear that they couldn’t bring them to an end when they wished and for fear that there would be a permanent and potentially hackable record of all their sexual partners.

For these reasons, I’m inclined to conclude that the apps represent a dis-improvement from the current status quo.

Do they fare any better when assessed as potential solutions to the bias/awkwardness problem? I don’t think so. Although their mere existence (and presence on one’s phone) might direct attention toward the issue of consent — and so might encourage people to take more care to learn their putative partner’s true desires — they once again seem to create problems that negate any advantage.

An obvious one is ambiguity. This is particularly true for the Good2Go app since it uses a euphemism (‘Good2Go’) as a way of communicating consent. Euphemisms may help people to overcome awkwardness, but they are more uncertain in their meaning than direct forms of speech (e.g. ‘Yes I agree to engage in the following sexual activity X with you’). If you want people to have more open and honest conversations about sexual desire, then it might be better to facilitate direct forms of speech.

This links to another problem. Both apps may stifle the appropriate conversations by giving people limited conversational options. Good2Go gives you only one way of asking for consent and three ways of responding. We-Consent is also limited, requiring you to simply state ‘yes’ or ‘no’ to sexual relations. But these limited options may not allow you to truly express all that needs to be expressed. And because the devices are being used as a proxy for the awkward conversation, they may actually serve to discourage people from having (or seeking) that longer conversation. That said, at least Good2Go tries to facilitate this by including the ‘Yes, but…’ option, though as others have pointed out it might have been better if it was simply a ‘we need to talk…’ option.

Another problem with apps of this sort is that they may bias the outcome of any conversation by presuming certain defaults. This criticism has been thrown at Good2Go in particular. The three response options are biased in favour of consent (two out of the three involve affirmation). Given known biases for extremeness avoidance this may result in more people choosing the intermediate option (‘Yes but…’) than is truly desirable. Also, in its measures of sobriety, it assumes that you have to be ‘pretty wasted’ to be unable to consent. The validity of intoxicated consent is contested, but this may err too much in favour of the possibility of intoxicated consent.

There is also the question of how likely people are to use these apps. I may be wrong, but I have a hard time imagining someone whipping out their smartphone and using it to both initiate and record responses to sexual advances. If anything, that would seem to add awkwardness to the situation, not take it away.

Finally, you have to confront the fact that these apps are largely geared towards men (still generally viewed as the ‘initiators’ of sexual encounters). Michael Lissack, the creator of We-Consent, is explicit about this when he describes athletic teams (who he assumes to be male) and fraternities as the target audience for his app. And the evidential functionality of the apps is (arguably) geared toward protecting men from ‘false’ accusations of sexual assault and rape. As such, these apps may largely reinforce patriarchal attitudes towards sexual assault and rape. Empowerment of female sexual agency does not seem to be to the forefront.

On the whole then, it seems like the apps do not represent a contrastive improvement from the existing status quo surrounding bias and awkwardness.


4. Could technology ever solve these problems?
This evaluation of Good2Go and We-consent raises a further question: could technology ever be used to solve the two problems? These apps clearly have their failings, but maybe there are other technological solutions? Maybe. It’s hard to evaluate all the potential possibilities, given the diversity of technology, but if we limit ourselves to information and communication technologies, then I would suggest that it is unlikely that we will find a solution there.

Natasha Lomas — author of a Techcrunch article critiquing Good2Go — suggested that an app including funny conversational prompts might be a better way to overcome the bias/awkwardness problem. You could also improve things by allowing for more diverse responses or by allowing users to generate their own (but then it’s just a text message conversation and we already have apps for that). I suspect, however, none of these messaging systems would be wholly desirable. One problem is that even if you removed the explicit recording and storage aspect, these apps would still create records of the conversation. This might encourage the menacing air I mentioned earlier and discourage conversation. A purely educational app, with no recording of responses and just provision of information, might fare better. This may be what ‘Good2Go’ ends up becoming. The technology in that case would serve as a way to package and distribute the information. But then this would need to be supplemented by plenty of ‘offline’ education too.

In terms of the evidential problem, I’m not sure that there is any desirable technological solution. The obvious one would involve more complete video and audio recordings of sexual encounters. They would need to be ‘complete’ to allow for consent to be withdrawn, and they would need to be far more extensive than what can be provided by a single smartphone or wearable tech camera in order to avoid the problem of decontextualisation. But then, if the recordings need to be that complete and extensive, you have a significant invasion of sexual privacy. Dave Eggers imagines something like this happening in his dystopian satire The Circle, and although I am pretty convinced that privacy is dying out, I’m not sure that sexual privacy is something that should be given up in this manner. There is a trade-off that needs to be considered here in terms of positive and negative sexual autonomy. In any event, even a complete recording of a sexual encounter will require interpretation of the signals being sent back and forth between the participants. People may continue to misinterpret those signals in ways that harm victims of sexual assault. You would need to overcome the social biases and prejudices mentioned at the outset to make a dent in that problem.

In the end, I suspect that consent apps are indicative of something Evgeny Morozov calls ‘technological solutionism’. Morozov defines this as an ideology that sees complex social problems as “neatly defined problems with definite, computable solutions or as transparent and self-evident processes that can be easily optimized — if only the right algorithms are in place!” (Morozov 2013). Here, we see the complex social problem of sexual assault, and the associated issues with giving and receiving consent, being defined in a way that renders them amenable to a technological solution. If we simply use the right prompts, direct the conversation using the right menu of options, and then record the output, we will reduce sexual assault and rape.

I don’t think the problem can be solved in that way.

Friday, September 11, 2015

Philosophers' Carnival #179


August can be a dry month in the philosophy blogosphere. It's traditionally vacation month in Europe, and a time when many academics must correct repeat exams and prepare classes for the new semester. But that doesn't seem to have dampened enthusiasm among the internet's diligent coterie of philosophical bloggers. As this month's host of the Philosophers' Carnival, I have the pleasure of collating and introducing some of the best posts from the past 30 days or so.

In no particular order:



  • Terrance Tomkow continues his impressive contributions to philosophy blogging with an excellent introduction to counterfactuals. Confused about possible worlds and the ways things could have turned out? Start here.

  • Wolfgang Schwarz investigates the updating of beliefs based on evidence with a clever thought experiment involving a broken duplication machine.












That's it. Be sure to send in your submissions for next month's iteration, and like the Carnival on Facebook.

Tuesday, September 8, 2015

Why haven't robots taken our jobs? The Complementarity Effect




You’ve probably noticed the trend. The doomsayers are yelling once more. They are telling us that technology poses a threat to human employment — that the robots are coming for our jobs. This is a thesis that has been defended in several academic papers, popular books and newspaper articles. It has been propounded by leading figures in the tech industry, and repeatedly debated and analysed in the media (particularly new media).

But is it right? Last year I presented a lengthy analysis of the pro-technological-unemployment argument from Brynjolfsson and McAfee. Their book, The Second Machine Age, is at the forefront of the current doomsaying trend. In it, they make a relatively simple argument. It starts with the observation that machines are able to displace more and more human labour. It adds to this the claim that, while in the past humans have always found other sources of employment, this may no longer be possible because the pace and scope of current technological advance is such that humans may have nowhere left to go.

Recently, Brynjolfsson and McAfee’s thesis has attracted the attention of their economic brethren. Indeed, the Journal of Economic Perspectives has just run a short symposium on the topic. One of the contributors to that symposium was David Autor, who wrote an interesting and sober analysis of the impact of technology on employment entitled ‘Why are there still so many jobs? The history and future of workplace automation’. Autor doesn’t deny the impact of technology on employment, but he doesn’t quite share Brynjolfsson and McAfee’s pessimism.

He makes three main arguments:

The Complementarity Argument: Most doomsaying discussions of technology and work focus on the substitution effect, i.e. the ways in which technology can substitute for labour. In doing so, they frequently ignore the complementarity effect, i.e. the ways in which technology can complement and actually increase the demand for human labour.

The Polarisation Argument: Recent technological advances, particularly in computerisation, have facilitated the polarisation of the labour market. Demand for skilled but routine labour has fallen, while demand for lower skilled personal service work, and highly educated creative work has risen. This has also facilitated rising income inequality.

The Comparative Advantage Argument: The polarisation effect is unlikely to continue much further into the future. Machines will continue to replace routine and codifiable labour, but this will amplify the comparative advantage that humans have in creative, problem-solving labour.

Through these three arguments, we see how Autor paints a nuanced picture of the relationship between work and technology. The robots aren’t quite going to take over, but they will have an impact. I want to try to explain and assess all three of Autor’s arguments over the next few posts. I start today by delving deeper into the complementarity argument.


1. Autor’s Challenge
Anyone with even a passing interest in the history of workplace automation will be familiar with the Luddites, particularly since the term ‘luddite’ has passed into popular usage. The Luddites were a movement in the early days of the industrial revolution. They were made up of textile workers and weavers. They went about sabotaging machines in textile factories (such as power looms) which they perceived as a threat to their skilled labour. Although their concerns were real, many now look back on the Luddites as a naive and fundamentally misconceived movement.

The Luddites feared that machines would rob them of employment, and while that may have been true for them in the short term, it was not indicative of a broader trend. The number of jobs has not dramatically declined in the intervening 200 years. What the Luddites missed was the fact that displacement of humans by labour-saving technologies in one domain could actually increase aggregate demand and open up opportunity for employment in other domains.

Agriculture provides a clear illustration of this phenomenon. There is very clear evidence for a substitution effect in agriculture. As Autor notes:

In 1900, 41 percent of the US workforce was employed in agriculture; by 2000, that share had fallen to 2 percent (Autor 2014), mostly due to a wide range of technologies including automated machinery. 
(Autor 2015, 5)

And yet, despite this clear evidence of a substitution effect, we haven’t witnessed a rise in long-term structural unemployment. This is so even though other industries have undergone similar forms of substitution. Autor thinks that this should be puzzling to those like Brynjolfsson and McAfee who think that technology could lead to long-term structural unemployment. This gives rise to something I will call ‘Autor’s Challenge’:

Autor’s Challenge: ‘Given that these technologies demonstrably succeed in their labor saving objective and, moreover, that we invent many more labor-saving technologies all the time, should we not be somewhat surprised that technological change hasn’t already wiped out employment for the vast majority of workers? Why doesn’t automation necessarily reduce aggregate employment, even as it demonstrably reduces labor requirements per unit of output produced?’ 
(Autor 2015, 6)

In other words, before we start harping on about robots stealing our jobs in the future, we should try to explain why they haven’t already stolen our jobs. If we can do this, we might have a better handle on the future trends.


2. The Complementarity Effect
Autor thinks that the explanation lies in the complementarity effect. This effect adds some complexity to our understanding of the relationship between labour and technology. The previously-mentioned substitution effect supposes that the relationship between a human worker and a robot/machine is, in essence, a zero-sum game. Once the machine can do the job better than the human, it takes over and the human loses out. The complementarity effect supposes that the relationship can be more like a positive-sum game, i.e. it might be that as the robot gets better, no one really loses out and everyone gains.

Many jobs are complex. Several different ‘inputs’ (involving different skills and aptitudes) are required to produce the overall economic or social value. Consider the job of a lawyer. They must have a good working knowledge of the law, be able to use legal research databases, craft legal arguments, meet with and advise clients, schmooze and socialise with them if need be, negotiate settlements with other lawyers, manage their time effectively, and so on. Each of these constitutes an ‘input’ that contributes to their overall economic value. They all complement each other: the better you are at all of these things, the more economic value you produce. Now, oftentimes these inputs are subject to specialisation and differentiation within a given law firm. One lawyer will focus on schmoozing, another on negotiation, another on research and case strategy. This specialisation can be a positive-sum game (as Adam Smith famously pointed out): the law firm’s productivity can greatly increase despite the specialisation. This is because it is the sum of the parts, not the individual parts, that matters.

This is important when it comes to understanding the impact of technology on labour. To date, most technologies are narrow and specialised. They substitute or replace humans performing routine, specialised tasks. But since the economic value of any particular work process tends to be produced by a set of complementary inputs, and not just a specialised task, it does not follow that this will lead to less employment for human beings. Instead, humans can switch to the complementary tasks, often benefitting from the efficiency gains associated with machine substitution. Indeed, the lower costs and increased output in one specialised domain can increase demand for labour in other, complementary domains.

Autor illustrates the complementarity effect by using the example of ATMs and bank tellers. ATMs were widely introduced to American banking in the 1970s, with the total number increasing from 100,000 to 400,000 in the period from 1995 to 2010 alone. ATMs substitute for human bank tellers in many routine cash-handling tasks. But this has not led to a decrease in bank teller employment. On the contrary, the total number of (human) bank tellers increased from 500,000 to 550,000 between 1980 and 2010. That admittedly represents a fall in percentage share of workforce, but it is still surprising to see the numbers rise given the huge increase in the numbers of ATMs. Why haven’t bank tellers been obliterated?

The answer lies in complementarity. Routine cash-handling tasks are only one part of what provides the economic value. Another significant part is relationship management, i.e. in forging and maintaining relationships with customers, and solving their problems. Humans are good at that part of the job and hence they have switched to fulfilling this role.

Increasingly, banks recognized the value of tellers enabled by information technology, not primarily as checkout clerks, but as salespersons, forging relationships with customers and introducing them to additional bank services like credit cards, loans and investment products. 
(Autor 2015, 7)

Thus, complementarity protected human employment from technological displacement. Indeed, Autor argues that it may even have improved things for these workers as their new roles required higher educational attainment and attracted better pay. The efficiency gains in one domain could consequently facilitate a positive sum outcome.

It is worth summarising Autor’s argument. The following is not formally valid, but captures the gist of the idea:


  • (1) Many work processes draw upon complementary inputs, whereby increases in one input facilitates or requires increases in another, in order to generate economic value.

  • (2) In many cases, technology can substitute for some of these inputs but not all.

  • (3) Humans are often good at fulfilling the complementary, non-substituted roles because those roles rely on hard-to-automate skills.

  • (4) Thus, even in cases of widespread technological substitution, the demand for human labour is not always reduced.


How does this chain of reasoning stack up?



3. Threats to the Complementarity Effect
There is certainly something to it: work processes clearly do rely upon complementary inputs to generate economic value. There is plenty of room for positive sum interactions between humans and robots. But it is not all a bed of roses. Autor himself acknowledges that there are three factors which modulate the scale and beneficial impact of the complementarity effect. They are:

Capacity for complementarity: In order to benefit from the complementarity effect, workers must be able to perform the complementary roles. If workers are only capable of performing the substitutable role, they will not benefit. For instance, it is possible (maybe even likely) that many bank tellers were not good at relationship management. They undoubtedly lost their jobs to ATMs (or saw their roles diminished and their pay packets cut).

Elasticity of labour supply: Elasticity is an economic concept used to describe how responsive demand or supply is to changes in other phenomena (usually price). Elasticity of labour supply refers to how much the supply of labour increases (or decreases) in response to changes in the price demanded for labour. This modulates complementarity in the following way: Workers capable of fulfilling the complementary roles may not benefit from the increased demand for their labour if it is possible for other workers to flood the market and fulfil complementary tasks. This may have happened with the rise in lower paid personal service workers in the wake of computerisation in the late 20th century. I’ll talk about this more in the next entry.

Output elasticity of demand and income elasticity of demand: This refers to how much demand for a particular product or service increases or decreases in response to increases in productivity and income. In essence, if there is more of a product or service being supplied, and people have more money that they can spend on that product or service, will demand actually go up? The answer varies and this affects the impact of technology on employment. In the case of agricultural produce, demand probably won’t go up. There is only so much food and drink people require each day. This likely explains why the percentage of household income spent on food has steadily declined over the past century despite huge technologically-assisted gains in agricultural productivity. Contrariwise, demand for healthcare has dramatically increased in the same period, despite the fact that this is an area that has also witnessed huge technologically-assisted gains in productivity. Why? Because people want to be healthier (or avoid disease) and this is a sufficiently fuzzy concept to facilitate increased demand.


This last factor is crucial and provides another part of the response to Autor’s challenge. Part of the reason why there are still so many jobs is that people’s demands don’t remain static over time. On the contrary, their consumption demands usually increase along with increases in income and productivity. Autor provides an arresting illustration of this. He argues that an average US worker living in 2015 could match the standard of living of the average worker in 1915 by simply working for 17 weeks a year. So why do they work for so much longer? Because they’re not satisfied with that standard of living: they’ve tasted the possibility of more and they want it.
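A quick back-of-envelope gloss on that figure (my own arithmetic, inferred from Autor's claim rather than quoted from him): if 17 weeks of work in 2015 buys what 52 weeks of work bought in 1915, then real earnings per week of work must have roughly tripled over the century:

$$\frac{17}{52} \approx \frac{1}{3} \quad \Longrightarrow \quad w_{2015} \approx 3 \times w_{1915}$$

where $w$ denotes real weekly earnings.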

Something strikes me about this analysis of technology and employment. The complementarity effect is, no doubt, real. But its ability to sustain demand for human labour in the medium-to-long term seems to depend on one crucial assumption: that technology will remain a narrow, domain-specific phenomenon, i.e. that there will always be this complementary space for human workers. But what if we can create general artificial intelligence? What if robot workers are not limited to routine, narrowly-defined tasks? In that case, they could fill the complementary roles too, thereby negating the increased demand for human workers. Indeed, this was one of the central theses of Brynjolfsson and McAfee’s book. They were concerned about the impact of exponential and synergistic technological advances on human employment. They would argue that Autor’s lack of pessimism is driven by a misplaced fealty to historical patterns.

Think about it this way. Suppose there are ten complementary inputs required for a particular work process. A hundred years ago all ten inputs were provided by human workers. Ninety years ago machines were invented that could provide two of these inputs. That was fine: humans could switch to one or more of the remaining eight inputs. Then, fifty years ago, more machines were invented. They could provide two more of the inputs. Humans were limited to the remaining six, but they were happy with this because there was increased demand for those inputs and they paid better. All was good. But then, a few years ago, somebody invented new machines that not only replaced four more of the inputs, but also did a better job than the older machines on the four previously-replaced inputs. Suddenly there were only two places left for human labour to go. But still people were happy because these roles were the most highly skilled and commanded the highest incomes. The complementarity effect continued to hold. Now, fast forward into the future. Suppose somebody invents a general machine learning algorithm that fulfils the final two roles and can be integrated with all the pre-existing machines. A technological apotheosis of sorts has arrived: the technological advances of the past hundred years have all come together and can now completely replace the ten human inputs. People didn’t realise this would happen: they were tricked by the historical pattern. They assumed technology would only replace one or two inputs and that they could fill the complementary space. They neglected both the combined impact of technology, and the possibility of exponential growth.
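For concreteness, here is a toy simulation of that scenario. Every number, label and date is illustrative; nothing below is drawn from real employment data:

```python
# A toy version of the ten-inputs scenario sketched above.
# All eras and groupings are illustrative assumptions, not data.
inputs = {f"input_{i}": "human" for i in range(1, 11)}

waves = [
    ("90 years ago", ["input_1", "input_2"]),
    ("50 years ago", ["input_3", "input_4"]),
    ("a few years ago", ["input_5", "input_6", "input_7", "input_8"]),
    ("general AI (hypothetical)", ["input_9", "input_10"]),
]

for era, automated in waves:
    for name in automated:
        inputs[name] = "machine"
    human_share = sum(v == "human" for v in inputs.values()) / len(inputs)
    print(f"{era}: human share of inputs = {human_share:.0%}")

# The complementarity effect holds so long as the human share is positive;
# the final (hypothetical) wave drives it to zero.
```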

That was the type of scenario Brynjolfsson and McAfee were warning us about and it seems unaffected by Autor’s claims for the complementarity effect. To link it back to the argument presented in the previous section, it seems like the possibility of general machine intelligence (and/or the synergistic effects of many technological advances) could cast premise (2) into doubt.

To be fair to him, Autor has a response (of sorts) to this. He is sceptical about the prospects for general machine intelligence and the likelihood of machine learning having a significant displacement effect. This features heavily in his defence of the comparative advantage argument. I’ll be looking at that in a future entry.

Saturday, September 5, 2015

Interview about Superintelligence, the Orthogonality Thesis and AI Doomsday Scenarios



Adam Ford interviewed me this morning about some of the issues arising from AI and existential risk. We covered the arguments from Nick Bostrom's book Superintelligence, focusing in particular on his orthogonality thesis and his argument for AI doom, as well as on some of my criticisms of his argumentative framework. We also took some interesting detours from these topics.

Viewing notes: Adam's connection was lost at around the 33 min mark, so you should skip from there to roughly the 38 min mark. Also, I am aware that I fluffed Hume's example about the destruction of the earth and the scratching of one's finger. I realised it at the time, but hopefully the basic gist of the idea got through. I also didn't quite do justice to normative theories of rationality and how they feed into criticisms of the orthogonality thesis.

If you want to read more about these topics, my conversation with Adam was based on the following blog posts and papers:


For all my other writings on intelligence explosions and related concerns, see here.

Tuesday, September 1, 2015

A Rawlsian Approach to Intoxicated Consent to Sex?

Should we choose standards of consent from behind a veil of ignorance?


People are often mildly to severely intoxicated when they have sex. This creates a problem. If someone signals consent to sex whilst voluntarily intoxicated, should that consent be treated as morally/legally valid? I have been very slowly working my way through Alan Wertheimer’s excellent paper on this topic (cleverly entitled ‘Intoxicated Consent to Sexual Relations’). So slow has been my progress that I have actually written three previous posts examining the complex web of moral claims associated with it. But in doing so I have yet to share Wertheimer’s own view. Today, I finally make up for this deficit.

A brief review of previous entries is in order. First, recall that throughout this series the focus is on the heterosexual case involving a man (who may or may not be intoxicated) who has sex with an intoxicated woman. The reason for this focus is that this is probably the most common scenario from a legal perspective and the one that reveals the tensions between traditional liberal legal theories and certain feminist theories. One of the ways in which these tensions are revealed is when it comes to the relationship between personal responsibility and consent. It is widely accepted that voluntary intoxication does not absolve one of responsibility for one’s actions. This widespread agreement was utilised by Heidi Hurd in her argument that intoxicated consent should be valid. Otherwise, she says, we are in the unusual position that an intoxicated man is responsible for raping an intoxicated woman, but she herself is not responsible for signalling consent. Conversely, there are those who argue that the kind of victim-blaming that goes on in such sexual offence cases is perverse. Susan Estrich makes this case by arguing that just as we would not hold someone responsible for being assaulted if they walked down a dark alleyway at night, so too should we not hold a woman responsible just because she was intoxicated at the time of a sexual assault.

Both Hurd’s and Estrich’s arguments were examined in a previous entry. Both were found wanting. Hurd’s argument was problematic because it assumed that the kinds of mental capacities involved in making ascriptions of responsibility were the same as those involved in assessing the validity of consent. This is not the case: there is good reason to suppose that higher mental capacities (ones that are more likely to be impaired by even mild degrees of intoxication) are required for valid consent. Likewise, Estrich’s arguments were found to be lacking because her analogies involved cases where people were clearly the victims of crime. The difficulty in the intoxicated consent case is that if the signalled consent is valid, no crime has taken place. So you really have to determine the validity of the consent before you can appeal to these moral equivalencies.

The upshot of all this is that there is no straightforward relationship between claims about intoxicated responsibility and intoxicated consent. There are more complex moral variables at play. Wertheimer’s goal is to reveal these variables and see whether they can help us to answer our opening question: is intoxicated consent valid? As we shall see, Wertheimer’s answer to this question involves a quasi-Rawlsian approach to setting the standards for sexual consent.


1. Intoxicated Consent in non-Sexual Cases
A useful window into the complex variables at play is to look at intoxicated consent in non-sexual cases. Wertheimer starts with the following:

Major Surgery: ‘Consider consent to a medical procedure. It seems entirely reasonable that a patient’s voluntary intoxicated consent to a major surgery should not be treated as valid if B’s intoxication is or should be evident to the physician, even if the physician has provided all the relevant information. A physician cannot say, “She was drunk when she came in to sign the consent form. She’s responsible for her intoxication, not me. End of story.”’ (Wertheimer 2001, 389)

This sounds reasonable. If someone walked into a doctor’s surgery after a few drinks and tried to consent to having her leg amputated, a doctor would surely be obliged to tell her to come back at another time. But what does this intuition reveal about the relationship between intoxication and consent? Wertheimer thinks it reveals that principles of consent are sensitive to at least three sorts of considerations:

Relative Costs: The principles of consent are sensitive to the ‘costs of the process of obtaining consent relative to just what is at stake’. In other words, the higher the potential costs, the more rigorous we should be in ensuring that the consent is valid. We would doubt the validity of intoxicated consent to having one’s leg amputated, but we would probably accept intoxicated consent to the use of a tongue depressor. There is less at stake in the latter case.

Possible Errors: The principles of consent are sensitive to the two kinds of error that might arise: (i) false positives, i.e. treating someone’s consent as valid when it is not; and (ii) false negatives, i.e. treating someone’s consent as invalid when it is. To put it another way, the standards for consent have an impact on both positive autonomy (i.e. the ability to get what we want) and negative autonomy (i.e. the ability to avoid what we do not want). We need to be sensitive to those impacts when setting the appropriate standards (a stylised numerical sketch of this trade-off follows this list).

Feasibility: The principles of consent are sensitive to both the possibility and feasibility of obtaining high quality consent. The medical context is instructive here again. If you have an elderly patient suffering from dementia, then it may simply be impossible or infeasible to get high quality consent to medical treatment (i.e. we may always be unsure whether their signals convey their higher-order preferences). But treatment may be necessary for their well-being so we may be satisfied with a less-than-ideal standard of consent. Contrariwise, in the case of the intoxicated patient looking to have their leg amputated, higher quality consent is feasible if we simply wait until their intoxication has ended. Consequently, we should be less satisfied with low quality consent in that case.
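To see how the second of these considerations might be formalised, here is a stylised sketch of the trade-off between the two error types. All probabilities and costs are invented for illustration; the point is only that the optimal standard shifts with what is at stake:

```python
# A stylised sketch of the false-positive / false-negative trade-off.
# All probabilities and costs below are invented for illustration.
def expected_error_cost(p_fp, cost_fp, p_fn, cost_fn):
    """Expected cost of a given consent standard.

    p_fp / cost_fp: chance and harm of treating consent as valid when it
                    is not (negative autonomy undermined).
    p_fn / cost_fn: chance and harm of refusing consent that was valid
                    (positive autonomy undermined).
    """
    return p_fp * cost_fp + p_fn * cost_fn

# Major surgery: false positives are catastrophic, so the strict standard
# (low p_fp, higher p_fn) has the lower expected cost.
print(expected_error_cost(p_fp=0.01, cost_fp=1000, p_fn=0.30, cost_fn=5))  # 11.5 (strict)
print(expected_error_cost(p_fp=0.20, cost_fp=1000, p_fn=0.01, cost_fn=5))  # 200.05 (lax)

# Tongue depressor: false positives barely matter, so the lax standard wins.
print(expected_error_cost(p_fp=0.20, cost_fp=1, p_fn=0.01, cost_fn=5))     # 0.25 (lax)
print(expected_error_cost(p_fp=0.01, cost_fp=1, p_fn=0.30, cost_fn=5))     # 1.51 (strict)
```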


Wertheimer considers how these three factors impact upon our moral judgments in several other cases; I won’t mention them all here. One that is worth mentioning — because it highlights tensions between certain feminist theories and liberal principles of consent — is the standard of consent deemed appropriate when seeking an abortion. Many feminists are in favour of allowing women ready access to abortion. In favouring this, they often oppose or resist high standards of consent to abortion. For instance, they will oppose age restrictions, mandatory lectures about the development of the foetus, waiting periods stipulated to avoid hasty decisions, and so on. Why do they oppose these things? Wertheimer argues that it is, first, because there are no natural defaults when it comes to setting standards of consent, and second, because they see these restrictions as part of a coordinated attack on women’s positive autonomy (i.e. their desire to access services they want to access). When the standards are too high, positive autonomy is undermined (because the system errs on the side of too many false negatives).

The conclusion to be drawn from all this is that, when it comes to intoxicated consent to sex, we need to factor in the three considerations mentioned above and examine the consequences of setting high/low standards of consent.


2. So how should we view intoxicated consent?
When we do so, what might our conclusion be? Analogies aren’t always helpful when it comes to understanding the ethics of sexual interactions. Some people insist that there is something unique and special about those interactions that cannot be fully captured by analogical reasoning. But analogical reasoning is often all we have in ethical cases. In this vein, Wertheimer pursues one last analogy before turning to intoxicated consent to sex. The analogy is with the case of intoxicated gambling.

The legal position is usually that gamblers bear the moral and financial burden associated with intoxicated gambling. In other words, if you go into a casino, consume copious amounts of alcohol, and gamble away a significant amount of money, then you usually suffer the consequences (it does, of course, depend on whether gambling is legal in the relevant jurisdiction). Is this the right approach to take? Maybe, but it may well depend on how much the gambler stakes on their bets. If they gamble away a few hundred or thousand dollars, we might hold them to it; but if they gamble away their house or all their earthly possessions, we might view it differently. Again, the quality of the consent required would vary as a function of what the costs are likely to be.

Why might we take this attitude toward intoxicated gambling? Here’s where Wertheimer makes his main contribution. He says that one way to work out the right standard of consent is to adopt an ex ante test. In other words, ask the would-be intoxicated gamblers, prior to the fact (i.e. before they are intoxicated and before they know whether they have won or lost on their gambles), what standard of consent they would like to apply to their intoxicated gambling. In proposing this question, Wertheimer is advocating a methodology that is somewhat akin to Rawls’s famous methodology for deriving principles of distributive justice. Rawls argued that in order to settle on a just distribution of social goods, we should imagine would-be citizens negotiating on the relevant principles behind a veil of ignorance (i.e. without knowing where they will end up in society). Wertheimer is adopting a similar veil of ignorance test for his would-be gamblers.

Wertheimer’s ex ante test: When deciding on the appropriate set of consent principles for any intoxicated activity, we should ask the would-be participants which set of principles they would prefer to govern that activity before the fact (i.e. before they have actually engaged in that activity whilst intoxicated).

What are the results of this test? A full analysis of the gambling case would require a longer paper but we can make some suggestions. One is that would-be gamblers might favour a relatively low standard of consent (at least when the stakes are low). Why is that? Because they probably find the combination of alcohol consumption and gambling to be pleasurable. Hence, they might be inclined to favour a set of consent principles that allows them to engage in that combination of activities (up to a certain level of potential loss). In this sense, they tweak the precise mix of consent principles so as to favour their positive autonomy, and err slightly on the side of more false positives than negatives.
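One way to make the gambling version of the test concrete is as a stylised expected-value comparison behind the ‘veil’. Everything below is an invented illustration of the reasoning, not Wertheimer’s own formalism:

```python
# A stylised ex ante comparison for the gambling case (all values invented).
# Behind the "veil", the would-be gambler weighs the pleasure of combining
# drinking and gambling against the expected losses each standard permits.
from dataclasses import dataclass

@dataclass
class Standard:
    name: str
    pleasure_value: float    # value of being allowed the combined activity
    p_bad_outcome: float     # chance of a seriously regretted loss
    cost_bad_outcome: float  # size of that loss

    def ex_ante_value(self) -> float:
        return self.pleasure_value - self.p_bad_outcome * self.cost_bad_outcome

strict = Standard("strict (no intoxicated bets honoured)", 0.0, 0.0, 0.0)
liberal_capped = Standard("liberal, stakes capped", 10.0, 0.1, 20.0)
liberal_uncapped = Standard("liberal, stakes uncapped", 12.0, 0.1, 500.0)

for s in (strict, liberal_capped, liberal_uncapped):
    print(f"{s.name}: {s.ex_ante_value():+.1f}")

# With these invented numbers the capped-liberal standard wins ex ante,
# mirroring the suggestion that gamblers would accept low-stakes
# intoxicated consent but not unlimited exposure.
```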

How about intoxicated consent to sex? Again, the procedure is the same: you ask women ex ante which mix of consent principles they would favour for intoxicated sexual encounters. They could favour a strict approach — i.e. no consent signal provided whilst intoxicated is valid — or a more liberal approach — where this comes in various degrees. When choosing the standard, they will need to pay attention to the level of harm involved relative to the cost of obtaining high quality consent, the feasibility of obtaining high quality consent, and the type of sexual autonomy that ought to be favoured.

Can we say anything more concrete? This is one of the more frustrating aspects of Wertheimer’s article. After his lengthy analysis, he still doesn’t offer a preferred policy proposal. But he does say three interesting things. First, he says that there are reasons to think that positive sexual autonomy might favour the validity of at least some instances of intoxicated consent. Indeed, it might be that the combination of alcohol consumption and sexual activity is highly valued:

It’s not just that some women may wish to engage in sex and drinking simultaneously. Rather, drinking to the point of at least moderate intoxication may be crucial to what some regard as a desirable sexual and social experience. We do well to remember that a woman may choose to become (moderately or even severely) intoxicated precisely because she wants to suspend, curtail, or weaken some of her stable psychological traits. 
(Wertheimer 2001, 395)

It’s always dangerous when a man purports to say anything about what we would ‘do well to remember’ when it comes to women’s sexual preferences. But this does seem intuitively right to me. I think moderate intoxication is part and parcel of many positive social and sexual interactions, and that people often desire the intoxicated state because of its disinhibiting effects. That said, Wertheimer’s second key point is that this potential value needs to be balanced against the emotional and physical harms of an intoxicated sexual encounter. Here, he thinks we need to know much more about the effects of such encounters, and what the potential harms of erring on the side of false positives would be. The tricky question of regret also enters the fray:

The validity of a woman’s intoxicated consent to sexual relations is not a function of her actual ex post regret or satisfaction with respect to a given sexual encounter. The point of B’s sexual consent is always ex ante: it renders it permissible for A to have sexual relations with her. But the principles of consent that establish when we should regard a woman’s consent token as valid may take account of the ex ante disvalue of her ex post regret. If the evidence suggests that women are, in fact, likely to severely regret sexual relations to which they have given intoxicated consent, that is some reason to regard intoxicated consent as invalid. 
(Wertheimer 2001, 395-6)

This brings us to Wertheimer’s third key observation, which is that the harm of any such sexual encounter is likely to vary depending on the prior relationship between the two individuals. This is problematic insofar as it seems to allow for past sexual history to influence our moral assessment of the relevant consent standards (which, as anyone who has studied the history of rape laws will know, is highly contested). Nevertheless, it is part of Wertheimer’s view that consent standards may vary relative to the potential marginal harm of a sexual encounter. And the potential marginal harm from a first time intoxicated sexual encounter is likely to be higher than the potential marginal harm arising from an encounter between two long-term partners. He uses the following example to illustrate his approach:

Suppose that a married couple hosts a New Years Eve party, get roaring drunk, falls into bed, and has sex. It would be crazy to think that the husband does something seriously wrong here simply because his wife consents while quite intoxicated, unless the wife had previously indicated that she does not want to have her intoxicated consent taken seriously… Why do I think this view would be crazy? Because (in part) there is no ‘non-autonomy based’ physical or psychological harm to a marginal sexual interaction with a person with whom one frequently has sexual relations as contrasted with the case where a woman might have avoided sexual relations with that person altogether. 
(Wertheimer 2001, 396)


3. Conclusion
To briefly sum up, there is no simple rule when it comes to intoxication and sexual consent. The consistency thesis, which holds that the same standard should apply to sexual consent as applies to responsibility, is unattractive because it assumes the capacities for consent are equivalent to the capacities for responsibility. The impermissibility thesis, which holds that intoxicated consent should never be deemed valid, is unattractive both because the analogies used to support it are unhelpful and because of its potential impact on positive sexual autonomy.

Instead, the standard for consent should vary as a function of three variables: (i) the relative costs of procuring consent vis-a-vis the potential harms of the activity being consented to; (ii) the preference for false positives over false negatives (i.e. the value of favouring positive autonomy over negative autonomy); and (iii) the feasibility and/or possibility of procuring high quality consent. In figuring out how these variables work in the case of intoxicated sexual consent, we should adopt an ex ante test. This means we should ask the would-be intoxicants which standard of consent they would prefer prior to engaging in the intoxicated variant of the activity.

In doing so, we will probably learn that: (a) there is some value in allowing for some degree of intoxicated consent (from the perspective of positive sexual autonomy); (b) this value must be balanced against the potential harms of intoxicated sexual activity (including the likely ex post regret); and (c) the appropriate standard is likely to vary depending on the potential marginal harm of the sexual encounter (where this is likely to be lower in the case of long-term partners than it is in the case of new ones).