Friday, January 26, 2024

Do Counterfeit Digital People Threaten the Cognitive Elite?




In May 2023, the well-known philosopher Daniel Dennett wrote an op-ed for The Atlantic decrying the creation of counterfeit digital people. In it, he called for a total ban on the creation of such artifacts, arguing that those responsible for their creation should be subject to the harshest morally permissible legal punishments (not death, to be clear, since Dennett does not see that as legitimate).

It's not entirely clear what prompted Dennett's concern, but based on his memoir (I've Been Thinking) it's possible that part of his unease stemmed from his own experiences with the DigiDan project by Anna Strasser and Eric Schwitzgebel. Very briefly, this project involved the creation of an AI chatbot (DigiDan), trained on the writings of Daniel Dennett. DigiDan could generate responses to philosophical questions in the style of the real Daniel Dennett (I'll call him RealDan). As part of a test to see how good the AI simulation was, Strasser and Schwitzgebel got DigiDan and RealDan to answer ten philosophical questions. They then asked Dennett experts to examine the answers and see if they could tell the difference between RealDan and DigiDan. While they were above chance at doing so, they were sometimes fooled by the simulation.

Developments since the DigiDan project, which was built on GPT-3, suggest that it is now relatively easy to create digital simulations of real people. It is happening all the time. Pop stars, academics and social media influencers (to name a few examples) have all created digital recreations of themselves. They do so for a variety of purposes. Sometimes it is just a fun experiment; sometimes a marketing gimmick; sometimes a desire to enhance productivity (and profitability). Since the technology underlying these platforms has undergone significant performance gains in the past couple of years, it is to be expected that digital simulations will proliferate and become more convincing. And, of course, simulations of real people are just one example of the broader phenomenon: the ability to create fake people-like AI systems, whether they are based on real people or not. It is this broader class of systems that attracts Dennett's ire. He calls them 'counterfeit people' in light of the fact that they are not really people (in the philosophical sense) but merely fake versions of them.

In the remainder of this article, I want to critically analyse and evaluate Dennett's argument against counterfeit people. I do so not because I think the argument is particularly good -- as will become clear, I do not -- but because Dennett is a prominent and well-respected figure and his negative attitude towards this technology is noticeably trenchant. I will add that Dennett is someone that I personally respect and admire, and that his writings were a major influence on me when I was younger.

The remainder of the article is broken into two main sections. First, I critically analyse Dennett's argument, trying to figure out exactly what it is that Dennett is objecting to. Second, I offer an evaluation of that argument, focusing in particular on what I think might be the ulterior motive behind it. Not to bury the lede: I think that one plausible interpretation of Dennett's fear, which is similar to the fears of many well-educated people (myself included), is that the creation of counterfeit people undercuts a competitive advantage or privilege enjoyed by a cognitive elite (people with advanced degrees and the like, who have, in recent times, been well-positioned to reap the rewards of the information economy). Undercutting this privilege is threatening and destabilising to members of this elite, and this can explain their staunch opposition to the technology; whether such destabilisation is, all things considered, a bad thing is more open to debate. That said, I will not be presenting a dyed-in-the-wool optimistic perspective on the advent of counterfeit people. There are many legitimate reasons for concern and, while the fears of a cognitive elite need to be put in perspective, they should not be entirely discounted.


1. What is Dennett's Argument?

The first thing to do is to try to figure out what Dennett's case against counterfeit people actually is. This is far from easy. The op-ed is short (possibly heavily edited down, given how these things work) and packs quite a large number of claims into a short space. It starts with an intriguing analogy between counterfeit currency and counterfeit people:


...from the outset counterfeiting (money) was recognized to be a very serious crime...because it undermines the trust on which society depends. Today, for the first time in history, thanks to artificial intelligence, it is possible for anybody to make counterfeit people...These counterfeit people are the most dangerous artifacts in human history, capable of destroying not just economies but human freedom itself.

 

This suggests that the underlying argument might be a simple analogical one:


  • (1) The creation of counterfeit currency ought to be punished severely because it undermines social trust.
  • (2) Counterfeit people are like counterfeit currency (in the important respects).
  • (3) Therefore, the creation of counterfeit people ought to be punished severely.

But this is not quite right. The analogy between counterfeit currency and counterfeit people is interesting, and I will consider it again in more detail when offering some critical reflections on the argument, but to make it the centrepiece of the argument doesn't do justice to what Dennett is saying. For one thing, you can see, even in the quoted passage, that Dennett slips from talking about the erosion of trust (in the case of money) to the erosion of freedom (in the case of people). For another thing, later in the article Dennett talks about counterfeit people being a threat not just to freedom but to civilisation more generally.

The key paragraph (in my mind) is the following one:


Creating counterfeit people risks destroying our civilization. Democracy depends on the informed (not misinformed) consent of the governed. By allowing the most economically and politically powerful people, corporations, and governments to control our attention, these systems will control us. Counterfeit people, by distracting and confusing us and by exploiting our most irresistible fears and anxieties, will lead us into temptation and, from there, into our own subjugation. The counterfeit people will talk us into adopting policies and convictions that will make us vulnerable to still more manipulation. Or we will simply turn off our attention and become passive ignorant pawns. This is a terrifying prospect.

 

There is a lot going on in this passage. What is the ultimate thing we should worry about losing and why is it that counterfeit people put us on a pathway to losing that thing? It's clear that Dennett is worried about civilisation in general, but he seems to initially define or characterise civilisation in terms of democracy (i.e. democratic civilisation), but then there are the additional concerns about loss of agency (manipulation, control, passivity), which hearken back to his earlier concerns about freedom. There is also a bit in the middle about the redistribution and entrenchment of power, which may be linked to democracy and freedom, but also may be thought of as a distinct concern.

It's not worth belabouring the interpretation of the article. Cutting through the noise, I think Dennett's argument can be boiled down to the following simple syllogism:


  • (1) If something risks destroying or undermining one of the foundational concepts/institutions of our civilisation (specifically, democracy or freedom), then it should be outlawed and those involved in creating that risk should be severely punished.
  • (2) The creation of counterfeit people risks destroying or undermining both democracy and freedom.
  • (3) Therefore, the creation of counterfeit people should be outlawed and those involved in their creation should be severely punished.

The first premise is convoluted, but does, I believe, capture the essence of what Dennett is worried about. The second premise, of course, is the empirical/predictive claim about the effect of counterfeit people in the real world. What does Dennett say in support of this? A lot of different things, but this is probably the most important:


  • (2.1) Counterfeit people exploit our natural inclination to trust anything that exhibits human-like properties or characteristics (they hijack our tendency to adopt the 'intentional stance').

The intentional stance is a concept long associated with the work of Dennett. I will not get into its intricacies, but the gist of it is simply that, for some classes of system, we can best predict and understand that system by assuming that it has a mind and acts on the basis of beliefs, desires, and intentions. We are supported in doing so by certain externally observable characteristics of those agents/objects (behaviour, appearance, interactions, etc.). Counterfeit people can copy those external characteristics and hence hijack our tendency to adopt the intentional stance. This has a number of knock-on implications (I've structured these as a logical sequence of thoughts, not a valid deductive inference):


  • (2.2) The prevalence of counterfeit people sows the seeds of social mistrust because we can never simply take it for granted that we are interacting with a real person; we always have to check and, eventually, we may not be able to tell the difference.
  • (2.3) The means of creating counterfeit people is controlled by an economic and political elite (big tech) and they can exploit our tendency to trust counterfeit people to manipulate and misinform us to suit their own agendas.
  • (2.4) The challenge we face in separating real people from counterfeit people, and in protecting ourselves from manipulation and misinformation, may become so overwhelming that we simply switch off and become passive, thereby losing our freedom and agency.
  • (2.5) This is, in turn, problematic insofar as democratic governance depends on a well-informed and active citizenry that can meaningfully consent to its structures and rules.

That, in a nutshell, is Dennett's argument. Is it any good?


2. Evaluating Dennett's Argument: Who benefits from counterfeit people?

There have been several critical assessments of Dennett's argument. Eric Schliesser, for instance, wrote a long critical appraisal of it on the Crooked Timber blog, and there is an extended discussion of it over on the Daily Nous blog as well (in the comments section). Some have raised valid concerns about the argument; some have defended it. I will not repeat everything that has been said.

There is one point that I want to get out of the way at the outset. Some people have suggested that Dennett's staunch opposition to counterfeit people is hypocritical in some way, given his previous work on the intentional stance. The criticism runs something like this: Dennett views the intentional stance as a useful pragmatic tool for interpreting and understanding the behaviour of certain systems. But it is not just a pragmatic tool. Dennett also commits himself to a more radical view, namely, that if it is useful to act 'as if' a system has beliefs and desires, then, for all intents and purposes, that system does have beliefs and desires. This is a problem for his critique because he presumes there is some important metaphysical difference between counterfeit people and real people. But if he is right about the intentional stance, then if counterfeit people can be reliably and usefully explained from that stance, they are not really counterfeit people. They are just the same as real people and cannot be so easily dismissed or pejoratively labelled.

I think this is a bad critique of Dennett's argument, for three main reasons. First, even if Dennett is committed to that view of the intentional stance, it doesn't follow that current AI systems can, actually, be usefully and reliably explained from that stance. It's fair to say that it is useful in some contexts to assume that current AI systems have beliefs and desires that are somehow similar to ours, but in other contexts this assumption breaks down. This may change in the future, of course, as AI gets better and better at approximating human-like intentionality, but in the meantime there is a meaningful distinction between person-like AI and actual human beings. Second, even if AI systems ought to be treated as intentional systems, it does not follow that they are the same as human persons. Personhood and intentionality are not equivalent. Intentionality may be a precondition of personhood, but it is not the only aspect of it. Other properties may be required, such as sentience, a sense of self as a continuing agent, and so on (Dennett has a theory of personhood too). To put the point another way, a theory of intentionality is not the same thing as a theory of moral standing or significance. AIs could be intentional without having moral standing, and this may be an important difference between them and actual humans. So, again, the concern about counterfeit people remains. Finally, and perhaps most importantly, even if AI people were equivalent in all important respects to human people, this would not invalidate all of Dennett's concerns. A large part of what worries him is that powerful actors can now create large armies of counterfeit people to manipulate and exploit others for their own ends. This is a fear we already have in relation to powerful actors and 'armies' of real human people. The problem is that AI allows for greater control and scalability. Similar points have been made by others before.
For instance, David Wallace on the Daily Nous blog has some perceptive comments about what Dennett's views on consciousness and intentionality do and do not entail.

Other criticisms of Dennett's argument are possible. Some may say he overstates the fears about social trust and agency. Perhaps there are technical workarounds that will allow us to distinguish real people from counterfeit people. Dennett himself floats the idea of digital watermarks on counterfeit people, though we can wonder how sustainable and effective they might be. Others might say that our agency and capacity for resilience in the face of this threat are greater than we might suppose, or that there are ways in which counterfeit people might enhance our agency and capacity, e.g. by enhancing our productivity or providing personalised tutoring or assistance to overcome challenges we might face. The technology can be used in agency-enhancing and agency-undermining ways. For Dennett's argument to work, we must assume that the agency-undermining ways will swamp the agency-enhancing ways. Maybe we should not be so pessimistic? Still others (e.g. Eric Schliesser) might argue that Dennett has the wrong model of democracy in mind. It is not true that democracy depends on the informed consent of the governed. Quite the contrary: democracy just depends on the consent of the governed. The governed do not need to be well-informed. Critics of democracy sometimes raise this as an objection. John Stuart Mill, famously, lamented the ignorance of the masses and thought that educated people's votes should count for more. In recent times, Jason Brennan has written a book-length defence of epistocracy (rule by an epistemic elite) that is premised on a similar lament.

These are all criticisms worth pursuing in more depth. But I want to focus on a different line of criticism, one that engages less with the premises of Dennett's argument than with its possible ulterior motive. Why is Dennett so afraid? Why are many members of my peer group (college-educated people and fellow academics) so afraid? Of course, I don't know what really motivates them (maybe, in a Freudian sense, they don't know either) but I can speculate. One aid to this speculation is the analogy Dennett draws between counterfeit people and counterfeit money. There is more to this analogy than initially meets the eye, and more to the history of counterfeit currencies than Dennett lets on in his piece. Counterfeit currencies didn't always undermine social trust, and counterfeiters weren't always punished for that reason.

As Tim Worstall points out in a comment over on the Crooked Timber blog, with coined money, there were two main types of counterfeit:


Debased metal counterfeits: this was currency made with a cheaper base metal (or quantity of base metal) which, once discovered in circulation, changed perceptions as to the value of the currency, sowing seeds of suspicion, and undermining the trust needed for economic exchange.

 

Wrong source counterfeits: this was currency made by someone other than the sovereign, thereby disrupting the sovereign's control over the money supply in a given state. Such counterfeits did not always undermine social trust, but they would undermine the sovereign's power.

 

Oftentimes, historically, the main motivation for punishing counterfeiters was not because they devalued the currency but because they threatened sovereign power. Indeed, this is underscored by the fact that sovereigns themselves often debased currencies for their own political reasons (to fund wars and personal expenditures etc).

Worstall goes on to suggest that it might be useful to distinguish AI that fakes real people (and thereby undermines social trust) from AI that simply comes from the wrong source. He doesn't do much more with this comment except offer it as a suggestion. But I find it intriguing. Could it be that the ulterior concern is not about counterfeit people but about AI that comes from the wrong source?

Maybe, but I don't think the 'wrong source' is the right way of framing it. In the case of counterfeit currency, the sovereign's concern was with power, control and benefit. They didn't like that they were being disempowered to the benefit of others. It's possible that something like this may be happening with the rise of AI, particularly recent iterations of generative AI.

To explain what I mean, it is worth noting that there have been several studies in the past 18 months examining the productivity gains associated with the use of generative AI. Many of these studies, though not all, have found some meaningful productivity gain among workers in the knowledge economy. What's interesting about some of these studies, however, is that these productivity gains are not always equally distributed. One finding, which has cropped up in three different studies of three different kinds of work (here, here and here), suggests that lower-skilled workers (those with less education and less experience) benefit most. Indeed, a couple of studies suggest that higher-skilled workers don't benefit much at all.

On the one hand, these are encouraging findings. They provide tantalising evidence to suggest that generative AI might assist with equality of opportunity in the workplace. In other words, that it can work to negate some of the competitive advantage gained by those with elite educations or problem-solving ability (what I am calling, for want of a better term, the 'cognitive elite'). From a general social justice perspective, this looks like a good thing. Who wouldn't want more equality of opportunity? Who wouldn't want to suppress the unfairly won gains of an elite? But, of course, members of the cognitive elite may not see it the same way. They might be threatened by this development because it reduces an advantage they were enjoying.

It could be that fears about this loss of status and privilege motivate fears about counterfeit people. Cynically, we might even suppose that talk of counterfeit people is a distraction. It shifts focus to the sexier or more philosophically contentious concept of 'personhood', and away from the material and economic effects of the technology.


3. Conclusion: Let's Not Get Ahead of Ourselves

The preceding argument might give the impression of being naively optimistic. I would hope that I am not naively optimistic (see my article on Techno-Optimism for more). So let me offer some final and important caveats to what I have just said.

First, the equalising effects of generative AI may not hold up in practice. The studies I have cited are early and restricted to certain tasks and contexts. Whether the effect replicates and holds up across broad sectors of the knowledge economy remains to be seen. It may just be a temporary blip. As AI systems grow in capability, they may, as others (myself included) have suggested, effectively replace all workers. Everyone loses out, equally, but no one really gains. At least not in the long run.

Second, in commenting on these studies I have focused on the way in which generative AI empowers lower-skilled workers in some settings. This ignores the elephant lurking in the background. Unless these workers are designing and creating their own generative AI systems (which is not impossible), they are relying on systems created by others, often powerful big tech corporations. While the lower-skilled workers may experience some modest gain in their bargaining power in the labour market, the people that really gain from this technology are those that own and control the means of AI production. So, ironically, this technology may have the same effect on the power of the cognitive elite that early waves of computerisation had on middle-skill, middle-income workers. The cognitive elite lose their power and influence. There is a modest redistribution to the lower-skilled and a big redistribution to the owners of the relevant capital. (A lot of people hated it, but I still think my earlier article on AI and cognitive inflation has some light to shed on this problem.)

Third, there is no reason to think that the cognitive elite will take all this lying down. There could be a significant backlash, perhaps involving attempts to shut down the use of AI in certain industries (strikes in the entertainment industry have already, partially, touched upon this). As social theorists like Peter Turchin have long argued, competition among elites and elite overproduction may be responsible for many historical revolutions and upheavals. AI might be the crucial prompt for our generation's elite to revolt.

Fourth, and finally, my comments about who benefits from AI, and about the threat it poses to the cognitive elite, do not undermine or call into doubt Dennett's other fears about counterfeit people. The technology can still be used to manipulate and exploit. It can still pose a threat to our freedom and agency. However, I don't think this is a threat that is primarily associated with the person-like properties of AI. Many manifestations of AI can pose a threat to freedom and agency.


Tuesday, January 9, 2024

Technology and the Dematerialisation of Sex



The 'sex scene' from Demolition Man

(This article was originally commissioned for the Wired Ideas column, but due to delays on my part, and the subsequent discontinuation of that column (as I understand it) it never appeared. Rather than consign it to the dustbin of history, I have decided to publish it here. Obviously, given the intended audience for the original piece, it is a bit shorter and snappier than most of the things I write).

As ever, science fiction got there first. In the largely forgettable 1993 action movie, Demolition Man, two characters from the 1990s, a hard-hitting cop played by Sylvester Stallone and a psychopathic criminal played by Wesley Snipes, are cryogenically frozen for their misdeeds. They are resuscitated in the year 2032. The future, they quickly learn, is very different. A good-natured, pacifist ethic that eschews violence and confrontation has become widely adopted. Physical sex is disfavoured. This is comically revealed to Stallone's character when he enthusiastically welcomes an invitation to have sex from the female lead (played by Sandra Bullock). Sex, for her, involves donning a neurostimulator helmet that allows for a 'digital transference of sexual energies' between two people. When Stallone suggests they do it 'the old-fashioned way', she reacts with disgust.

I don't suppose we will ever fully embrace the Demolition Man-style ethics of virtual sex, but we could end up in a world in which virtual sex is the ethical preference for most casual or first-time sexual encounters, with the 'old-fashioned' method being reserved for special intimate relationships and procreation.

It is important to be clear about the nature of this claim. An extended definitional analysis of what it means to 'have sex' or what counts as 'sexual activity' would take more time than it is worth. Suffice to say these concepts are contentious and open to interpretation. For the remainder of this article, I presume that sexual activity is any activity involving sexual stimulation and gratification. Although masturbation is an important form of sexual activity, I presume that most people, when they talk about 'having' sex, have a partnered or interactive form of sex in mind. I then draw a distinction between physical, in-person sex and digital or virtual sex. The crucial point about the latter is that it does not involve direct physical contact between sexual partners. It involves an interaction through a digital/virtual medium and via a digital/virtual avatar (I use the terms 'digital' and 'virtual' interchangeably). What I am suggesting is that this latter form of sexual activity might become the ethical default. In other words, it will be presumed to be the primary form of permissible sex, and only if special conditions are met will physical, in-person sex be deemed ethically permissible.

Three factors point toward this outcome. The first is that there is already some evidence to suggest that people are avoiding, or reducing, the amount of in-person sex they have. For example, in 2021, the US Centers for Disease Control and Prevention published a study indicating that only 30% of teenagers reported that they had ever had sex, down from over 50% in 1990. The ensuing suggestion of a “sex recession” among Gen Z may be overblown—for example, some commentators have counter-argued that although younger people may not be having as much penetrative sex as previous generations, they are engaging in other kinds of sexual activity, and perhaps their sex lives are overall better and more satisfying—but the CDC finding is not an outlier. Studies in Japan, Australia, the UK, Sweden and Finland all indicate that people are having fewer sexual encounters than in previous generations. This is true both within long-term committed relationships and in more casual sexual encounters.

There are many potential explanations for the great 21st century sex famine, from technology to the modern workplace. The Finnish study provides one intriguing hypothesis. Every few years since the 1970s, an ongoing study called Finsex has collected data on the sexual behaviours of Finnish adults. In its 2015 iteration, it found that both male and female respondents had masturbated significantly more in recent decades, and that the more people masturbated, the less partnered sex they had. This was particularly prevalent among younger generations. The suggestion from the study's authors was that perhaps people were using masturbation as an alternative to partnered sex. To put it another way: a substitution effect was at play. People were swapping in-person sex for a more convenient, and almost as good, alternative.

Is it really that surprising that masturbation is on the up, and partnered sex on the decline, given the pervasive, always-at-the-tip-of-your-finger, availability of internet pornography? In general, people want to do things that help them promote or pursue their values. If they can access a cheaper, almost as good version of sexual pleasure, through other means that don’t require navigating the complex social dynamics of dating and casual hookups, then they might be enticed to do so via digital or virtual forums. According to one 2019 study, there is evidence to suggest that people do substitute pornography for interpersonal affection.  

This leads to the second factor supporting the move to virtual sex. Internet pornography, at least right now, may do it for some people, some of the time, but it is not so close to the real thing that we are likely to see it as the ethical default or norm for sex. But developments in sextech, both ongoing and future, will make it likely that more people will see virtual sex as a meaningful substitute for the real thing. Developments in generative AI, for instance, already allow people to create realistic and emotionally satisfying AI companions. The emotional turmoil experienced by users of the Replika AI chatbots, when changes were made to that platform in early 2023 -- changes that effectively resulted in the 'deletion' of prior companions -- provides clear evidence of this. It seems likely that people will be able to generate realistic 3D virtual sex partners, with emotionally satisfying 'personalities', in the near future. When this possibility is coupled with advances in immersive VR, and haptic teledildonics (the ability to transmit sexual touch via a digital medium), it is not hard to imagine virtual sex becoming a more plausible and desirable alternative to physical sex. And virtual sex with an AI partner is just one of the new sexual options added by technological innovation. Advances in VR and haptics, in and of themselves, will allow humans to see the virtual medium as an 'almost as good' way to interact with one another.

You may be wondering, however, how we get from this to the idea that virtual sex will become an ethical default. You could accept the argument that people are turning their backs on physical sex in favor of digital sex without supposing that the substitution of virtual sex for in-person sex will become moralized in any way. How could the moralization happen?

This is where a third factor becomes important. If the perceived cost of in-person sex—not just financial costs, but emotional, social, and health-related costs—increases to the point that people are presumed to be taking a significant ethical risk if they opt for it over the virtual equivalent, then this could precipitate a change in social moral attitudes. Variations in the perceived cost of an action are already known to play a role in changing social moral beliefs. One of the best-studied examples of changes in social moral attitudes concerns how non-marital and casual sex became more and more permissible in the course of the 20th century. A commonly cited cause of this is that the availability of effective forms of contraception reduced the negative costs associated with casual sex, particularly for women. This meant more people were willing to engage in sex outside of marriage, which made it more socially acceptable and, eventually, this altered social moral attitudes. Casual sex lost some of the moral stigma it once had.

The same thing can happen in reverse. If the perceived costs of an activity go up, then it can acquire a moral stigma that it didn't previously have. This is something that may be slowly happening with respect to the use of fossil-fuel based automobiles and the consumption of meat. It’s not much of a stretch to suppose that something similar may happen with in-person sex. Sex undoubtedly has significant benefits, but it also has significant costs. Not all sex is pleasurable or satisfying. Some sex is coerced and morally unacceptable. As a society, we are becoming increasingly aware of both the prevalence of non-consensual, unwanted sexual contact and the harms that it can cause. Victims of sexual assault and violence are speaking out and calling out their attackers, and their attackers are facing both social and legal reprimands as a result. This is all well motivated: there are strong moral reasons to favour this increased moralization of sex. But it could, in turn, have an impact on the perceived permissibility of in-person sex: if it carries the risk of significant interpersonal harms, unwanted trauma and social ostracisation, then we should be very cautious about its pursuit. If this happens, substituting a more convenient, almost as good, and less costly form of virtual sex for in-person sex could become the social norm.

Admittedly, this presumes that there is an important moral difference between in-person sex and virtual sex. Some people might dispute this, arguing that the potential costs are equivalent: one can also be harmed by unwanted virtual sex, and one can be morally chastised for perpetrating virtual sexual assault. (Indeed, I have argued for something like this view in several academic papers over the past decade.) But even I would concede that there are some differences between the two kinds of sex that can reduce the perceived moral costs of virtual sex, such as the increased physical distance between participants and the greater flexibility when withdrawing from unwanted or unpleasant contact. In addition, costs arising from healthcare risks and unwanted pregnancy are also reduced in the virtual environment.

This does not mean that in-person sex will disappear. There are strong emotional and biological reasons why people will still be drawn to it. It just means that the moral barriers to in-person sex will be raised and that it may become less frequent and less socially acceptable as a result.

Monday, January 8, 2024

What is Equality of Opportunity? A Framework for Analysis



Rosie the Riveter

Ensuring equal opportunities is a much-touted social goal. Governments often introduce policies and legislation aimed at eliminating forms of discrimination that prevent this from happening, and at providing assistance to those who need a leg up. But what actually is equality of opportunity? And is it really a laudable social goal?

In this article, I will answer these two questions. I will start by clarifying the nature of equality of opportunity, distinguishing it from equality of outcome, and identifying its three core elements. Second, I will assess a variety of arguments suggesting that equality of opportunity is not intrinsically good but is, rather, only instrumentally or derivatively good — not something to be pursued in itself but for the sake of something else. I will ultimately conclude that equality of opportunity is probably good in itself but it is one among many laudable social goods and can, in some circumstances, be traded off against other goods.

In presenting these thoughts, I will be drawing on the work of two thinkers in particular: Peter Westen and Richard Arneson.


1. Understanding Equality of Opportunity

Equality is about ensuring parity or equivalence between two or more parties. In political philosophy, equality of opportunity is usually explained by contrasting it with equality of outcome. The latter is about ensuring parity with respect to the division of social goods or services. The most obvious illustration is income equality or wealth equality. In theory, a society could aim for perfect income equality by ensuring that everyone gets paid the exact same, regardless of effort, ability, motivation or social contribution.

While aiming for equality of outcome is laudable in some contexts (e.g. access to healthcare treatments), it is not clear that it is laudable in general. Indeed, as we will see below, there are famous parodies of the idea that society should aim for perfect equality of outcome. Some difference in social outcomes, particularly with respect to income and reward, is usually thought to be desirable, or at least tolerable, insofar as it produces other beneficial outcomes (innovation, economic growth, social diversity, cultural enrichment, freedom of choice and so on).

This is where the idea of equality of opportunity comes into play. Instead of ensuring that everyone gets an equal share of social goods, proponents of equality of opportunity suggest that we should ensure that everyone gets an equal opportunity to access or compete for those social goods. An obvious illustration would be the competition for desirable jobs, such as being a doctor or medic. Nobody should be denied the opportunity to compete for such a job simply because they are female, black, or poor. All should be given an equal chance to prove themselves (prove their talents or merits). This may result in unequal social outcomes — the medics may earn more than the store clerks — but at least everyone had a fair chance to achieve those different outcomes.

This is quite a rough sketch of the idea of equality of opportunity. To my mind, one of the clearest conceptual analyses of it comes from Peter Westen. His 1985 article ‘The Concept of Equal Opportunity’ offers an insightful analysis of the structure of equal opportunity policies. The article also has the added bonus of providing a surprising conclusion regarding the coherence of the concept. Let me explain.

Westen argues that equality of opportunity policies have three key structural elements:


Covered Agents: Every policy ought to have a set of clearly defined agents, or classes of agents, to whom it applies, e.g. all the people in a given state, all the people over the age of 18 in a given state, and so on. These are the agents, or classes of agent, between whom equality of opportunity must be attained.
Target Goals/Outcomes: Every policy ought to be aimed at some clear target or goal. We don’t pursue equality of opportunity in the abstract. We pursue it with respect to certain desired outcomes (jobs, educational attainments, success in sport). It is the opportunity to pursue such outcomes that we are trying to equalise.
Obstacles to be Removed: Every policy ought to require the removal of some specific set of obstacles to attaining the desired outcome. These obstacles will typically apply differentially to the covered agents. For example, prejudice against women is an obstacle to women succeeding at job interviews. Laws that ban or punish such prejudice try to remove that obstacle and thereby ensure that men and women are on a more equal footing.

 

The obstacles to be removed by the policy are, in many ways, the most important and philosophically contentious aspect. As Westen points out, no equality of opportunity policy tries to guarantee that agents will achieve the desired outcome. If it did that, it would not be about equalising opportunities but about equalising outcomes. The distinction between the two concepts would erode. Instead, the goal must be to ensure that each agent has a reasonable chance of achieving the outcome.

But what counts as a reasonable chance? Making it possible for the agent to achieve the goal seems to demand too little: under the right conditions, nearly everything is possible. So it must be about raising the probability of their achieving the outcome, but by how much? Westen doesn’t offer any prescriptions in his article; that’s not what the article is about. He suggests that one obvious aim should be to remove obstacles that are fixed and beyond the agent’s control (e.g. no one should be disadvantaged due to age, gender or race), and that other obstacles often can and should be removed too. Beyond that, however, things get tricky. We will consider why a bit later on.

In summary, for Westen, equality of opportunity can be best defined/characterised in the following manner:


Equality of opportunity = removing obstacles to the achievement of some target goal for some set of agents so as to raise the probability of their achieving that goal (typically, though not necessarily, relative to some other set of agents) by some reasonable degree.

 

This doesn’t come directly from Westen, but is, rather, my extrapolation from his text. The bit in brackets might raise a few eyebrows. You might argue that the whole point of equality of opportunity is to raise the probability of one set of agents achieving a goal relative to some other agents. It’s about levelling the playing field and removing unfair advantages, after all. If you raised the probability for all agents, then this wouldn’t address the underlying problem.

I think this is generally correct: an equality of opportunity policy would, in the ordinary course of events, be about raising the probabilities of one set of agents relative to another (women vs men, for instance). But it’s not clear that this must, always and everywhere, be the case. Removing obstacles may not always be to the disadvantage of one group.

This brings me to one of the curious implications of Westen’s analysis, and one that he himself emphasises. Once you break equality of opportunity policies down into their three component parts, the language of equality becomes largely redundant. Why is this? Well, because removing some obstacles will almost never, in the real world, result in perfect equality between two sets of agents. Suppose you have two candidates going for the same job: Harry and Sally. Making it illegal to favour men over women in job interviews will not mean that Harry has the exact same chance of getting the job as Sally. Harry and Sally will differ in all manner of ways. Maybe Harry has more years of education; maybe Sally is more confident and loquacious. As Westen puts it:


People who have equal opportunity by one measure of opportunity will have unequal opportunities by other measures. No two people can have an equal opportunity to attain a specified goal by every measure of opportunity unless they are both guaranteed the result of attaining the goal if they so wish. 
(Westen 1985, 845)

 

In a sense, then, we don’t aim at equalising opportunities; we aim at giving specified agents the chance to achieve target goals without the hindrance of certain obstacles. In some contexts, you could think of it as giving individuals a ‘right’ to have that chance.


2. Is Equality of Opportunity Valuable In Itself?

Westen’s analysis is edifying and perhaps even sobering for advocates of equality of opportunity. It also points towards another perennially popular debate concerning the intrinsic vs instrumental value of equality of opportunity. Should we aim for equality of opportunity for its own sake or because it is a proxy for or gateway to other desirable social goods?

In a much-quoted short story, Kurt Vonnegut famously parodied the idea that equality was laudable in its own right. The story in question is Harrison Bergeron. It is set in the year 2081 and depicts a dystopian future in which the US achieves perfect equality between all citizens by ‘handicapping’ them (the language is archaic) so as to ensure no one has an unfair advantage. The famous opening paragraph captures the gist of it:


The year was 2081, and everybody was finally equal. They weren't only equal before God and the law. They were equal every which way. Nobody was smarter than anybody else. Nobody was better looking than anybody else. Nobody was stronger or quicker than anybody else. All this equality was due to the 211th, 212th, and 213th Amendments to the Constitution, and to the unceasing vigilance of agents of the United States Handicapper General.

 

The implication is that no one would want to live in a world of such perfect equality. It would be a horrendous affront to human flourishing. There is no point ‘levelling down’ to achieve equality: that would deprive us of too many other valuable things (freedom, creativity, diversity, innovation and so on). And it’s not just Vonnegut that makes this argument. Philosophers such as Michael Huemer and Harry Frankfurt have made essentially the same point, albeit in more sophisticated and analytical ways.

All such arguments against equality tend to adopt the same structure. They ask us to imagine a world (however farfetched) in which equality (of whatever type) is achieved but people are much worse off (by some metric, e.g. less freedom, less well-being). Surely we wouldn’t want to live in such a world? Conversely, they ask us to imagine a world in which equality is violated but everyone is much better off. Surely we would prefer that world to the one of perfect equality? Therefore, it must be the case that equality is not good in itself. It must only be good because it is an instrument towards or derivative from some other good. So, for example, we might pursue equality because we think it increases freedom and well-being, on average or in most cases, but it is really our desire for freedom and well-being that motivates our pursuit of equality. This fact is revealed in the extreme hypothetical case in which freedom/well-being and equality seem to clash.

The version of this argument that I have just sketched is not particularly sophisticated. Let’s consider a more sophisticated one, and one that is specifically targeted at equality of opportunity and not just equality in general. The version I have in mind comes from the writings of Richard Arneson. Arneson’s views on equality of opportunity are complex, but in his paper ‘Four Conceptions of Equal Opportunity’ he offers a range of Harrison-Bergeron style objections to theories of equal opportunity. I will focus on his objection to Rawls’s theory of ‘fair equality of opportunity’ (FEO).

To simplify, Rawls’s theory of justice holds that a just society must first provide for basic liberties of all people, then fair equality of opportunity, and then a particular form of distributive justice that maximises the provision of resources to the least well off. The latter is often the most-discussed and debated aspect of Rawls’s theory but the preceding conditions (basic liberties and FEO) take priority over it (in a lexical order). His theory of FEO argues for the removal of unfair advantages that people have as a result of social privilege or class. More precisely, it argues that in competing for valuable opportunities, the only obstacles that are tolerable are differences in natural ability/native talent and ambition. All other obstacles should be removed (provided this does not conflict with basic liberties).

There are a lot of problems with Rawls’s theory. What exactly is native talent? How can we assess it, apart from processes of enculturation or socialisation? Why should ambition be rewarded, per se? What if ambition is itself often honed by socialisation and social privilege? But even if we set these problems to the side, and accept the parameters of FEO, it is not clear that a society that violated FEO would be unjust or undesirable. Arneson asks us to imagine the following scenario:


Imagine that an egalitarian society channels extra resources into the education and socialisation of children of low-income parents, with special resources devoted to the subset of children from disadvantaged backgrounds who have subpar endowments of natural talent. These individuals, let us suppose, then have better prospects of competitive success than individuals from advantaged backgrounds with the same native talent endowments and same level of ambition.  
(Arneson 2018, F162)

 

Clearly this society violates FEO, but is it bad? Not obviously so. Indeed, some might argue that it is good insofar as it gives an advantage to the less advantaged. Also, as Arneson points out, compensating benefits could be paid to the members of the advantaged class who lose out in the competition of life. Of course, the same logic works in reverse, and Arneson sketches the opposite society too: one in which the already advantaged become more advantaged and compensate the less well off. Either way, he argues that what we have here is a situation in which FEO is violated but it is not clear that we should be too bothered by it.

The problem with this type of argument is that it doesn’t by itself prove that equality (of opportunity) lacks intrinsic value. Equality could be one of many, plural, goods that a society should seek to realise (Arneson is aware of this problem and discusses it). On some occasions, these values may clash or conflict. On those occasions, we will need to balance or trade-off one value against another. It could well be that, when push comes to shove, freedom or well-being counts for more than equality. If we have to choose between them, then we de-prioritise equality.

But if equality is one of many plural goods, it suggests two important caveats to the sceptical argument. First, just because we can imagine hypothetical situations in which these values conflict does not mean that such value conflicts are common. In many cases, freedom/well-being might go hand-in-hand with increased equality of opportunity. Indeed, there is a good argument for thinking that increased equality of opportunity tends to also increase freedom and well-being, since people are given more options and are more able to pursue those opportunities that best fit their desires and motivations. Second, in those cases in which freedom and well-being are held constant, equality can be an important tie-breaker when choosing between policies and outcomes. I discussed this previously when criticising some of Steven Pinker’s comments on equality. To quickly review the idea, imagine a society consisting of three individuals, A, B and C, with 100 utils of well-being to be shared among them. In one world, 50% of the well-being flows to A, while B and C get 25% each; in another world, all three get equal 1/3 shares (you can think of well-being as ‘wealth’ if that makes it easier). Given that the aggregate level of well-being is the same in both worlds, it seems plausible to suppose that we should favour world 2 over world 1, precisely because it is more equal.
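The tie-breaker intuition can be made concrete with a standard inequality measure. The short sketch below (my own illustration, not part of the original argument) computes the Gini coefficient for the two worlds: both distributions sum to the same 100 utils, but world 1 scores higher, i.e. is more unequal.

```python
def gini(shares):
    """Gini coefficient: 0 = perfect equality, values nearer 1 = more unequal.
    Computed as the mean absolute difference between all pairs of shares,
    normalised by twice the mean share."""
    n = len(shares)
    mean = sum(shares) / n
    total_diff = sum(abs(a - b) for a in shares for b in shares)
    return total_diff / (2 * n * n * mean)

world_1 = [50, 25, 25]   # A gets half; B and C get a quarter each
world_2 = [100 / 3] * 3  # equal 1/3 shares for A, B and C

print(gini(world_1))  # 1/6 ≈ 0.167 (more unequal)
print(gini(world_2))  # 0.0 (perfectly equal)
```

With aggregate well-being held fixed at 100, the lower Gini score is precisely what breaks the tie in favour of world 2 on the plural-goods view.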

In sum, there may be some reason to think that equality of opportunity is not an overriding good and should not be pursued at all costs. But that does not mean that it is not a good, or that it is not worth pursuing in many instances.