Thursday, March 23, 2023

103 - GPT: How worried should we be?


In this episode of the podcast, I chat to Olle Häggström. Olle is a professor of mathematical statistics at Chalmers University of Technology in Sweden. We talk about GPT and LLMs more generally. What are they? Are they intelligent? What risks do they pose or presage? Are we proceeding with the development of this technology in a reckless way? We try to answer all these questions, and more.

You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.


Thursday, March 16, 2023

Mill's Harm Principle: What is Harm and Does it Matter?




One of the most commonly discussed principles of moral and political philosophy -- if not the most commonly discussed -- is Mill's Harm Principle. Introduced in Chapter 1 of his famous polemical essay, On Liberty, the Harm Principle is set out in the following way (all quotes are from a Wordsworth classics edition -- page numbers may vary across editions):


The object of this Essay is to assert one very simple principle...That principle is, that the sole end for which mankind are warranted, individually or collectively, in interfering with the liberty of action of any of their number is self-protection. That the only purpose for which power can be rightfully exercised over any member of a civilised community, against his will, is to prevent harm to others. His own good, either physical or moral, is not a sufficient warrant. 
(On Liberty, p 13)

 

Thus stated, the Harm Principle seems like a robust, and in many ways attractive, statement of classical liberal or libertarian political philosophy. "Don't tread on me!", it seems to be saying. Governments must not be overly zealous or overreaching in their moralistic and paternalistic interventions into individual behaviour. They must take a step back and allow us all space to experiment and grow as individuals.

Those familiar with Mill's essay, and the subsequent philosophical and legal debates it has provoked, will, however, know that the Harm Principle is anything but 'one very simple principle'. For starters, it is not clear that it is really 'one' principle. Mill asserts multiple versions of it in On Liberty and qualifies it in numerous ways. Furthermore, it is not obvious that it is 'simple' in its application. What exactly counts as 'harm'? Is 'harm' a necessary or sufficient condition for government interference?

It's not possible to do justice to all the nuances of the philosophical debate about the Harm Principle in what follows. Instead, I will take a more modest approach. I will try to highlight some complexities in Mill's formulation of the principle. I will then discuss some of the particular problems that arise in relation to the concept of harm and consider whether the inability to formulate a fully satisfying theory of harm undermines the credibility of the principle. In short, and following an argument put forward by Anna Folland, I will suggest that the Harm Principle is credible, despite the vagueness of the concept of 'harm'.


1. Mill's Formulation of the 'Very Simple Principle'

Several important questions arise from Mill's formulation of the Harm Principle. The first, and in some ways most crucial, is whether it states a necessary and/or sufficient condition for interference with individual liberty. The quoted passage above provides strong support for the idea that it states a necessary condition for interference. He talks about the 'sole end' and the 'only purpose' for which interference is warranted.

But does the principle also state a sufficient condition for interference? The initial formulation isn't clear on this point; however, Mill's subsequent discussions suggest that it does not. He analyses when, exactly, we are morally justified in interfering with harmful conduct. He thinks there are some kinds of harmful conduct (e.g. lying) that do not warrant interference. In those analyses he leans into his more general utilitarian philosophy, asking whether the benefits of interference outweigh potential costs. So, in other words, the Harm Principle, for Mill, works as an initial filter for interventionist policies. First, we ask whether we are intervening in conduct that is harmful to others. If the answer is 'yes' (and only if it is 'yes'), we ask additional questions about the costs and benefits of the proposed intervention.
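
Put more formally (this is my gloss, not Mill's notation): let H(x) mean 'conduct x harms others' and P(x) mean 'coercive interference with x is permissible'. Read as a necessary condition, the principle says that P(x) only if H(x). Read as a sufficient condition, it would say that P(x) whenever H(x). On the interpretation just given, Mill endorses only the first claim; whether interference with harmful conduct is actually permissible is settled by the further utilitarian cost-benefit test.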

This is a sensible view. If all conduct that was harmful to others justified external interference, the Harm Principle would, arguably, be overinclusive and would be anathema to many liberals and libertarians. We will explore this in more detail below when we look at issues that arise with the concept of 'harm'. But, sensible as it may be, this interpretation of the Harm Principle raises problems when considered in light of Mill's broader philosophy. Mill is not a rights theorist. He does not believe in absolute individual rights. As he himself puts it:


I forego any advantage which could be derived to my argument from the idea of abstract right, as a thing independent of utility. I regard utility as the ultimate appeal on all ethical questions; but it must be utility in the largest sense, grounded in the permanent interests of man...Those interests, I contend, authorise the subjection of individual spontaneity to external control, only in respect of those actions of each, which concern the interest of other people. 
(On Liberty, p 14)

 

What is he saying here? In essence, he is saying that the Harm Principle is justified by the principle of utility. A broad, non-interventionist stance serves the principle of utility -- the greatest happiness of the greatest number -- in the 'largest sense'. It could be that, in particular cases, utility is served by intervening in individual behaviour even when that behaviour is not harmful to others, but it is better to adopt a general rule against intervention since, over the long haul, this is more likely to promote the greatest happiness of the greatest number. As far as I am aware, Mill does not provide a more detailed utilitarian argument in defence of this blanket policy of non-intervention. It sounds plausible to me, and I would like to believe it, but it might be wrong.

A second question that arises is: 'To whom does the Harm Principle apply?' The obvious answer is 'the government'. The government is not justified in introducing policies that interfere with individual behaviour, unless that behaviour causes harm to others. And, certainly, most contemporary policy-related debates about the Harm Principle focus on government intervention. But Mill clearly intends the principle to apply more generally. One of the key ideas in On Liberty is that there can be a 'tyranny of the majority' and that minority views and minority lifestyles need to be protected against interference by busybody majorities. So, for Mill, interference by our social peers and social majorities is just as much a problem as government interference. Indeed, so much so that one of the duties of an effective government is to provide for the liberty of minorities.

A third question that arises is: what counts as a liberty-undermining interference? Mill gives some guidance on this. Right after the most-quoted passage stating the 'very simple principle', he says:


[Man] cannot be rightfully compelled to do or forbear because it will be better for him to do so, because it will make him happier or because, in the opinions of others, to do so would be wise or even right. These are good reasons for remonstrating with him, or reasoning with him or persuading him, or entreating him, but not for compelling him, or visiting him with any evil in case he do otherwise. 
(On Liberty, p 13)

 

From this, it seems that liberty-undermining interference is limited to coercive interference. This would include threats of punishment or imprisonment for failing to do something, but also, perhaps, other threats of harm. Remonstrating with people or trying to persuade them through reasoned debate is fine, even when this concerns conduct that does not harm others. This might be interference but it is not liberty-undermining interference. This concession is interesting insofar as it permits some busybody interference by majorities in the form of public education campaigns. So, for example, a government education campaign intended to discourage people from smoking or consuming alcohol would, on Mill's reasoning, be acceptable so long as it doesn't amount to coercion.

A more complex edge case of interference would be that of 'nudging'. This is an idea that comes from the work of Cass Sunstein and Richard Thaler. In essence, it involves the use of techniques that bypass rational cognitive faculties in order to push (nudge) people toward decisions that serve their own welfare (or that of the general population). For example, placing healthy snacks in someone's eyeline nudges them to favour healthy snacks over unhealthy ones. This doesn't involve reasoned dialogue but it doesn't involve coercive interference either. Where this stands within Mill's framework is open to dispute and there is, of course, a very extensive and detailed debate about it in the philosophical and legal literature.

A fourth question that arises is: does the principle apply to all people, irrespective of age or status? Mill is clear on this point:


It is perhaps, hardly necessary to say that this doctrine is meant to apply only to human beings in the maturity of their faculties. We are not speaking of children, or of young persons below the age which the law may fix as that of manhood or womanhood. 
(On Liberty, p 13)

 

This is a reasonable, mainstream position. Of course, the age fixed by the law is always going to be somewhat arbitrary. If the capacity to make decisions for oneself is largely a function of cognitive ability and emotional intelligence (etc) then there will, undoubtedly, be people officially designated 'children' by the law who ought to be treated as equivalent to adults, and vice versa. We may be able to overcome this problem by applying functional capacity tests -- i.e. assess whether people have the capacity to make decisions for themselves in certain domains -- but these can be difficult to employ in practice. A bright-line cutoff between youth and maturity, for all its arbitrariness, is often the simplest approach.

What is probably less well known is the fact that Mill thought the maturity restriction could apply to entire civilisations as well as individuals:


For the same reason, we may leave out of consideration those backward states of society in which the race itself may be considered in its nonage...Despotism is a legitimate mode of government in dealing with barbarians, provided the end be their improvement, and the means be justified by actually effecting that end...there is nothing for them but implicit obedience to an Akbar or a Charlemagne, if they are so fortunate as to find one. 
(On Liberty, p 13-14)

 

This will, no doubt, reek of racism and cultural prejudice to the modern reader. Mill is suggesting that entire peoples can be so backward and immature that they need the helping hand of a benevolent dictator. That said, I think there may be something to the point he is making. Hobbes, after all, makes a similar argument, namely, that liberty is, in some sense, a luxury for societies that have achieved a level of prosperity and security. I agree with that, to a point, and Mill himself alludes to a constant tension between security and liberty in his writings. What I don't agree with is the suggestion that despotism is the route to achieving the level of prosperity and security needed for discussions of liberty to become salient. Indeed, my reading of social history is that openness and freedom, combined with strong and stable government, are often drivers of prosperity, not luxuries to be enjoyed after prosperity is obtained.

Mill places some other, miscellaneous constraints on the Harm Principle. For instance, at one point he suggests that people might be legitimately compelled to do things that benefit other people:


There are many positive acts for the benefit of others, which [a person] may rightfully be compelled to perform; such as, to give evidence in a court of justice; to bear his share in the common defence, or in any other joint work necessary to the interest of society of which he enjoys the protection... 
(On Liberty, p 14)

  

There is a certain common sense to these examples, and implicit in some of them is the tension between security and liberty to which I just alluded. Nevertheless, I find it hard to reconcile the general claim being made with a strict interpretation of the Harm Principle. Why can I be legitimately forced to do things to benefit other people if my failing to do so does not harm them? The answer to this might lie in what exactly we mean by 'harm' in the context of the Harm Principle. It is to this thorny topic that I now turn.


2. What is harm? Comparative Accounts

One thing Mill does not do is provide us with any general theory of harm. This might seem odd given that harm is the central concept in the Harm Principle. But perhaps it is good to avoid overly abstract thinking about harm. There are paradigmatic cases of harm to others, there are paradigmatic cases of harmless conduct, and there are borderline or difficult cases. So, for example, if my hobby is going around smashing people's fingers with a hammer, then clearly my conduct is harmful to others and I can justifiably be stopped. If I like to stick needles in my own fingers, in the privacy of my own home, then clearly my conduct is harmful to no one other than myself and I should not be stopped. If I play loud music, late into the evenings, and this upsets my neighbours, then this is a more borderline case. We can have an argument about whether it really counts as harm, and whether it can be justifiably stopped.

Many philosophers don't like having fuzzy or contestable concepts at the heart of our ethical theories. They seek theoretical and abstract purity. They want a general theory or account of harm that tells us whether or not specific conduct falls foul of the Harm Principle. You can understand why. A general, abstract theory of harm could be used to assess novel or controversial cases of harm and give us a clear answer as to whether it can be justifiably interfered with or not. No surprise then that many philosophers have attempted to provide a general theory of harm that we can plug into Mill's principle.

Some, however, have argued that no general and satisfying theory of harm exists. Anna Folland, in her article 'Mill's Harm Principle and the Nature of Harm', examines these critiques in some detail. The critics all follow the same strategy. They introduce some general theory of harm -- oftentimes one that has intuitive appeal or support from other debates -- and argue that it is under- or over-inclusive in some crucial respect. In other words, they argue that the theory would identify conduct as harmful that most people agree should not count as harmful or, vice versa, would fail to identify cases as harmful that should count as harmful.

Let's consider three examples. The first is the Temporal Comparative account of harm (TCA) (definitions taken from Folland's article):


TCA: An event e harms a subject s if, and only if, e makes s worse off (in terms of well-being) after e than s was prior to e.

 

Assume I was healthy and well-functioning yesterday. Today, a man in the street punched me in the face and gave me a black eye. Clearly, I am worse off now, after the punch in the face, than I was yesterday, before the punch. The punch has harmed me.

Sounds sensible, right? Not so fast. Critics argue that the TCA doesn't deal appropriately with some cases. Suppose that your child has contracted a virus and has a terrible sore throat. You could, if you so decided, eliminate their pain by giving them some medicine, which you have freely to hand. On the TCA, you do not harm them by failing to give them the medicine. But, arguably, the failure to intervene in this case is a kind of harm. Similarly, imagine a case in which a child suffers from some debilitating illness because her mother smoked throughout her pregnancy and both parents have smoked, in her presence, from the moment she was born. Their actions clearly harm her, but they don't make her worse off than she previously was since the harmful actions coincided with her conception and birth.

Counterexamples like this lead people to reject the TCA and favour alternative theories. One such alternative is the 'baseline from Mankind' comparative account (MCA):


MCA: An event e harms a subject s if, and only if, e makes s worse off (in terms of well-being) than the normal well-being level of mankind.

 

This avoids the counterexamples to TCA, but faces some counterexamples of its own. For instance, if someone is extremely wealthy, relative to the average person, then stealing their money so as to bring them down to the average level of wealth would not count as harm, on the MCA. This is counterintuitive. Similarly, if someone is extremely attractive (admittedly more a subjective quality), then disfiguring them so as to make them more averagely attractive would not count as harm. This is also counterintuitive. This makes the MCA a non-runner.

This leads to one final theory of harm, the Counterfactual Comparative account (CCA):


CCA: An event e harms a subject s if, and only if, s would have been better off (in terms of welfare over her lifetime) in the absence of e.

 

This is a very popular theory of harm. It pops up in numerous applied ethical debates. For example, I have seen people propose it as the theory that explains why death is harmful. It also avoids the pitfalls associated with the two previous theories. But it faces problems of its own.

The main one is that if harm is defined by comparison to what would have happened in some counterfactual possible world, then we have to pick an appropriate reference class of possible worlds. But, depending on how we select the reference class, conduct can be deemed harmful or non-harmful in somewhat arbitrary ways. For instance, if I, as a parent, fail to provide private music lessons to my child, am I harming them? Intuitively, most people would probably say no. But perhaps there is a counterfactual possible world in which they do much better, in terms of welfare, if they receive the private music lessons. So my failure to pay for them is a harm if I compare this world with that world. In short, weird things start to happen when you compare this world with counterfactual ones. If you choose counterfactual worlds in which people do much better than they do in the actual world, then they are harmed by virtually everything that is happening to them in the actual world. Conversely, if you choose counterfactual worlds in which people do much worse than they do in the actual world, they are harmed by virtually nothing.
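
It may help to see the three accounts side by side. Here is one way of formalising them (the notation is mine, not Folland's), where WB(s, t) is s's level of well-being at a time and WB-life(s, w) is s's lifetime welfare in a world w:

TCA: e harms s iff WB(s, after e) < WB(s, before e)
MCA: e harms s iff WB(s, after e) < the normal well-being level of mankind
CCA: e harms s iff WB-life(s, the actual world) < WB-life(s, the nearest world without e)

The biconditional form is the same in each case; all that varies is the baseline of comparison -- the subject's own past, the human average, or a counterfactual world -- and, as we have seen, each choice of baseline generates its own counterexamples.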

There are some other theories of harm, but these three are a representative sample of the ones debated by philosophers and each fails to provide a fully satisfying underpinning for the Harm Principle. What do we conclude from this? Critics conclude that the Harm Principle must fail. Folland, in her analysis, is not so quick to draw this inference. As she points out, critics are adopting something like this argument:


  • (1) In order for the Harm Principle to be acceptable, it must be grounded in some fully satisfying theory of harm.
  • (2) The only plausible candidate theories of harm are the TCA, MCA or CCA (...etc)
  • (3) None of the TCA, MCA or CCA (etc) provides a fully satisfying theory of harm.
  • (4) Therefore, the Harm Principle is not acceptable.

But there are a number of ways to reject this argument. One way would be to reject premise (2) and argue that there are other plausible candidate theories of harm or that, perhaps, some mishmash of theories could be fully satisfying (it's not either/or). Another way would be to reject premise (1) and argue that the fate of the Harm Principle does not rest on articulating a fully satisfying theory of harm. Folland suggests this might be the right response to critics. She does so on the grounds that harm features in many ethical debates and principles, and yet we rarely require those who deploy the concept to articulate a fully satisfying theory of it. It is prima facie plausible that harm is a meaningful ethical concept that delineates between acceptable and non-acceptable conduct. That we have some problems providing a fully satisfying theory of it does not undermine its use in the Harm Principle.

I think this is correct. I have long been uneasy with the argumentative standards employed by moral (and other) philosophers, which seem to suggest that we should reject normative principles if they fail to provide intuitively satisfying results across all possible worlds. I am not sure any principle could satisfy such a demand.


3. What is harm? Miscellaneous Problems

There are other problems with the concept of harm. One concerns the distinction between harm to self and harm to others. This is central to Mill's principle. He thinks harm to self cannot provide a justifiable basis for coercive interference. He thinks harm to others can. Some argue, however, that this formulation is not quite right. The crucial distinction is not between conduct that is harmful to self versus harmful to others. The crucial distinction is between conduct that is consented to versus unconsented to. If I harm you, but you consent to being harmed, then it would be wrong to coercively interfere with that choice. Although Mill avoids this formulation, it is consistent with his other views, such as his claim that freedom of association is one of the fundamental forms of personal liberty:


from this liberty of each individual, follows the liberty, within the same limits, of combination among individuals. 
(On Liberty, p 15)

 

A related, and perhaps more serious, problem concerns the causal relationship between harm to self and harm to others. Sometimes conduct that is, primarily, harmful to the self is also, indirectly, harmful to others. If I want to drink myself into oblivion, you might argue that this is my right, as a free person. But, of course, my decision to do so might harm my family. It might deprive my children of care and resources they need to survive. So can we coercively interfere with the decision to drink?

Mill has a long discussion of this issue in On Liberty, focusing on the arguments of temperance activists and prohibitionists (looking to ban the sale of alcohol). He rejects an outright ban, arguing that drinking is a private pleasure within the sphere of personal liberty. But, he accepts that if drinking becomes harmful to others, coercive interference may be justified. He also argues that those occupying certain social roles -- e.g. police officers, surgeons -- may be justifiably banned from drinking, at least while on duty.

Another problem concerns the gravity of harm needed to justify coercive interference. Some people are unwilling to accept that any and all harms meet the threshold needed to justify coercive interference. Perhaps the most elaborate discussion of this issue can be found in Joel Feinberg's multi-volume work on the moral limits of the criminal law, which is, in effect, an extended discussion and application of Mill's principle. It's impossible to capture the nuances of Feinberg's position in a short summary, but the gist of it is that he distinguishes harm from hurts and offensive conduct. A harm, for Feinberg, is something that sets back your life interests. So it is a reasonably serious kind of injury to your ongoing life plans and personal welfare. A hurt is a more trivial and temporary kind of injury, e.g. a graze upon your knee. Offensiveness is mental displeasure or distress caused by the conduct of others. For Feinberg, harms justify coercive interference, but hurts and offensiveness do not. But Feinberg ties himself up in knots about this, eventually conceding that some kinds of offensiveness, if they are sufficiently persistent and serious, might justify coercive interference, partly on the grounds that they end up being harmful if they are persistent and serious.

Others counter-argue that Feinberg's attempt to distinguish between different levels of harm is unnecessary. For instance, Turner argues that the Harm Principle ought to rest on an expansive definition of harm. Virtually all harm to others, no matter how trivial, raises a prima facie case for coercive interference. Whether coercive interference is then justified depends on whether it passes the 'greatest happiness' test: i.e. do the benefits of the coercive interference outweigh its costs? Since all coercive interference is itself harmful, this is a difficult threshold to cross in the case of relatively trivial harms. Less intrusive and coercive policies will almost always be preferable.


4. Conclusion

The foregoing is a summary of some of the key challenges facing the Harm Principle. Mill's claim that it is a 'very simple principle' is both true and misleading. It is true insofar as it is easy to state the principle, and it is intuitively appealing. Nevertheless, it is misleading insofar as its practical application raises a number of complexities. Still, the mere fact that it can be contentious and that there can be difficult edge cases is not, in itself, a reason to reject the principle. One can see why it remains popular to this day.


Wednesday, February 8, 2023

Uncoupling Cost and Benefit: How Technology Transforms Morality



How can technologies transform our moral beliefs and practices? One suggestion, made popular by a famous case study on technology and moral change, is that they do so by uncoupling certain costs and benefits, thereby altering how we perceive and prioritise values.

But how, exactly, does this happen? Can the opposite happen -- can technologies 'couple' or bundle together certain costs and benefits -- to the same effect? And can we use this idea of coupling-uncoupling to anticipate potential future technological moral transformations? These are the questions I want to consider in the remainder of this article.


1. Contraception and Uncoupling

The famous case study I alluded to in the introduction concerns the impact of contraception on sexual morality. I have discussed this case study in depth before and explained some of the supporting evidence. I don't wish to rehash it here. I just want to focus on what the case study tells us about the phenomenon of uncoupling.

In brief, the idea is that effective contraception uncoupled sexual intimacy/gratification from reproduction. In the past (roughly pre-1900) if you had (heterosexual)* sexual intercourse with another person, this carried a significant risk of unwanted pregnancy. This was true even if you used the forms of contraception available at that time. This meant that sexual intimacy was almost always coupled together with reproduction. It wasn't possible to pursue (heterosexual) sexual intimacy without also being forced to pursue the possibility of reproduction. This made extra-marital or premarital sex a risky endeavour, particularly for women (since they bore the main costs of reproduction), and consequently relatively few willingly engaged in it. Associated with this, there were very strong norms of sexual purity and chastity (primarily for women), and a corresponding condemnation of sexual looseness or liberty.

This changed, dramatically, when effective forms of contraception, particularly forms of contraception that women could control, became widely available. The contraceptive pill is the most famous example. These forms of contraception decoupled sexual intimacy from reproduction (to a high degree of probability and safety) and thus enabled people to access the value of sexual intimacy without necessarily being forced to pursue the value of reproduction. The social effects of this have been quite dramatic. Not only is premarital sex now normalised, but most people now ignore social or institutional norms that still enforce or favour sexual purity or chastity. A more liberal approach to sexuality has, as a result, taken root in many societies.

A few comments about this case study. First, maybe it is not correct to say that effective contraception 'uncoupled' sex from reproduction. All forms of contraception are subject to failure. The possibility of unwanted pregnancy is not completely eliminated. So perhaps it would be more correct to say that it significantly reduced the potential cost (unwanted pregnancy). Still, there is something to be said for sticking with the word 'uncoupled'. For most people, the perceived risk of unwanted pregnancy when using an effective form of contraception is so low that it doesn't feature much in their decision-making.

Second, to say that sexual intimacy has been decoupled from the risk of unwanted pregnancy is not to say that it is decoupled from all other risks. Depending on the form of contraception used, and the type of sexual practice, the risk of sexually transmitted infection could be quite high and this could, in turn, alter sexual beliefs and practices (there is an interesting story to be told about the history of HIV, both from initial panic -- perhaps 'moral' panic -- through to the invention of effective forms of treatment. I won't tell this story here though).

Third, it is of course true that, for many people, the link between sexual intimacy and reproduction is an important one, and many people want to pursue both at the same time. The critical impact of contraception, however, is that it gave people the choice of pursuing these things independently if they so wish (again, there is another interesting story to be told about the rise of assisted reproduction and fertility treatments that also serve to decouple sexual intimacy from reproduction. I won't tell that story here either).

Fourth, and finally for now, just because contraception changed how many people think about the value of sexual intimacy, it does not follow that old norms of sexual purity and chastity have been eliminated. They still linger. There are still double standards when it comes to social judgment of sexual liberty -- men, typically, being free from punishment and shame; women, typically, being subject to both. Nevertheless, social attitude surveys suggest that there has been a big shift in sexual mores over the course of the 20th and 21st centuries. That said, there is recent evidence to suggest that younger generations (Gen Z etc) are less sexually liberal and less promiscuous than mid-to-late 20th century generations. There is survey evidence to support this regarding age of first sexual experience and number of sexual partners. For what it is worth, I don't think this new trend (yet) represents a recrudescence of sexual conservatism. I suspect it may be overstated, and driven by other factors such as delayed adulthood, increased atomisation, social anxiety and the rise of a highly risk-averse culture. I also suspect that some elements of this new trend represent a further compounding of the ethic of sexual liberty -- people should be free to not have sex, or to identify as asexual, if they so choose, and should not be under any social pressure to have sex simply because that's what everyone else does.

These final comments are half-baked thoughts that require further research and development. They are also tangential to my main focus in this article. If we accept the contraception case study at face value, then we need to take seriously the idea that technology can transform our social moral beliefs and practices by uncoupling certain values and risks. How common is this phenomenon? Can the opposite happen? Let's consider these questions now.


2. Failed Uncoupling: The Case of Opioid Addiction

Contraception is a case study in effective uncoupling: technology promised to reduce the risk of unwanted pregnancy and it really did do so. Sometimes technology fails to deliver on its promise to uncouple. What are the potential effects of such failure? I'm sure there are many examples of this, but one that springs to mind is the repeated attempt to create opioid-based painkillers that give us the benefits of pain relief without the associated costs of addiction and other related negative effects.

It has long been known that opioids are highly effective painkillers. It has also long been known that they are highly addictive and that this addiction has negative impacts on individuals, their families and society at large. So, unfortunately, it seems that nature has bound both value and risk together in opioids: taking them can reduce pain and suffering (undoubtedly a good thing) but it can also create addiction, crime and other social costs.

As a result of this, there have been repeated attempts by drug companies to create safer forms of opioid that 'decouple' the positive effects from the negative ones. Heroin, for example, was created by the German company Bayer (in the late 1800s) and marketed as a safer alternative to morphine, on exactly these grounds. This is how Patrick Radden Keefe describes it in his book on the opioid epidemic in the US:


...The German company Bayer began to mass market [heroin] as a wonder drug -- a safer alternative to morphine...Bayer proceeded to sell the drug in little boxes with a lion printed on the label, and suggested that differences in the molecular structure of heroin meant that it did not possess the dangerous addictive qualities of morphine. It was an appealing proposition: throughout human history, opium's upsides and its downsides had appeared to be inextricable, like the twined strands of a double helix. But now, Bayer claimed, they had been decoupled, by science, and with heroin, humans could enjoy all the therapeutic benefits of the opium poppy, with none of the drawbacks. 
(Keefe, 2021, 186)

 

Of course this turned out not to be true. Heroin was, in fact, much more powerful than morphine and just as addictive. The most recent episode in this saga has been particularly tragic and well-documented. In 1996, the company Purdue Pharma released a new opioid-based wonder drug onto the market: OxyContin. Based on a preparation of oxycodone (also more powerful than morphine), OxyContin promised to uncouple opioids from their addictive effects. How so? OxyContin used a slow-release mechanism (a special coating on the pill) to ensure that people taking it didn't get a big 'hit' or 'high' from the drug when they initially swallowed it. Instead, the drug would be gradually released into their bloodstream over the course of 12 hours. This would, according to Purdue, minimise its potential for addiction and abuse.

Purdue aggressively marketed the drug. I won't get into all the details here -- they are well documented, for example in this article -- but suffice to say this marketing campaign was based on some suspect and misleading claims about the benefits of the drug and the low risk of addiction. They also encouraged its use for non-acute, long-term pain management as opposed to short-term acute pain management. The net effects of this have been problematic, to say the least. It turns out that OxyContin did not achieve the longed-for decoupling. Its slow-release formula was far from perfect and could easily be bypassed by addicts and abusers. There was, as a result, a huge increase in opioid addiction and opioid-related deaths (approximately 500,000 Americans have died as a result of opioid abuse since 1996), a lucrative black market trade in OxyContin and other prescription opioids, and an increase in crime and related issues in particular regions. Whether OxyContin is solely responsible for this is, of course, unclear. Other factors played a role, such as workplace-related injuries, unemployment and economic decline, and the availability of other similar drugs. But OxyContin was, undeniably, a significant part of the picture.

What were the moral effects of this failed uncoupling? This is tricky. Many philosophers and policy-makers think it is a mistake to moralise drug addiction and its consequences. We should not, in other words, view the decision to take a drug or to fall into addiction as a moral failing or moral vice. To do so presumes a freedom of choice that may be absent, and it may be counterproductive to the goal of reducing the harms of addiction. Furthermore, our restrictive drug laws often create additional or unnecessary moral problems associated with addiction. For instance, criminal wrongdoing associated with black market trading in prescription drugs (theft, violence, gang wars) may largely be the result of insufficiently liberal drug laws, not the drugs themselves. I am very sympathetic to this view and broadly in favour of drug decriminalisation. However, even if it is correct, the reality is that people often do moralise the choices of addicts and subject them to moral criticism. Likewise, even if addicts lack full moral autonomy, those associated with the production, prescription and trade of addictive opioids do not. Their choices and decisions can be moralised and subjected to moral criticism, if their consequences are individually and socially harmful.

So what has happened with the failed promises of OxyContin? This is purely speculative, but it seems that we have two waves of change. First, there is the euphoria associated with the promise of the drug. The pharmaceutical companies push the idea that we can now access the benefits of opioids without the risks. This leads to some initial softening of professional and social attitudes toward them. Doctors become more willing to prescribe them for a larger range of conditions, and become less worried about the potential harms of doing so. Similarly, the social taboo or shame associated with using the drug starts to dissipate.

Second, once the drug fails to live up to its promise, we get a retrenchment of attitudes. The drug companies become social pariahs and they are widely condemned for their false promises. In the case of Purdue Pharma, the family that owned the company (the Sacklers) have been 'cancelled' in many settings. They were major philanthropists and patrons of the arts. They have seen their names removed from buildings and exhibitions. People don't want to be associated with their morally tainted reputation. Prescribing the drug is no longer so liberally permitted. Doctors have to take greater responsibility for the decision to do so. Taboo and shame recrudesce around those who use the drugs. There is the sense that you lack moral rectitude or courage if you take the drugs; there is a virtue in resisting their temptations. There is a scene in the TV adaptation of Dopesick (a book about the OxyContin epidemic) that revels in this moralisation. One of the lawyers in the case against Purdue undergoes treatment for cancer. After surgery, in intense pain, he is offered some OxyContin. His gaze sharpens and he asks for a less effective drug instead. It's clear that the makers of the show view the decision as a courageous one. Commentators have, however, criticised it as unnecessarily shaming those who choose to take opioids. Perhaps they are right, but the point remains: the choice to take the drugs has become moralised. You exhibit moral courage and virtue if you do not. In fact, given what we now know about the negative effects of the opioid epidemic in the US, the choice to take opioids may be an even more morally contested decision than it once was.

As I say, these thoughts are speculative, but they give a sense of what might happen to social moral beliefs and practices in the aftermath of a failed uncoupling.


3. Apparently Necessary Coupling: The Case of Privacy

Technology doesn't always uncouple values from their costs. Sometimes, perhaps even oftentimes, technologies have unwelcome side effects that bundle together values and costs. Sometimes these effects don't become apparent until long after the technology has been adopted. As a result, the decision to use the technology forces us to confront a coupling together of values and costs that may not have been present before. A new value tradeoff comes into existence or becomes more acute and pressing.

There are many examples of this, but one that springs to mind is the coupling together of digital convenience and surveillance. As we all know, digital technologies have numerous conveniences. This is particularly true of software and services delivered via the internet or with the assistance of machine learning/AI. Most of us now spend inordinate amounts of time communicating via social media platforms, consuming digitally-mediated entertainment and news, and working with digitally-mediated tools. The problem is that these technologies, almost by their very nature, collect information about us. Every keystroke is recorded; every digital transaction is timestamped and placed in a memory bank. It's possible to destroy some of this information, but not all of it. Most of it is available somewhere, if you are willing and able to look for it. As a result, accessing digitally convenient services is coupled with a significant cost to individual privacy.

Early in the era of digital technologies, the scale and significance of this digital surveillance was not obvious. Many people did things online (wrote ill-judged emails; shared ill-judged photos) that they didn't expect to come back to bite them 20-30 years later. Many people participated in online forums without knowing that governments and corporations were silently collecting everything they did with a view to mining it for useful insights. Now, we all know this and the privacy-related cost of digital convenience is undeniable.

What is the moral effect of this? Tech enthusiasts and futurists have long supposed (or advocated for) the idea that 'privacy is dead' or, at least, dying. When faced with the choice between digital convenience and privacy, people seem to overwhelmingly choose convenience. As economists might put it, whatever we might say, our revealed preference is for convenience, not privacy. There is, however, a significant backlash against this idea. Privacy advocates claim that the value of privacy is even more salient and obvious in the digital era; that we are too quick to give up privacy in favour of convenience (is social media really all that great? Are AI services all they are cracked up to be?); and that, to some extent, the choice between digital convenience and privacy is a false dilemma: we can have digital services that are less intrusive and surveillant. Significant legislative muscle has been brought to bear on this problem, particularly in the EU in the form of the GDPR. This puts more safeguards in place to prevent unwarranted collection of data and gives individuals rights over their personal data.

My own view, for what it is worth, is that these legislative efforts are valiant, and it is indeed correct that we shouldn't be so quick to give up privacy for digital convenience. Nevertheless, when it all washes out, I suspect the 'privacy is dead/dying' idea will be closer to the truth. As long as we keep using digital services (social media, internet-based communications, AI-based software) they will keep collecting and mining personal data. This means we will always forgo some of our privacy. Since we show no obvious signs of turning away from such services, it seems likely that privacy will continue to ebb away.

I could be wrong about this. There may be a major backlash and withdrawal from the use of digital technologies. There may also be some satisfactory technical solution to the privacy dilemma -- there are some already of course -- but I'm not sure these will be widely distributed or used.


4. Future Uncouplings and Couplings

Now let's turn to the future. If coupling and uncoupling can have the moral effects suggested in this article, then we might be able to use informed speculation about the future direction of technological innovation to anticipate future moral changes. Any such speculation must be taken with a heavy dose of epistemic humility, but let's try to have some fun with it.

Are there any obvious uncouplings on the technological horizon? One potential example is the uncoupling of meat consumption from factory farming and animal slaughter (for what it is worth, I spoke about this example with Jeroen Hopster in one of my podcast episodes). At present, if you wish to have a meat-based diet, you must do so in full awareness that this requires the slaughter of animals. And while some people consume meat that is relatively** humanely slaughtered, most people consume it from large factory farms with dubious animal welfare standards. In short, for most people, the benefits of meat consumption (gustatory pleasure; dietary need) come with the not insignificant cost of harm to animals, plus some other negative downstream costs (e.g. creating ripe conditions for zoonotic viral transmission). Most people seem happy to accept those costs; but some people (perhaps an increasing number) find them unbearable. [On a purely anecdotal level, I have now found that, among philosophers and ethicists at least, meat consumption is frowned upon. Few people will admit to it openly and many conferences and events operate with a vegetarian/vegan default. I fully accept that this is not representative of the general population.]

So, at present, we have the coupling of a value (meat consumption) with a cost (animal suffering). Developments in the field of artificial meat production may uncouple the value from the cost: we could produce meat without requiring significant and ongoing animal suffering. This may have two interesting moral effects. The first is that it may reduce or eliminate the moral pressure to be a vegan/vegetarian. This does not mean that veganism would be eliminated. There could be other benefits to a vegan diet that are unaffected by technological advances. The other potential effect is that it increases the moral 'taint' or 'sin' of traditional meat consumption: anyone who chooses to consume meat from a slaughtered animal, when there is another lower-cost alternative, will face a larger burden of justification. This could have all sorts of interesting second and third order effects.

I won't go into other possibilities in as much detail, but here are a few other uncouplings that are worth pondering:


Virtual reality and human contact: VR may give us many of the benefits of in-person contact without the associated costs. In many instances, I suspect people will prefer in-person contact (at least for now) but in some cases this may not be true. For example, VR could give you much of the emotional excitement of in-person contact sports (boxing, football etc) without needing to run the risk of serious injury. Some people like to run the risk of genuine physical harm -- that may be part of the pleasure for them -- but for many people these risks are not worth it and they may find the VR alternative compelling. This could lead to a moralisation of some forms of in-person contact.
Automation and human risk/error: Many advances in automating technologies are sold to us on the basis that they uncouple the benefits of intelligence from the associated risks of human error. These benefits are often oversold, but in some cases they might be compelling and could affect moral beliefs and practices. For instance, if (and it remains an 'if') automated driving is safer than human driving, then, as Nyholm and Smids have argued, human driving may become the morally inferior option and require greater moral justification.
Love/Sex robots and the complications of intimacy: Intimate human relationships come with all sorts of benefits: physical and emotional pleasure, mutuality, shared resources, support, comfort and so on. They also come with significant costs: betrayal, jealousy, anger, emotional and physical burdens, anxiety and so on. Some developers of love/sex robots have argued that the technology could uncouple the benefits of intimate relationships from their costs. Such claims are often overly simplistic: some benefits of intimate relationships are inextricable from their costs. But I think Henrik Skaug Sætra is right to suggest that the technology could lead some people to prefer a different kind of intimacy ("deficient", to use his word) to traditional human intimacy. Again, this could become a moralised choice.

 

Those are some possibilities. There are many more. I would encourage readers to consider them for themselves and to consider whether the 'uncoupling/coupling' idea is a useful way of thinking about technology and moral change.


*And, of course, non-heterosexual forms of sexual intimacy carried significant legal risks.

** There is a debate to be had about whether there can be 'humane' animal slaughter. I think there probably can be, but I won't get into that debate here.

Friday, December 16, 2022

102 - Fictional Dualism and Social Robots



How should we conceive of social robots? Some sceptics think they are little more than tools and should be treated as such. Some are more bullish on their potential to attain full moral status. Is there some middle ground? In this episode, I talk to Paula Sweeney about this possibility. Paula defends a position she calls 'fictional dualism' about social robots. This allows us to relate to social robots in creative, human-like ways, without necessarily ascribing them moral status or rights. Paula is a philosopher based at the University of Aberdeen, Scotland. She has a background in the philosophy of language (which we talk about a bit) but has recently turned her attention to the applied ethics of technology. She is currently writing a book about social robots.

You can download the episode here, or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services.




Tuesday, November 29, 2022

Debating Meritocracy: Arguments For and Against




Note: This article is, essentially, a set of expanded notes from a class I taught about debating meritocracy.

In 1958, Michael Young — now better known as the father of the execrable Toby Young — published The Rise of the Meritocracy. Misunderstood in its own time, the book is a dystopian critique of a meritocratic society. It is set in the future. The year 2034 to be precise (still the future as I write). It is a retrospective history, told from that future, of how meritocracy took root in the UK and how it became a new class system, replacing the old one based on accident of birth. The gist of the critique seems to be that we might think meritocracy is justified and better than the old system (and in many ways it is) but there is a danger that it will end up creating a new, unequal social order.

I’ll be honest. I’ve never read Michael Young’s book. I only know of its contents second-hand. But I recently came across it, again, when reading Adrian Wooldridge’s book The Aristocracy of Talent. Wooldridge’s book is a full-throated defence of meritocracy. It is primarily a historical overview of how meritocratic systems came into popularity, but it also deals with contemporary critiques of meritocracy — particularly those from left-leaning authors like Michael Sandel — and concludes with an extended argument in favour of it.

As with all good books, Wooldridge’s provokes reflection. I don’t know where I stand on meritocracy. I can see its advantages, certainly when compared with historical practices of nepotism and patrimony (though, to be clear, neither of these practices is entirely ‘historical’). But I can also see some of its dangers, including the one highlighted by Young’s dystopia.

In the remainder of this article, I want to review some of the arguments for and against meritocracy. My goal is not to provide a definitive evaluation of those arguments. It is, rather, to clarify the terms of the debate. This should be of interest to anyone who wants to know what the typical arguments are. The analysis is inspired by my reading of Wooldridge, but I do not intend to offer an extended critique or commentary on Wooldridge’s book.

I start, as ever, by clarifying some of the concepts at stake in the debate.


1. Meritocracy and a Toy Model of Society

One of the easiest ways to think about equality and social justice is to create a toy model of society. The diagram below provides one such toy model. I’ve used it on previous occasions.

At the bottom, you have the members of society. They are people defined by a range of characteristics. These could include their talents and abilities (raw intelligence, virtues, physical prowess, emotional intelligence etc) as well as other social and biological traits (race, ethnicity, religious beliefs and so on). It is probably impossible to list the full set of defining characteristics. You can slice and dice people into lots of different categories, but you get the basic idea.

At the top of the diagram there are social outcomes. These are loosely defined to include jobs, educational status, income level, well-being, health and so on. Basically, any outcome variable in which you happen to be interested can be classified as a social outcome. Like personal characteristics, outcomes are not neat and discrete. Many outcomes overlap and intersect. Similarly, outcomes vary across a person’s lifetime. If you look at my income bracket right now, it’s a lot different from what it was like when I was in my twenties.

In the middle of the diagram there are gatekeepers. These are people or social institutions that control or influence the access to social outcomes. They could include educational institutions, doctors, judges, job interviewers and so on.



In an ideal social order, the system for allocating people to different social outcomes would be fully morally justified and non-arbitrary. Everyone would have equal opportunity to pursue their preferred social outcomes and they would not be denied access to those social outcomes for irrelevant reasons. The problem, of course, is that people disagree as to what is a morally justified system of social allocation. For example, many people believed, historically, that it was entirely appropriate to allocate on the basis of race and gender. Nowadays, we think this is inappropriate. Some people think that in order to correct for historically biased forms of social allocation we need to engage in reverse discrimination or affirmative action. This, somewhat paradoxically, means that we should pay attention to characteristics such as gender and race, at least temporarily, in order to achieve a more just system.

I am not going to be able to do justice to the complexity of these debates in this article. Suffice to say, there are many desiderata to balance when figuring out the ideal system of social allocation. It’s quite likely that it is impossible to balance them all to everyone’s satisfaction.

What I will say, for the purposes of evaluating meritocracy, is that we can distinguish between three general systems of allocation. As follows:


Meritocracy: Allocating people to social outcomes on the basis of merit (how good or well-attuned they are to succeed in that outcome). Markers of merit could include intelligence, creativity, physical prowess and so on.
Nepotism/Patrimony: Allocating people to social outcomes on the basis of family, connections or accidents of birth. Think of Donald Trump and how he gave his family members and friends cushy positions in his companies and in his presidential administration.
Representationalism: Allocating people to social outcomes on the basis that we need to achieve proportional representation of certain social groups in those outcome classes (e.g. x% women; y% ethnic minorities and so on)

 

I do not claim that these three systems are exhaustive of the possibilities. You could allocate to social outcomes in other ways, e.g. random allocation (lotteries). I also would not claim that these systems are mutually exclusive. Oftentimes particular social institutions will blend elements of each. For example, admissions to elite US universities often involve a mix of nepotism/patrimony (legacy admissions), meritocracy and representationalism.

Nepotism is probably the most common system of social allocation historically, and it remains a feature of most societies to this day. Even in societies that openly embrace or claim commitment to meritocracy, one can find pockets of nepotism. Representationalism is an odd one. I am not sure that anyone else uses the term or openly embraces it; nevertheless, I think many people nowadays advocate for a form of representationalism. Debates about quotas for female politicians or affirmative action policies in higher education, for example, often seem to presume or favour representationalism.

In any event, in what follows, I will be considering arguments for and against meritocracy that work, in part, by comparing it to these other two systems of social allocation.


2. Arguments for Meritocracy

There are four main arguments in favour of meritocracy. Most of these arguments are consequentialist in nature, i.e. they defend meritocracy on the basis that it produces or incentivises better outcomes for individuals and societies as a whole. It is, however, possible to defend meritocracy on intrinsic grounds and I will consider one such possibility below.

The first argument in favour of meritocracy is the ‘better societies’ argument:


A1 - Better Societies - More meritocratic societies score better on measures of economic growth, innovation and social well-being; less meritocratic societies tend to be more stagnant and have higher rates of emigration.

 

In other words, given certain measures of societal success — GDP, GNP, Human Development Index and so on — societies that are more meritocratic score better than less meritocratic ones. If we grant that these measures are, indeed, positive and something we would like to increase, we have reason to favour meritocracy. For what it is worth, Wooldridge, in his defence of meritocracy, makes much of this argument:


…a glance around the world suggests that meritocracy is the golden ticket to prosperity. Singapore, perhaps the world’s poster child of meritocracy, has transformed from an underdeveloped swamp into one of the world’s most prosperous countries…Scandinavian countries retain their positions at the top of the international league tables…in large part because they are committed to education, good government and…competition. …countries that have resisted meritocracy have either stagnated or hit their growth limits. Greece, a byword for nepotism and ‘clientelism’…has struggled for decades. Italy, the homeland of nepotismo…has been stagnating since the mid-1990s. The handful of countries that have succeeded in combining anti-meritocratic cultures with high standards of living are petro-states that are dependent on an accident of geography… 
(Wooldridge 2021, 368)

 

There is some merit (!) to this argument. If you look up countries such as Singapore or Sweden and see how they do on these measures of societal success, you will find that they do better than countries like Italy and Greece (check out the comparative charts from Our World in Data for more on this). That said, we have to be a little cautious when it comes to identifying ‘more’ and ‘less’ meritocratic societies. As this language of degree suggests, it is rare, certainly among European and developed nations, to find a society that is completely committed to nepotism and has no meritocratic elements. Most developed countries have educational systems with standardised merit-based exams, and while not all have competitive entry to university, many do, with more or less elite universities that allocate places based on merit. It is really a question of the balance between meritocratic and other forms of allocation. Furthermore, even in countries that claim to be committed to meritocratic social allocation — and Singapore is probably the best example of this — it is impossible to sustain the commitment across all social outcomes. Singapore, for instance, is primarily meritocratic in its education system and in its allocation of civil service jobs. While private industry may choose to adopt merit-based allocation (and, perhaps, companies that do this do better than those that don’t), it is probably not feasible to cut out all forms of nepotism or representationalism in those sectors of society.

If you wanted to criticise this argument you might say that the measures of success identified by its supporters are misleading or misguided. For example, a lot of people would criticise the use of GDP as a measure of social success (Ireland’s GDP per capita is very high, but that doesn’t reflect the wealth of the people in Ireland; it is largely because US companies report earnings in Ireland as a way to avoid paying tax). The problem with this criticism, from my perspective, is that the positive comparison for ‘more’ meritocratic societies tends to hold up no matter which measure of success you use, e.g. the Human Development Index. Also, while these measures of societal success might overlook or ignore some important things, it is hard to argue that a society that does much worse on them is a better place to live. Nowhere is ideal, but these measures do tell us something about relative well-being across societies.

The second argument for meritocracy is the ‘better incentives’ argument:


A2 - Better Incentives - Meritocratic societies provide rewards to people for developing and honing their talents. This leads to better social outcomes because talents produce social goods (e.g. new companies, new jobs, new insights, new creative culture)

 

This is obviously closely related to the first argument. The idea is that meritocratic societies send a signal to their members: if you work hard at honing certain talents and abilities (intelligence, knowledge, physical skill etc), you will be rewarded (better jobs, more money etc). This, in turn, produces better outcomes for societies. I think this argument makes sense, at least in its abstract form, but the devil is in the detail. Is it possible to hone talents in the way that meritocrats presume, or are we just rewarding those who got lucky in the genetic lottery (or through some other means)? What talents are we incentivising and do they really produce social goods? I’ll consider a potential criticism of this second argument in the next section when I look at the ‘wrong measures’ objection.

The third argument for meritocracy is the ‘respecting dignity’ argument:


A3 - Respecting Dignity - Meritocracies allow people to develop and hone their talents in the manner they desire, and reward them for doing so. This allows them to develop into full human beings. They are not treated as victims of circumstance or as representatives of abstract social classes.

 

Unlike the first two arguments, this one is not consequentialist in nature. It is based on the idea that meritocratic systems are intrinsically better, irrespective of their broader social outcomes, because they treat people as individuals and respect them in their full humanity. People are not prisoners of the past or of circumstance. They have the opportunity to develop their full powers of agency. You can think of this as a quasi-Kantian argument: meritocratic societies respect people as ends in themselves, not for some further reason (though, of course, this intrinsic argument would need to be balanced against the consequentialist arguments, which do not treat people this way). Again, this is an argument that Wooldridge emphasises in his defence of meritocracy:


By encouraging people to discover and develop their talents, [meritocracy] encourages them to discover and develop what makes them human. By rewarding people on the basis of those talents, it treats them with the respect they deserve, as self-governing individuals who are capable of dreaming their dreams and willing their fates while also enriching society as a whole.
(Wooldridge 2021, 373)

 

This is an interesting argument. I think there is a core of good sense to it. Certainly, nepotistic or representationalist societies are in tension with ideals of individualism and autonomy. They do not treat people as masters of their own fate. In such societies, people are not valued for who they are. People are, instead, valued because of where they came from or who they represent. That said, I think it would be a mistake to presume that meritocratic societies are more respectful of individuals. Meritocratic societies can be very unpleasant places to live, given the high anxiety and competitiveness often associated with them. I’ll discuss this in more detail in a moment.

The fourth argument in favour of meritocracy is the ‘best alternative’ argument.


A4 - Best Alternative - Meritocratic social allocation is better than any historic or proposed alternative system of social allocation. Nepotism is often corrupt and stagnant; representationalism would increase the power of the state and perpetuate identitarian thinking; neither system treats people with dignity or respects their individuality

 

This argument has been implicit in much of what has been said already, but it is worth making it explicit. The idea is that, whatever its flaws may be (and we will consider some below), meritocracy is better than the alternative systems. Think of this as the Churchillian defence of meritocracy (after Churchill’s alleged defence of democracy against all other systems of government). To me, this might be the most persuasive argument, at least when it comes to certain forms of social allocation (something like healthcare, for instance, should not be allocated on merit, but I don’t think any defender of meritocracy believes it should be, at least not openly and directly). I have thought about it a lot when it comes to allocating positions to students at university. The country in which I live — Ireland — has a competitive, points-based system for allocating students to university degree programmes. To get into the more competitive (and presumably attractive) universities and degree programmes (like medicine), students have to score highly on a national second-level exam (the Leaving Cert). The system is often criticised, for reasons similar to the ones I will discuss below, but it has never been obvious to me what a better alternative would be. Each proposed alternative tends to make the system more complex and opaque, and to insert more potential forms of bias into it. Perhaps a ‘mixed’ system of allocation is best — some positions on merit; some in line with representationalist/reverse discrimination concerns — but I’m not sure what the balance should be, or whether introducing an element of the latter just adds confusion and potential for longer-term abuse without serving students particularly well. I don’t have a fully worked-out view to offer here, but, as I say, this Churchillian defence gives me pause before rejecting meritocracy outright.


3. Arguments Against Meritocracy

What about arguments against meritocracy? I will consider three here. Each of these has been developed from conversations/debates with students in my classes about the topic. I’m sure it is possible to identify other criticisms, and I would be happy to hear about them from readers, but these are the ones that keep coming up in my classes.

The first objection is something I will call the ‘wrong measures’ objection:


CA1 - Wrong Measures: Classic meritocratic tests (e.g. IQ or other standardised aptitude tests) do not measure the full set of talents or merits relevant to all the forms of social allocation in which we are interested. They may also be inaccurate and generate false negatives/false positives.

 

In other words, the testing paradigms commonly deployed in aid of meritocracy are too narrow and only consider a limited range of talents. They do not ensure sufficient cognitive or talent-based diversity in social institutions. This is a bad thing because, if you follow the arguments of Scott Page and others, cognitive diversity is valuable, particularly if we want our institutions to be successful in solving problems. As a result, it could be that the tests reward people we would rather exclude and exclude people we would rather reward.

I think there is some value to this criticism because I am reasonably convinced that some degree of cognitive diversity is important. But this doesn’t mean that meritocracy itself is the problem; the problem lies in how we implement it. Changing the tests so that we take a broader view of the talents that count could patch up the system, at least to some extent. We would still be focused on merit, not slipping into some other form of social allocation, but we would have a more pluralistic conception of it. Defenders of IQ tests and other standardised tests may push back here and argue that their preferred tests are exceptionally well-evidenced and validated, and that there is some general factor of intelligence that correlates with a large number of positive social outcomes. I am not going to get embroiled in the IQ wars here, but from the limited materials I have read and listened to on the topic, I am inclined to agree that there is some there there.

That said, it is pretty clear that IQ is not the only thing that matters. There are high-IQ psychopaths, and I am pretty sure we don’t want psychopaths in certain decision-making roles. Also, even if such tests are accurate and well-validated, most competitive examination systems that I am familiar with are nothing like IQ or similar tests. They tend to be the more typical academic, educational tests (based on a standard set of problem questions, comprehension questions, essay questions and so on). On previous occasions, I have explained why the grading associated with at least some of these forms of testing can be quite arbitrary and unfair. Whatever the results mean, they are probably not always a good signal of underlying raw intelligence. And these kinds of tests, and the grades associated with them, are much more susceptible to gaming and bias. Which brings me to the next objection.

The second objection is what I will call the ‘biased measures’ objection:


CA2 - Biased Measures: Classic meritocratic tests are biased in favour of existing social elites either because (a) they can pay for coaching or training to excel on the test and/or (b) the tests are designed to suit their cognitive style (e.g. abstraction over concreteness).

 

This objection is importantly distinct from the preceding one. It is not that the measures are wrong or fail to track the kinds of talents we wish to reward; it is that, even if they are broad and accurate, they are the kinds of measures on which wealthy elites can do better, either because they can invest more money in their children’s education, paying for private tuition and test preparation, and/or because the tests suit their cognitive style.

I mention the latter possibility because I am reminded of Alexander Luria’s famous experiments suggesting that rural peasants in Russia did less well on certain kinds of test because they were less adept at abstract thinking, while members of more industrialised and modernised communities took to abstract thinking more readily (see Luria, Cognitive Development: Its Cultural and Social Foundations). I am not claiming that Luria’s specific studies are relevant to contemporary debates about meritocratic testing. I am mentioning them simply because they illustrate — quite vividly — a key point: cognitive styles and abilities can be subtly shaped and influenced by one’s developmental niche, and unless a testing paradigm is very carefully designed to eliminate this form of bias, it may tend to perpetuate the success of those drawn from a particular niche (e.g. the tests may presume certain ‘shared’ knowledge that is not really shared at all).

That said, I think the other point, about parental investment in education and the perpetuation of a new wealthy elite, is the more critical one. This is the issue that weighs most heavily on the minds of my students when I discuss meritocracy with them. It is also the objection that has cropped up in most recent criticisms of Singapore’s experience with meritocracy. Findings there suggest that those who initially did well in the meritocratic system can afford to pay more for their children’s schooling, and thereby run the risk of entrenching a new wealth- and merit-based elite. This experience is similar to that observed around the world. Simon Kuper’s book Chums — which is about how wealthy public school boys came to run modern Britain — comments on this too. Kuper notes that while at one point in time aristocrats and upper-middle-class children could succeed based purely on connections and historical wealth, by the 1980s (when he attended Oxford along with Boris Johnson, David Cameron, Michael Gove et al), even the wealthy had to do well in academic tests. And they did. Their elite schools invested heavily in prepping them for success on those tests.

This entrenchment of a new elite was, of course, Michael Young’s big concern about meritocracy in his 1958 book. The counter-response could be that, again, we just need to change the form of test and rely on tests that cannot be prepped or gamed. Some aptitude tests bill themselves as such. For instance, Irish medical schools use the HPAT (Health Professions Admission Test), in addition to the traditional end-of-school Leaving Certificate, to allocate places at university. The test is based on an Australian proprietary platform which is, allegedly, ungameable because you cannot study or prep for it. Nevertheless, you can find preparatory materials for it, and there are plenty of people willing to sell training and/or tuition to help you prepare for it. It seems unlikely that the test really is ungameable; similar experiences with the LSAT and MCAT in the US suggest as much. This is not surprising. All tests tend to rely on common styles of question, and those who are motivated can pay for at least some minimal advantage in taking tests with those common question formats. Those minimal advantages can accumulate over time.

It is not clear what the solution to this problem is or ought to be. On the one hand, a defender of meritocracy could tough it out and say that, as long as the tests provide the right measures (i.e. identify the relevant range of talents and abilities), who cares if they are gameable or biased towards elites? As long as we are rewarding merit directly, that is all that matters. And, who knows, perhaps some people from less privileged backgrounds may still be able to break through the system. Investment in education might confer some advantage, but not enough to completely swamp other factors (raw intelligence, hard work/ambition, luck). Contrariwise, a defender of meritocracy could advocate for constantly tweaking or changing the test format to eliminate the potential for unfair advantage linked to wealth. This strategy might face diminishing returns, however. Whatever tweaks you make would still need to be consistent with the aims of the test (to identify the relevant talents), and a constant arms race between testers and takers may run up many additional costs for little gain.

It could be, however, that this objection gets at one of the tragedies of human social life: new systems for allocating social goods based on merit can be disruptive when they are initially introduced, shaking up the old social order and threatening established norms, but after a generation or two things settle down into a familiar pattern. If you read Wooldridge’s book you cannot help but come away with the impression that meritocracy really was a disruptive social innovation. But perhaps its capacity for continued disruption has now been largely eroded, at least in countries where it is well-established.

The third, and final, objection is the ‘competitiveness and cruelty’ objection:


CA3 - Competitiveness: Meritocratic societies create perpetual competition for credentials. You have to run faster and faster to stay in the same place. This can lead to a very unpleasant and anxious existence, with harsh results for those who cannot or do not keep pace.

 

This is an objection that concerns me a lot these days. Like most academics of my age, I am often struck by the scale of mental health problems I see among my students. I’m sure there are many causal factors behind this, and perhaps the problem is exaggerated, or my perception of it is distorted (I only tend to hear from students in distress). Nevertheless, it has struck me as odd and out of line with what I experienced when I was a student (older colleagues also agree that the scale of the problem has worsened). What is of particular interest to me is how many students I encounter expressing anxiety about their exams and degree results. Many feel their lives will be over and their career aspirations ruined if they do not get a 2:1/B average in their degree. Many also feel pressure to pursue additional qualifications to make themselves stand out from the crowd. Doing an undergraduate degree is no longer enough. You have to do at least one postgraduate degree and consider other forms of microcredential or short-course qualifications. I am not sure that this constant credential-seeking is positive or conducive to human flourishing.

But perhaps this is the inevitable consequence of any meritocratic system. The whole purpose of the system is to encourage people to develop their talents. Very few gatekeepers are going to conduct an exhaustive inquiry into people’s actual merits. They are going to rely on credentials to tell them who is worth considering for the opportunity. But if everyone pursues the same credentials, and if social opportunities are scarce in some way, the competitive advantage of those credentials is reduced and people have to pursue other credentials to stand out from the crowd. An arms-race mentality kicks in. While some pressure and anxiety might help us to achieve great things, constant pressure and anxiety is debilitating. There is a danger that, over time, this is the kind of social culture entrenched by meritocracy. Everybody is racing to a standstill and nobody is particularly happy about it.

I would also repeat the obvious point, made above, that relying on meritocracy to resolve all forms of social allocation would be cruel and inhuman. For instance, allocating access to healthcare treatment on the basis of educational attainment would be cruel. I would also argue that any biasing or weighting of votes based on merit (as was once proposed by John Stuart Mill) would be cruel and undignified. We might be able to live with the benefits and costs of meritocracy in some areas, but not in all.





4. Conclusion

As I said, my goal was not to provide a definitive evaluation of meritocracy here. Rather, my goal was to clarify the concept and outline a framework for debating its benefits and costs. I hope I have provided that in the preceding sections. I am happy to hear from people as to how the framework could be modified or developed. Are there other arguments for and against that should be added to the mix?