Wednesday, December 9, 2015

Understanding Nihilism: What if nothing matters?





We spend so much of our time caring about things. Thomas Nagel described the phenomenon quite nicely:

[People] spend enormous quantities of energy, risk and calculation on the details [of their lives]. Think of how an ordinary individual sweats over his appearance, his health, his sex life, his emotional honesty, his social utility, his self-knowledge, the quality of his ties with family, colleagues, and friends, how well he does his job, whether he understands the world and what is going on in it. Leading a human life is a full-time occupation, to which everyone devotes decades of intense concern. 
(Nagel 1971, 719-720).

Why so much intense concern? What if nothing we do really matters? What, in other words, if nihilism is true?

That’s the question I want to look at in this post. I do so with the help of Guy Kahane’s recent paper ‘If nothing matters’, which is an excellent and insightful exploration of the topic. It doesn’t defend the nihilistic view itself, but it does clarify what it means to be a nihilist and what the implications of the nihilistic view might be. In the process, it takes issue with a strange trend in contemporary metaethics which assumes that if nihilism is true, then nothing about our day-to-day lives would change all that much. Kahane finds this implausible and tries to explain why.

In what follows, I discuss the key elements of Kahane’s analysis. I start by explaining what nihilism is, and distinguishing between its evaluative and practical versions. I then look at the oddly deflationary attitude of some metaethicists towards the truth of nihilism. And I close by considering Kahane’s critique of this deflationary view. As we shall see, Kahane argues that if we come to believe that nihilism is true, then we are unlikely to be able to go about our daily business much as we did before. On the contrary, we can expect much to change.


1. What is Nihilism Anyway?
Nihilism is the view that nothing matters. It comes in two distinct forms. The first is evaluative nihilism, which Kahane describes like this:

Evaluative Nihilism: Nothing is good or bad — or — All evaluative propositions are false.

Remember that time a few weeks back when you were walking to work, it was raining heavily, you stubbed your toe and ripped the sole off your shoe, then got splashed by a car and ended up being late and soaking wet? At the time, you said that this was ‘bad’. If evaluative nihilism is correct, you were wrong to say this. Nothing is really good or bad because evaluative propositions that ascribe those properties to particular events or states of affairs are always false. And this is just to use a trivial example. Evaluative nihilism also applies to more serious evaluative propositions like ‘murder is bad’ or ‘pleasure is good’. None of these claims is true.

Evaluative nihilism is the core of nihilism. But the typical belief is that it entails another form of nihilism:

Practical Nihilism: We have no reasons to do, want, or feel anything.

The idea here is that values are what should motivate action, desire and emotion. The badness of being wet and late for work should motivate me to avoid this outcome in the future. It should motivate me to leave earlier, and to wear more sensible raingear and footwear. But if nothing is really good or bad, all that motivational force is sapped away. This is a normative claim, not a psychological one (we’ll touch upon psychology later). It is about having reasons for doing, wanting and feeling. Practical nihilism strips us of all such reasons.

Practical and evaluative nihilism often go hand-in-hand, but they are separable. Kahane argues that evaluative nihilism only implies practical nihilism if you accept a consequentialist view of practical reason. If there are non-consequentialist constraints on action, then the goodness or badness of an outcome or state of affairs may not always be decisive in determining whether you have reasons for action. That said, it is worth treating the two forms of nihilism together since many who worry about the implications of nihilism worry about both.

But why do they worry? There are some misconceptions about the consequences of accepting nihilism. Many authors speak of nihilism in hushed and terrified tones. The idea is that if we really believed in nihilism we would be overwhelmed by the emptiness of our lives and driven to despair and suicide. In short, if nihilism were true then our lives would be worse. This is to misunderstand nihilism. To use the classic retort: if nothing matters, then it doesn’t matter that nothing matters. Or, in more evaluative terms:

No Cause for Despair: If nihilism is true, then its truth couldn’t make our lives worse (or better) for the simple reason that nihilism entails that you cannot say that a particular state of existence is worse or better.

Of course, how we react to the truth of nihilism is an empirical matter. It may be that some people do feel despair at the thought that nothing matters. But this is arguably because they implicitly cling to non-nihilistic views. They assume that things can really be better or worse for them; that they can have reasons for their despair. If nihilism is true, neither of these things is actually possible.


2. Deflationary and Conservative Metaethical Nihilism
Now that we have a firmer grasp of nihilism we can consider some broader issues. One is the role of nihilism in contemporary metaethical debates. Metaethics is the branch of moral philosophy that is concerned with the ontology and epistemology of moral claims. Moral claims are all about what is good and bad and right and wrong. Some metaethicists are cognitivists, who believe that moral claims express propositions capable of being true or false; non-cognitivists reject this view. The focus of Kahane’s analysis is on the error theorists, who are best understood as cognitivists of an unusual stripe: they agree that moral claims purport to state objective truths, but hold that all such claims are, in fact, false.

Error theorists hold that our entire moral discourse rests on a mistake. The mistake is that when we say something like ‘Torture is bad’ we think we are making a claim like ‘Water is H2O’, but we are wrong. The latter statement is capable of being objectively true or false; the former is not. In short, our moral discourse is in error: there are no objective values (or rights and wrongs). Famous error theorists include J.L. Mackie and Richard Joyce.

Described thus, error theorists seem to embrace nihilism. You might think this would cause them to cast off ordinary moral practice. But, strangely enough, they do not. Many of them adopt an oddly deflationary attitude toward their metaethical insights. Yes, it is true that there is no objective good or bad or right or wrong, but this shouldn’t change much about how we live our lives. Consider the following passage from Mackie:

The denial of objective values can carry with it an extreme emotional reaction, a feeling that nothing matters at all... Of course this does not follow; the lack of objective values is not a good reason for abandoning subjective concern.
(Mackie 1977, 34)

Mackie’s suggestion here is that even if his error theory is correct, it is possible for people to care about things and to continue to live their lives as they always have. This is reinforced elsewhere in his work when he talks about the practical utility of continuing to behave in a ‘moral’ way. As some have put it, we should be error theorists in the seminar room, but practical evaluative realists in the streets.

Kahane thinks this deflationary attitude is itself in error. It fails to take seriously the implications of evaluative and practical nihilism. As he sees it, in order for us to follow Mackie’s lead, it must be possible for us to do two things after coming to accept the truth of nihilism:


  • A. We must continue to have the subjective concerns we used to have before coming to believe in nihilism (i.e. believe that some things are worthwhile, not worthwhile etc).

  • B. We must be able to use these concerns to guide our actions (i.e. engage in instrumental reasoning).


While Kahane thinks it might be possible for us to conform to something like instrumental reasoning, he is much less convinced that we will continue to have the same subjective concerns. He has an argument for this which we will consider next.



3. Against the Deflationary View
Kahane’s argument is somewhat elaborate, so I’ll describe a simplified version. This version focuses on two claims about our normative psychology, i.e. about what should happen if we come to believe in the truth of nihilism. The empirical reality might be somewhat different, and Kahane concedes as much, but he thinks his argument works off a number of basic truisms about how our psychology functions.

The two main claims are as follows:

Belief Loss: If we come to believe in the truth of nihilism, we will lose many (or all) of our evaluative beliefs.
Covariance Thesis: Our subjective concerns covary with our evaluative beliefs in such a way that the loss of the latter is likely to result in the loss of the former.

These claims then get incorporated into an argument which runs something like this:


  • (1) If we are to continue to live as we did before, then we need to retain our subjective concerns.
  • (2) If we come to believe in nihilism, we will probably lose many (possibly all) of our evaluative beliefs.
  • (3) If we lose many (possibly all) of our evaluative beliefs, then we will probably lose our subjective concerns.
  • (4) Therefore, if we come to believe in nihilism, we will probably not continue to live as we did before.


This is a probabilistic argument. It is about what is likely to happen rather than what will definitely happen. How can its key premises be defended?

We’ll start with the second premise, which is the belief loss claim. The first obvious point in its favour is that evaluative nihilism straightforwardly entails the falsity of evaluative beliefs. If no evaluative proposition is true, then any beliefs we have in such evaluative propositions must be false. The question is whether this subsequently implies that we will lose our evaluative beliefs. The logical implication is straightforward, but human psychology does not always track logic. It is conceivable that people could hold contradictory beliefs in their heads at the same time. But this is an unstable state of affairs. Over time, we might expect them to favour one or the other. Kahane uses a thought experiment to illustrate his thinking:

Witch Belief: Suppose Bob believes that two people he knows (Anne and Claire) are witches. But suppose you manage to convince Bob that witches do not exist, i.e. that no one has been or ever will be a witch. Will he continue to believe that Anne and Claire are witches? It is difficult to see how, at least in the long term. His acceptance of the general proposition (“there are no witches”) is going to be in constant tension with the more specific propositions (“Anne is a witch” and “Claire is a witch”). Eventually, something would have to give.

This certainly seems plausible. And if we expect this to happen in the case of witch-belief, it seems natural to expect it to happen in the case of nihilism. After all, the two scenarios are structurally similar. If I come to believe in the general proposition “Nothing matters”, it’s hard to see how I could continue to believe in specific propositions like “My job matters”. It is, of course, possible that I could waver in my commitment to nihilism, believing in it at times and disbelieving in it at others. This might cause me to oscillate back and forth between believing that my job matters and believing that it doesn’t. But if I am unwavering in my commitment, my other evaluative beliefs should slowly ebb away.

This brings us to the third premise, which holds that this loss of evaluative belief should impact upon my subjective concerns. Kahane doesn’t give an elaborate argument for this view. He seems to think that the covariance of evaluative belief and subjective concern is a basic truism of our psychology. To reject it, one would have to embrace an epiphenomenalist view of evaluative belief, according to which evaluative belief has no causal impact on our ‘pattern of concerns’. There may be some materialist approaches to the philosophy of mind that accept this notion, but these approaches have their costs.

If the second and third premises are correct, then the conclusion follows. The deflationary view of error theorists like Mackie looks to be implausible. Believing in nihilism is likely to have a knock-on effect on our lives. We probably couldn’t be nihilists in the seminar room and evaluative realists in the streets. We could only be one of these things.


4. Conclusion
I don’t have too much to say about all this. Kahane’s argument seems right to me, at least when it is interpreted within its own self-imposed constraints. Kahane deals with normative psychology, not empirical psychology. It would be interesting to have more empirical evidence about the effects of nihilistic belief on someone’s behaviour, but I suspect it would be difficult to conduct any tests on this. I also think that further engagement with the epiphenomenalist view would be interesting.

Monday, December 7, 2015

The Ethical Significance of Symbolic Meanings


Is burning a body on a funeral pyre a mark of respect for the dead?


Suppose you are married and have two children with your spouse. Ordinarily, you share various household and childcare duties equally but recently you have become fed up with this arrangement. You feel like your time could be better spent on other activities. Fortunately, you have a solution. You can pay your spouse extra money to perform your duties. Should you do it?

There are many reasons why this would probably be a bad idea. But one of them is that to make such an offer would communicate the wrong message. You are locked in an intimate relationship of mutual recognition and exchange with your spouse. To suddenly start offering money might suggest a degree of indifference to their well-being. It seems to say ‘my time is more important and valuable than yours’. Surely that is not the signal you wish to send to the love of your life?

Symbolic (or semiotic) arguments of this sort are popular among anti-commodification theorists. Although there are many reasons to object to commodification, one of the most popular has to do with the negative meaning that attaches to commodified exchange. But how persuasive are such arguments? In their recent paper ‘Markets without Symbolic Limits’, Brennan and Jaworski present a detailed and systematic rebuttal of symbolic arguments. In this post, I want to look at what they have to say.


1. General Argumentative Strategy
I start by considering Brennan and Jaworski’s general argumentative strategy. The paper is part of a larger book project. The book is entitled Markets without Limits and it defends the commodification of pretty much everything. To be more precise, it defends the view that if you can do something for free, you can also do it for money. Monetising or commoditising an activity that was previously permissible does not magically render it impermissible. For example, it is wrong to exchange child pornography for free; and exchanging it for money doesn’t change things in this respect. In defending this view, Brennan and Jaworski respond to many of the most prominent anti-commodification arguments in the literature.

That’s the larger project. When it comes to this specific paper, their general argument remains the same, the details are simply adjusted to address the specific concerns raised about the meaning of commodified exchange. To do this, they first isolate distinct versions of the symbolic objection. They identify three in the original paper. Here, I focus on the two that are relevant to my particular concerns:

Wrong Signal Objection: “holds that buying and selling certain objects is wrong because it expresses wrongful motives, wrongful attitudes, or fails to communicate proper respect. This expression occurs independently of the attitudes or motives the buyer or seller may have.” (Brennan and Jaworski 2015, 1061).

Wrong Currency Objection: “begins with the premise that offering money for services tends to communicate estrangement. Since it can be wrong in some cases to communicate estrangement, it can be wrong to buy and sell services within certain relationships—such as between romantic partners, between fellow citizens, among friends.” (Brennan and Jaworski 2015, 1061)

The objections are similar but subtly different. The first objection is about a general mismatch between the social meaning of the commodified exchange and one’s actual intentions; the second is about a particular meaning that seems to attach to commodified exchange, in this instance distance or estrangement. Sometimes there is a problem if there is a mismatch; sometimes it is wrong to communicate distance or estrangement. Indeed, this might explain the reaction to the opening example of offering your spouse money to perform household chores. Doing so seems to communicate distance and estrangement, which is out of keeping with the character of the relationship.

Brennan and Jaworski concede that commodified exchange can sometimes communicate an unintended meaning, and that in some settings it may communicate estrangement and distance. They reject, however, the notion that this provides a general reason not to favour the commodification of certain exchanges. Their argument is somewhat convoluted, but it essentially boils down to the following three propositions (I have not knitted these together into a formal argument):

(1) The meaning that attaches to a particular social practice or symbol is highly contingent. In particular, the meaning that attaches to commodified exchange varies quite considerably from culture to culture and time to time.

(2) If the meaning of a social practice or symbol is highly contingent, then it cannot be treated as a given in our ethical analysis, i.e. the symbolic practice itself must be subject to ethical scrutiny and, if warranted, reformed in light of that scrutiny.

(3) In at least some instances, the negative social meaning that attaches to commodified exchange is trumped by the positive consequences of commodification.

The first two of these propositions are critical to the argument Brennan and Jaworski are trying to make; the last proposition merely ties the argument to some real-world practical consequences, which certainly bolsters their view but is not, strictly speaking, essential. How can the three propositions be defended? Let’s start by considering the contingency of symbolic meaning.


2. The Contingency of Symbolic Meaning
In some ways, the contingency of the meaning that attaches to cultural symbols is obvious and irrefutable. It seems pretty obvious, for instance, that the meaning that attaches to the three-letter symbol ‘cat’ in English is highly arbitrary. We could have used ‘kat’ or ‘cait’ or ‘chat’ to mean the same thing. Other languages prove this point. Why should it be any different when it comes to cultural symbols, including money? The temptation is to assume that the intentions and motivations behind monetary exchange are more universal, and hence the meaning that attaches to it is more fixed.

But this assumption does not appear to be correct. Brennan and Jaworski cite several examples of cultural practices, each of which carries a meaning in its home culture quite different from the one we might expect. Most involve money; some don’t; all point towards the contingency of symbolic meaning. They include:

King Darius and the Dead Bodies: According to Herodotus, King Darius of Persia once asked the Greeks if they would eat the bodies of their dead relatives as a mark of respect. The Greeks abhorred the notion, arguing that the way to show respect was to burn the bodies on a funeral pyre. Darius then asked the Callatians if they would burn the bodies of their dead relatives as a mark of respect. The Callatians abhorred the notion, arguing that this was to treat the bodies as trash. The proper way to show respect was to eat them. Both the Greeks and the Callatians agreed on the need to show respect, but they had very different views about the symbolic act that best communicated this respect.

Monetary Gifts: Michael Sandel thinks it is improper to give someone a gift of money. To him, it communicates the wrong kind of attachment or thoughtfulness. But some cultures think that monetary gifts are perfectly respectable, maybe even better than non-monetary ones. Examples include the Merina tribe on the island of Madagascar (according to the work of Carruthers and Ariovich) and the US (according to the work of Viviana Zelizer). This suggests that Sandel’s attitude toward monetary gifts is largely an accident of his cultural background.

Paying for Sex: Most Westerners agree that paying someone for sex is symbolically problematic. It says something about the person being paid, namely: that they are a sex worker. And since sex work tends to have negative associations in our culture, to communicate such a meaning is inappropriate if the person you are having sex with is not, in fact, a sex worker. But this is not true in all cultures. Again, among the Merina tribe of Madagascar a man is expected to pay his wife after sex as an expression of respect. In that culture, the monetary payment is not what distinguishes an intimate spouse from a sex worker.

Paid Mourners: Suppose your father died. When your friends show up at the funeral they are surprised to see so many grief-stricken mourners following the coffin and attending the grave. You tell them that you actually paid for all those people to be there. Your friends are horrified: that is no way to honour your dead father. This seems like a natural reaction to most Westerners, but it is not natural everywhere. In some cultures, paid mourners are a true mark of respect. Such cultures include (according to Brennan and Jaworski) those of Romania, China and England in Victorian times.

Commodified Relationship: As noted in the intro, the idea of commodifying the chores and duties that must be performed in a relationship seems like it sends the wrong signals, but not every couple agrees. Daniel Reeves and Bethany Soule (creators of the Beeminder app) have apparently commodified much of their relationship. This includes payments for putting their kids to bed. They claim that this commodification has made them happier and less resentful of one another. They have rejected the symbolic meaning of the surrounding culture to positive effect.

I could go on. Brennan and Jaworski cite some other examples in their article but hopefully this suffices to make the point: the meaning that attaches to a cultural practice (like commodification) is indeed contingent. What appears twisted and corrupt to us may be perfectly normal and well-adjusted to others. Furthermore, unless one is willing to challenge the anthropological evidence and the personal testimony of the people involved in these symbolic practices, it is difficult to reject this claim.


3. The Ethical Significance of Contingent Meaning
But what is the upshot of this symbolic contingency? It is simply this: the meaning that attaches to a particular symbol in a particular culture cannot be taken as an ethical given. It must itself be subject to ethical scrutiny. And when it is subject to ethical scrutiny, it may turn out that it should be reformed. This is where the second of the three propositions outlined above comes in. Brennan and Jaworski provide a useful case study in support of this proposition. It builds upon the ‘eating the dead’ example used earlier on. I’ll quote from them in full:

[C]onsider that some cultures developed the idea that the best way to respect the dead was to eat their bodies. In those cultures, it really was a socially constructed fact, regardless of one’s intentions, that failing to eat the dead expressed disrespect, while eating rotting flesh expressed respect. But now consider that the Fore tribe of Papua New Guinea suffered from prion infections as a result of eating the rotten brains of their dead relatives prior to that practice being banned in the 1950s. The interpretative practice of equating the eating of rotting flesh with showing respect is a destructive, bad practice. The people in that culture have strong moral grounds to change what expresses respect. 
(Brennan and Jaworski 2015, 1067)

In this instance, the personal risk that attached to following the cultural practice was so severe that the cultural practice needed to change.

Of course, it may be difficult to make such changes. Symbolic meanings rarely arise overnight (though they can). Centuries of tradition and ritual may undergird any particular symbolic practice. It may be a struggle to change things for the better. But if the stakes are high enough, this is the appropriate course of action.

This gives Brennan and Jaworski all they really need. They have shown that symbolic meaning (including the symbolic meaning of money) is culturally contingent and that contingent symbolic meaning can be subjected to ethical scrutiny. This means that symbolic objections to commodification are not as robust or immune from empirical challenge as their proponents often assume. But to further bolster their case it would be nice if they could provide an example of a negative cultural meaning that attaches to commodification that ought to be changed. They duly oblige by considering the controversial example of markets for kidneys.

I discussed this example at length a few weeks back. I’ll just give the basic gist of it here. Many countries suffer from a shortage of kidney donors: more people are on waiting lists than there are available organs. As a result, many people suffer the terrible consequences of severe kidney disease (up to and including death). A suggested solution to this problem is to create a market for kidney donations. In other words, to pay people for donating kidneys. One country that has tried this is Iran, which, apparently, does not suffer from the same shortages as countries like the US. Despite this, many people object to the commodification of kidney donations. They have lots of reasons for doing so, some relating to the possible consequential harms of such markets, some relating to the fairness and justice of market-based allocations. In theory, these objections could be met through appropriate regulation and management of the market. Nevertheless, some people continue to object, largely for symbolic reasons, believing that paying people for organ donation sends the wrong signal.

Brennan and Jaworski’s argument reveals the silliness of this persistent objection. If the symbolic meaning of commodified organ donation is problematic, but the consequential benefits are great, then it is the meaning that should be changed to accommodate the commodification. In other words, the consequential benefits should guide our reasoning, not the symbolic meaning.


4. Conclusion
There is more that needs to be said. In the full article, Brennan and Jaworski consider various objections to their position, including those that appeal to ‘incorrigible’ social meaning and civic duty. I don’t have the time to consider those objections right now. I would simply close with two observations. First, I think the points they make offer a nice corrective to proponents of symbolic arguments (myself included). It has long struck me that symbolic practices are highly contingent and yet, despite this, I often accord them great practical and ethical significance. I don’t think I should necessarily refrain from doing this — there are prudential and ethical reasons to favour the status quo — but one shouldn’t presume that symbolic meaning has great weight in ethical reasoning. It can be trumped by other considerations.

Second, I think the argument has significance for some of the work I have done on virtual and robotic acts. In one of my papers, I objected to the use of sex robots to replicate acts of rape or child sexual abuse. I did so partly on the grounds of the social meaning that would attach to such acts (even if they did not cause harm to others). My argument was that someone who took pleasure from such symbolic acts revealed a troubling insensitivity to negative social meaning. But this negative social meaning must itself be subject to ethical scrutiny. There could be contexts in which we should abandon any queasiness we might have towards this social meaning. An example would be if such sex robots could be used to effectively treat those who might otherwise engage in real-world acts of rape and child sexual abuse. To be fair, I said as much when I wrote the original paper, I just didn’t appreciate its deeper philosophical grounding. Brennan and Jaworski’s argument allows me to appreciate this.

Tuesday, November 24, 2015

Will technological unemployment lead to human disenhancement?




I have written a lot about the prospects of widespread technological unemployment; I have also written a lot about the ethics of human enhancement. Are the two topics connected? Yes. At least, that’s what Michele Loi tries to argue in his recent paper “Technological Unemployment and Human Disenhancement”. In this post, I want to analyse his argument and offer some mild criticisms. I do so in a constructive spirit since I share similar views.

As you might guess from the title, Loi’s claim is that the displacement of human workers by machines could lead to widespread human disenhancement. This is due to the differential impact of technological unemployment on the mass of human workers: some will find that technology has an enhancing effect, but most will not. This is supported largely, though not entirely, by the work of the economist David Autor (discussed previously on this blog). Autor is famous for describing the polarisation effect that technology is having on the workforce. In essence, Loi’s argument is that this polarisation effect is likely to result in disenhancement.

This might sound confusing right now but it should all make sense by the end of the post. I’ll break the discussion down into four main parts. I’ll start by looking more closely at the concept of ‘disenhancement’; then I’ll outline Loi’s main argument; then I’ll look at his defence of that argument; and then I will close by presenting some limited criticisms of that argument.


1. The Concept of Disenhancement
One of Loi’s goals is to demonstrate that there is an interesting connection between the economic debate about technological unemployment, and the bioethical debate about human enhancement. To prove this he needs to define his terms. Since ‘disenhancement’ is simply the inverse of ‘enhancement’, it makes sense to start with the latter. But anyone who has been paying attention to the enhancement debate for the past decade or so will know that clear definitions are an elusive quarry. There are so many sub-categories, sub-definitions and terminological kerfuffles, that it is hard to keep up. Loi says we need to understand two things to keep up with his argument.

The first is the distinction between traditional ‘functional’ definitions of enhancement and more recent ‘welfarist’ definitions. The distinction can be characterised in the following manner (this does not follow exactly what is presented in Loi’s article):

Functional Enhancement: Person X is enhanced if their capacities and abilities are improved (or added to) relative to some species-level or population-level functional norm.

Welfarist Enhancement: Person X is enhanced if the likelihood of their life going well is improved, relative to some set of circumstances.

Functional enhancement is more in keeping with what people generally mean when they think about enhancement. It assumes that there is some normal level of human ability and that the enhancing effect of a technology or intervention must be judged relative to that norm. Welfarist enhancement is a slightly more recent development, associated largely with the work of Savulescu and Kahane. It focuses on the individual’s welfare, not the general norm, and holds that the enhancing effect of a technology or intervention must be judged relative to the individual’s life and circumstances. The functional definition is inherently moralised because of how it implicates a norm; the welfarist definition is more about prudential well-being and hence less moralised. Loi wants his argument to cover both types of enhancement.

The other thing Loi wants us to understand is the distinction between broad and narrow forms of enhancement. These concern the nature of the enhancing intervention and can be characterised as follows:

Narrow Enhancements: Biomedical technologies that directly target biological capacities and have an enhancing effect.
Broad Enhancements: Any intervention — including non-biomedical technologies, education, political governance, etc. — with an enhancing effect.

Although much of the bioethical debate is concerned with narrow forms of enhancement, Loi thinks it is difficult to maintain a principled distinction between the two. Indeed, the fuzziness of the distinction is something that is routinely exploited by proponents of enhancement. They often try to argue for biomedical enhancements on the grounds that they are not substantially different from broader enhancements to which no one has an objection. John Harris is probably the quintessential exponent of such arguments.

If you like, you could categorise enhancement arguments using these four concepts — as in the diagram below. This might help you better understand the argument you are dealing with.




In fact, it might even help us to understand Loi’s argument. Loi’s argument uses a broad definition of enhancement, and focuses on both the functionalist and welfarist types. This means he is concerned with the right half of the matrix given above. Of course, technically, Loi is concerned with ‘disenhancement’, not ‘enhancement’, but that only means we have to invert the definitions in our minds. In essence, he is trying to argue that technological displacement in the workplace will have a disenhancing effect in both the welfarist and functionalist senses.
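If it helps to fix these ideas, the four-way categorisation can be sketched as a tiny data structure. This is purely my own illustration — the names and layout are assumptions, not anything taken from Loi’s article:

```python
# Illustrative only: a minimal sketch of the two-axis categorisation of
# enhancement arguments (functional vs welfarist; narrow vs broad).

from dataclasses import dataclass


@dataclass(frozen=True)
class EnhancementClaim:
    definition: str  # "functional" or "welfarist"
    scope: str       # "narrow" (biomedical only) or "broad" (any intervention)

    def quadrant(self) -> str:
        """Return the cell of the 2x2 matrix this claim occupies."""
        return f"{self.definition}-{self.scope}"


# Loi's argument uses a broad definition of (dis)enhancement and covers both
# the functionalist and welfarist senses, i.e. the two 'broad' quadrants.
loi_claims = [EnhancementClaim("functional", "broad"),
              EnhancementClaim("welfarist", "broad")]
print([c.quadrant() for c in loi_claims])  # ['functional-broad', 'welfarist-broad']
```

Nothing hangs on the code itself; it just makes vivid that any enhancement (or disenhancement) argument can be located by answering two independent questions.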


2. Loi’s Main Argument: Technological Unemployment could be Disenhancing
Loi doesn’t set out his main argument in an explicit, logical form in his article. As best I can tell, it works like this:

  • (1) A technology is disenhancing if it reduces or subtracts from ‘normal’ human capacities (functionalist sense), or if it reduces the chances of someone’s life going well, relative to some set of circumstances (welfarist sense). 

  • (2) Technological displacement at work gives rise to a polarisation effect: it pushes some people into highly-skilled, abstract forms of work, but pushes most people into lower-skilled, less rewarding forms of work. 

  • (3) If people are in lower skilled or lower paid forms of work, then they will witness a reduction in their normal capacities and the chances of their lives going well will be reduced. 

  • (4) Therefore, technological displacement at work leads to disenhancement.

This is messy, but I think it does justice to what Loi is trying to say. The first premise appeals to the definition of enhancement that Loi favours in his article. It should not cause any great controversy. The second premise is the key empirical support for the argument and, as I mentioned in the introduction, is based largely on the work of the economist David Autor (though Loi mentions others, including Autor’s collaborators). The third premise is where Loi’s real contribution to the debate comes: it links technological unemployment to disenhancement. Loi doesn’t set it out in these explicit terms — and that may be one of his argument’s main flaws — but something like it does seem to be implied by what he says. The conclusion then follows (for those who care, the argument’s structure is roughly: A = B; C → D; D = B; therefore C → A).
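For those who want the parenthetical scheme spelled out: read A as ‘is disenhanced’, B as ‘suffers a reduction in capacities or welfare prospects’, C as ‘is subject to technological displacement at work’ and D as ‘is pushed into lower-skilled, lower-paid work’, with the ‘=’ signs read as biconditionals (this rendering is mine, not Loi’s). The inference is then straightforwardly valid:

```latex
\begin{align*}
&\text{(1)}\quad A \leftrightarrow B \\
&\text{(2)}\quad C \rightarrow D \\
&\text{(3)}\quad D \rightarrow B \\
&\therefore\;\; C \rightarrow A
\end{align*}
```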

I’ll go through Loi’s defence of premises (2) and (3) next, but before I do so it’s worth noting something about Loi’s overarching goal. As he himself makes clear, he is not trying to offer concrete predictions about the future. He acknowledges that there is a large degree of empirical uncertainty in his argument. Rather, his goal is simply to identify a plausible scenario and tease out its ethical and social implications. This means we are better armed when the technological changes come. This strikes a chord with me, since I approach my own work in a similar spirit.


3. The Causes of Disenhancement
The second and third premises of Loi’s argument are all about the effect that technology has on the workplace. The traditional view — heavily influenced by the first wave of automation during the industrial age — is that machines replace human workers in the performance of arduous, routine physical tasks. Hence the takeover by machines of certain types of agricultural and manufacturing work. This, arguably, has had a long-term enhancing effect: it gave people the opportunity to work in more challenging, cognitive forms of employment.

This is no longer true. With the rise of computerisation and machine learning, technology is taking over more and more cognitive work. Indeed, there is an interesting paradox here, first observed by Hans Moravec: the tasks that are easiest for machines to take over are often ones that humans find cognitively demanding, while tasks humans find effortless (like sensorimotor coordination) remain hard to automate. What machines excel at is routine, rule-based work. This includes certain constrained physical tasks, but far more rule-based cognitive tasks (e.g. computing itself was a task once performed by human workers). Such tasks represented, for much of the 20th century, the core of the middle-skill, middle-income jobs that made a prosperous middle class possible. These jobs are now slowly eroding in the wake of automation.

This is giving rise to a polarisation effect. It turns out that there are two types of work that are hard to automate. David Autor refers to these as “manual” and “abstract” work, respectively. Manual work is anything that requires fine sensorimotor skills and includes things like fruit-picking, food preparation, and cleaning. Though there are some initial forays into the automation of these tasks, it requires far more computing power to replace humans in these jobs than it does in the middle-skill cognitive jobs. Abstract work is anything that requires high levels of analytical ability and creative thinking. It includes things like entrepreneurialism, certain forms of managerial work, and high-level professional services. These are also difficult to automate, but benefit a lot from the automation of the middle-skill cognitive jobs (e.g. because abstract workers can now use computer technology to cheaply perform their own data analysis and processing).

With the automation of the middle-skill jobs, the workforce is being polarised into manual and abstract forms of work. The problem is that these forms of work are very different in character. Manual work is generally viewed as being low-skill and is often precarious and poorly paid. Manual workers tend to have little on-the-job autonomy and may find their work boring and unfulfilling. Abstract work is usually the opposite. The workforce is well-educated, well-paid and highly autonomous. Many abstract workers are deeply committed to and fulfilled by what they do.

The problem is that there are relatively few abstract jobs as compared to manual jobs. Indeed, the paucity of such jobs is partly driven by the effect of technology: it takes longer to educate a well-paid abstract worker, and they are able to gain larger market shares, with less human input, thanks to the automation of lower-skill jobs. Thus, the effect of automation is to drive relatively more workers into the more precarious, lower-paid, less-fulfilling, and less-rewarding types of work.

This is how Loi supports premises two and three. Premise two is supported by appeal to evidence relating to the polarisation effect and predictions about its future. Premise three is supported by an appeal to the likely effects of more precarious, lower-paid, less-fulfilling and less-rewarding types of work: Loi believes these effects are likely to be ‘disenhancing’.


4. Thoughts and Criticisms
What are the implications of this argument? Loi discusses several. Two stood out for me. First, there was his focus on the basic income guarantee as one way to ameliorate the negative effects of technological unemployment. This is not surprising, since many make the same argument, but Loi links it directly to concerns about disenhancement rather than to concerns about social inequality more generally. Second, there was his discussion of biomedical enhancement as a way to correct for the disenhancing effects of technological unemployment. In other words, enhancement via the biomedical route may be a necessary countermeasure to disenhancement via the automation route. This is something I have argued in relation to the political effects of automating technology, and something I also discuss in an upcoming paper.

Is the overarching argument any good? In general, I agree with Loi that one can usefully fuse together the debates about technological unemployment and human enhancement. Indeed, I think it is worth doing so. That said, there were two omissions from the article that bothered me. The first was that I don’t think Loi did enough to emphasise the merits of the anti-work position. Proponents of this view argue that non-work can be better for an individual than work. Hence, there are ways in which technologically-induced unemployment could be a good thing, and this could counteract some of the disenhancing effect. I have talked about this antiwork view ad nauseam on the blog before so I won’t repeat myself now. Suffice to say, the antiwork view only really makes sense if the productive gains from technology are shared reasonably widely. If people are suffering from deprivation, and are still forced to find work, the view loses much of its appeal. Loi is aware of this, which is why he discusses the importance of the basic income guarantee.

The other issue I had with the article had to do with its success in demonstrating a disenhancing effect for technological unemployment in both the functionalist and welfarist senses. At the outset, Loi claims he is arguing for both, but towards the end he seems to limit himself to just the welfarist sense:

If ICT innovation leads to intrinsically worst [sic] jobs and low wages for most workers, technology will disenhance more workers (in the welfarist sense) than it enhances. This seems, unfortunately, to be the present trend.
(Loi 2015, 208)

As best I can tell, he makes no attempt to argue for disenhancement in the functional sense. He might be able to do this by, say, arguing that manual workers suffer from a reduction in capacity and ability. But it's not obvious to me that this is true. They may not have their mental abilities expanded, but their physical abilities could be. Furthermore, there is an argument out there to the effect that the kinds of automating technology used by abstract workers can have a (functionally) disenhancing effect. Nicholas Carr makes much of this in his recent book, claiming that assistive technologies often lead to the degeneration of mental abilities. I’m not endorsing that argument here (I discussed it at length on a previous occasion) but it adds an interesting angle to Loi’s argument. It suggests that the disenhancing effect might be broader than Loi allows.

Anyway, those are just some quick — no doubt poorly thought-out — reflections on Loi’s article. I’m thinking a lot about the relationship between the enhancement debate and other techno-ethical debates at the moment, so I will continue to explore these issues.

Thursday, November 19, 2015

Theory and Application of the Extended Mind (Series Index)




In the past year, I have written several posts about Chalmers and Clark's famous extended mind thesis. This thesis takes seriously the functionalist explanation of mental events, and holds that the mind is not confined to the skull. Instead, it can extend into artefacts and objects in the world around it.

I have been interested in both the theoretical underpinnings of this thesis and its potential applications, particularly to the human enhancement debate. Anyway, here are links to everything I have done on the concept -- two of them are podcasts in which I discuss it at some length.


  • Neuroenhancement and the Extended Mind Thesis: This post introduces the thesis and looks at Neil Levy's so-called Ethical Parity Principle which, to put it crudely, holds that what goes on inside the skull should be ethically on a par with what goes on outside the skull. This could have interesting consequences for the enhancement debate.

  • Two Interpretations of the Extended Mind Thesis: Some people have trouble understanding what the extended mind thesis is all about. This post tries to help by considering two interpretations put forward by the philosopher Katalin Farkas.

  • Extended Mind and the Coupling-Constitution Fallacy: The biggest criticism of the extended mind thesis comes from Kenneth Aizawa and Fred Adams. They argue that Chalmers and Clark confuse a causal relationship between the brain and external objects with a constitutive relationship. I try to explain this criticism and consider a possible reply.




Wednesday, November 18, 2015

The Philosophy of Games and the Postwork Utopia




I want to start with a thought experiment: Suppose the most extreme predictions regarding technological unemployment come to pass. The new wave of automating technologies takes over most forms of human employment. The result is that there is no economically productive domain for human workers to escape into. Suppose, at the same time, that we all benefit from this state of affairs. In other words, the productive gains of the technology do not flow solely to a handful of super-wealthy capitalists; they are fairly distributed to all (perhaps through a guaranteed income scheme). Call this the ‘postwork’ world. What would life be like in such a world?

For some, this is the ideal world. It is a world in which we no longer have to work in order to secure our wants and needs. And the absence of compelled work sounds utopian. Bob Black, in his famous polemic ‘The Abolition of Work’, makes the case that:

No one should ever work. Work is the source of nearly all the misery in the world. Almost any evil you'd care to name comes from working or from living in a world designed for work. In order to stop suffering, we have to stop working.

But is the postwork world really all that desirable? To me, it all depends on what it takes to live a meaningful and flourishing life. Philosophers think that in order to live a flourishing life you need to satisfy certain basic conditions of value. Can those conditions be satisfied in the absence of work? Black seems to think they can. He paints a rosy picture of the ‘ludic’ (i.e. game-playing) life we can live in the absence of work:

[The postwork world means] creating a new way of life based on play; in other words, a *ludic* conviviality, commensality, and maybe even art. There is more to play than child's play, as worthy as that is. I call for a collective adventure in generalized joy and freely interdependent exuberance.

That sounds rather nice. But deeper analysis of this ludic life is needed. Only then will we know whether it provides for the kind of flourishing we seek. I want to provide that deeper analysis in this post. I do so by drawing from the work of Bernard Suits and Thomas Hurka, and in particular from the argument in Hurka’s paper ‘Games and the Good’. I want to suggest that a purely ludic life (one consisting of ‘games’) does allow for a certain type of flourishing. It is distinct from the kind of flourishing found in traditional understandings of the good life, but it may provide a plausible blueprint for a postwork utopia.

To make this case, I’m going to do three things. First, I will start with a pessimistic view, one suggesting that a postwork world would rob us of some value. Second, I will outline Hurka’s analysis of games and the good. And third, I will argue that this analysis provides one way of defending Black’s ideal of the ludic life.

[Note: The main idea in this post came from a conversation I recently had with Jon Perry and Ted Kupper on the Review the Future podcast. I would like to thank both of them for making me think about this issue.]


1. A Pessimistic View of the Postwork World
Antiwork theorists think that work is bad and nonwork is better. I have analysed this argumentative posture on previous occasions. One thing I noted on those occasions is that antiwork theorists are good at explaining why work is bad; but not-so-good at explaining why non-work is better. This is because their vision of the good life is often undertheorised. In other words, they lack clarity about what it takes to live a flourishing and meaningful life, and how that life might be enhanced in a postwork world. Theorisation is needed for a full defence of the antiwork position.

Here is one plausible theory of meaning, taken from the work of Thaddeus Metz. In one of his papers, Metz argues that there are three main sources of value in life: the Good, the True and the Beautiful. Our lives flourish and accumulate meaning when we contour our intellects to the pursuit of these three things. In other words, our lives flourish when we act to bring about the moral good, to pursue and attain a true conception of reality, and to produce (and admire) things of great aesthetic beauty. The more we do of each, the better our lives are.

Under this account of meaning, your activities (and your intellect) must bring about valuable changes in the external reality. For example, I could dedicate my life to ending cancer. If I succeed, and my actions realise (or at least form some significant part of) the cure for cancer, the world would be a slightly better place. This would make my life meaningful (perhaps very meaningful). Why so? Because my actions would have helped to attain the Good (maybe also the True).

Here is one concern you could have about this type of meaning in the postwork future. The centrepiece of this theory is the link (typically causal and/or mental) between what I do and what happens in the world around me. I cause or help to bring about the Good, the True and the Beautiful: that’s what makes my life meaningful. But it is the very essence of automating technologies to sever the link between what I do and what happens in the world around me. Automating technologies, after all, obviate the need for humans in certain endeavours. The concern is that this power to sever the link might take hold in many domains, thereby distancing us from potential sources of meaning.

The concern needs to be fleshed out. The danger with the futurist antiwork position is that it assumes automating technologies will take over the boring, degrading and dehumanising jobs, and leave us free to pursue things that provide opportunities for genuine meaning and flourishing. But there doesn’t seem to be any good reason to think that advances in automating technologies will only affect ‘bad’ or meaningless activities. They could take over other, more meaningful tasks too, thereby severing the connection between what we do and the things that are supposed to provide meaning. Indeed, if we assume that science is the main way in which we pursue the True in the modern world, then there are already some obvious ways in which technology is taking over its pursuit. Science is increasingly a big data enterprise, in which machine learning algorithms are leveraged to make sense of large datasets and to make new and interesting discoveries. These algorithms are in their infancy now, but we can already see ways in which they are attenuating the link between individual scientists and new discoveries. Why? Because they are becoming increasingly complex, and work in ways that are beyond the understanding and control of individual scientists.

So the concern is that automating technologies narrow the domain for genuinely meaningful activities. Some such activities will no doubt remain accessible to humans (e.g. there are serious questions as to whether machines could ever really take over the pursuit of the Beautiful), but the totality will diminish in the wake of automation. Humans could still be very well off in this world: the machines could solve most moral problems (e.g. curing disease, distributing goods and services, deciding on and implementing important social policies) and make new and interesting discoveries in which we can delight, but we would be the passive recipients of these benefits, not active contributors to them. There is something less-than-idyllic about such a world.


2. Games as a Forum for Flourishing
One thing that would be left open to us in this postwork future, however, is game-playing. While the machines are busy solving our moral crises and making great discoveries, we can participate in more and more elaborate and interesting games. These games would be of no instrumental significance — they wouldn’t solve moral problems or be sources of income or status, for example — but they might be sources of value.

To make this argument, we first need a better handle on what a game is. To do this, we can turn to the conceptual analysis of games provided in Bernard Suits’s famous book The Grasshopper. Suits argued, contra Wittgenstein, that all games (properly so-called) share three key features:


Prelusory Goals: These are outcomes or changes in the world that are intelligible apart from the game itself. For example, in a game like golf the prelusory goal would be something like: putting a small, dimpled ball into a hole, marked by a flag. In a game like tic-tac-toe (or “noughts and crosses”) it would be something like: being the first to mark three Xs or Os in a row, and/or preventing someone else from doing the same. The prelusory goals are the states of affairs that help us keep score and determine who wins or loses the game.

Constitutive Rules: These are the rules that determine how the prelusory goal is to be attained. According to Suits, these rules set up artificial obstacles that prevent the players from achieving the prelusory goal in the most straightforward and efficient manner. For example, the most efficient and straightforward way to get a dimpled ball in a hole would probably be to pick up the ball and drop it directly in the hole. But the constitutive rules of golf do not allow you to do this. You have to manipulate the ball through the air and along the ground using a set of clubs, in a very particular constrained environment. These artificial constraints are what make the game interesting.

Lusory Attitude: This is the psychological orientation of the game players to the game itself. In order for a game to work, the players have to accept the constraints imposed by the constitutive rules. This is an obvious point. Golf could not survive as a game if the players refused to use their clubs to get the ball into the hole.
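Purely as an illustration (the representation is my own, not Suits’s), the three features can be modelled as a simple data structure, with an activity counting as a game only when all three are present:

```python
# A toy model of Suits's three-part analysis of games (illustrative only).

from dataclasses import dataclass
from typing import List


@dataclass
class Game:
    name: str
    prelusory_goal: str            # a state of affairs intelligible apart from the game
    constitutive_rules: List[str]  # artificial obstacles to the most efficient means
    lusory_attitude: bool = True   # players accept the rules for the sake of play

    def is_game(self) -> bool:
        # On Suits's analysis, all three features must be present.
        return bool(self.prelusory_goal) and bool(self.constitutive_rules) and self.lusory_attitude


golf = Game(
    name="golf",
    prelusory_goal="the ball ends up in the hole",
    constitutive_rules=["the ball may only be advanced by striking it with a club"],
)
print(golf.is_game())  # True

# Someone who simply picks up the ball and drops it in the hole has abandoned
# the lusory attitude, and so is no longer playing golf.
cheat = Game("golf", "the ball ends up in the hole",
             ["the ball may only be advanced by striking it with a club"],
             lusory_attitude=False)
print(cheat.is_game())  # False
```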


This three-part analysis of games has struck many as both illuminating and (in broad brush) correct. We could quibble, but let’s accept it for now. The question then becomes: can a world in which we have nothing to do but play games (so defined) provide the basis for a flourishing life? Maybe. Suits himself seems to have thought it would be the best possible life. But Suits was notoriously esoteric in his defence of this claim. His book on the topic, The Grasshopper, is an allegorical dialogue, which discusses games in the context of a future of technological perfection, but doesn’t present a clearcut argument. It is also somewhat equivocal and uncertain in its final views, which is what you would expect from a good philosophical dialogue. This makes for good reading, but not good arguing. So this is where we need to turn to the work of Thomas Hurka. Taking on board Suits’s analysis, Hurka argues that games are a way of realising two important kinds of value.

The first value concerns the structure of means-end reasoning (or ‘practical’ reasoning if you prefer). Means-end reasoning is all about working out the most appropriate course of action for realising some particular goal. A well-designed game allows for some complexity in the relationship between means and ends. Thus, when one finally attains those ends, there is a great sense of achievement involved (you have overcome the obstacles established by the rules of the game). This sense of achievement, according to Hurka, is an important source of value. And games are good because they provide a pure platform for realising higher degrees of achievement.

An analogy helps to make the argument. Compare theoretical reasoning with practical reasoning. In theoretical reasoning, you are trying to attain true insights about the structure of the world around you. This enables you to realise a distinct value: knowledge. But this requires something more than the mere description of facts. You need to identify general laws or principles that help to explain those facts. When you succeed in identifying those general laws or principles, you will have attained a deep level of insight. This has more value than mere description. For example, when Newton identified his laws of gravity, he provided overarching principles that could explain many distinct facts. This is valuable in a way that simply describing facts about objects in motion is not.

The point here is that in theoretical reasoning there is extra value to knowledge that is explanatorily integrated. Hurka argues that the parallel to knowledge in the practical domain is achievement. There is some good to achievement of all kinds, but there is greater good in achievement that involves some means-end complexity. The more obstacles you have to overcome, the more achievement you have. Hurka illustrates the point using the diagrams I have reconstructed below. They illustrate the depth and complexity of insight and achievement that can be acquired in both theoretical and practical domains.
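Hurka’s idea that achievement deepens with means-end complexity can also be given a rough computational gloss. The following toy model (my own, loosely inspired by Hurka’s diagrams, and not anything he presents) treats an achievement as a tree of sub-goals and measures the layers of obstacles between the agent and the end:

```python
# Illustrative only: achievement as a goal-tree whose 'depth' counts the
# layers of means (obstacles) that must be worked through to reach the end.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Goal:
    description: str
    subgoals: List["Goal"] = field(default_factory=list)


def depth(goal: Goal) -> int:
    """Number of layers in the means-end hierarchy rooted at this goal."""
    if not goal.subgoals:
        return 1
    return 1 + max(depth(g) for g in goal.subgoals)


# Dropping the ball in the hole by hand: a one-step 'achievement'.
trivial = Goal("ball in hole")

# Golf interposes layers of means between the player and the very same end.
golf = Goal("ball in hole", [
    Goal("reach the green", [
        Goal("drive from the tee"),
        Goal("play an approach shot"),
    ]),
    Goal("sink the putt"),
])

print(depth(trivial), depth(golf))  # 1 3
```

On Hurka’s view, the deeper tree represents the greater achievement, even though both trees terminate in exactly the same prelusory goal.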





The second source of value in game-playing has to do with Aristotle’s distinction between two types of activity: energeia and kinesis (this is how the distinction is described in Hurka - I’m not an expert on Aristotelian metaphysics, but there are related distinctions in Aristotle’s work, e.g. praxis vs poiesis). Energeiai are activities that are all about process. Aristotle viewed philosophy and self-examination as being of this sort: a constant process of questioning and gaining insight that never bottoms out in some goal or end state. Kineseis are activities that are all about goals or end states. Aristotle thought that process-related activities were ultimately better than goal-related activities, because the value of a kinesis is always trumped by or subordinate to its goal (i.e. it is not good in itself). This is why Aristotle advocated the life of contemplation and philosophising. Such a life would be one in which the activity is an end in itself (I spoke about this before).

At first glance, it would seem like games don’t fit neatly within this Aristotelian framework. They are certainly goal-directed activities (the prelusory goal is essential to their structure). And so this makes them look like kineseis. But these goals are essentially inconsequential. They have no deeper meaning or significance. As a result, the game is really all about process. It is about finding ways to overcome the artificial obstacles established by the constitutive rules. As Hurka puts it, games are consequently excellent platforms for attaining a particularly modern conception of value (one found in the writings of existentialists). They are activities directed at some external end, but the internal process is the sole source of value. Indeed, there is a sense in which they are an even purer way of achieving Aristotle’s ideal. The problem with Aristotle’s suggestion that the best life is the life of intellectual virtue is that intellectual activity often does have goals lurking in the background (e.g. attaining some true insight). There is always the risk that these goals trump the inherent value of the intellectual process. With games, you never have that risk. The goals are valueless from the get-go. Purely procedural goods can really flourish in the world of games.

To sum up, a life filled with games does allow for certain forms of flourishing. Two are singled out in Hurka’s analysis. First, games allow people to attain the good of achievement (overcoming obstacles to goals); better games add the right amount of complexity and difficulty to the process, and thereby enable deeper levels of achievement. Second, games allow the inherent value of processes to flourish in the absence of trumping external goods. Hence, we can revel purely in exercising the physical, cognitive and emotional skills needed to overcome the obstacles within the game.


3. Is this the utopia we've been looking for?
But is this enough? Again, Bernard Suits certainly thought so. He thought the game-playing life was one of supreme value. Hurka is more doubtful. While he accepts that the game-playing life allows for some flourishing, he still thinks it is of a weaker or inferior sort. To quote:

Now, because game-playing has a trivial end-result, it cannot have the additional intrinsic value that derives from instrumental value. This implies that excellence in games, though admirable, is less so than success in equally challenging activities that produce a great good or prevent a great evil. This seems intuitively right: the honour due athletic achievement for themselves is less than that due the achievements of great political reformers or medical researchers. 
(Hurka 2006)

This suggests a retreat to the vision of meaning I outlined earlier in this post, i.e. truly meaningful activity must be directed toward the Good, the True and the Beautiful. The problem is that even if this vision is right, there is the risk that advances in automating technologies cut us off from these more valuable activities. We may need to make do with games.

But perhaps this should not cause us despair. In many ways, this is a plausible vision of what a utopian world would look like. If you think about it, the other proposed sources of meaning (like the Good and the True) make most sense in an imperfect world. It is because people suffer or lack basic goods and services that we need to engage in moral projects that improve their well-being. It is because we are epistemically impaired that we need to pursue the truth. If we lived in a world in which those impairments had been overcome, the meaning derived from those activities would no longer make sense. The external goods would be available to all. In such a world, we would expect purely procedural or instrumental goods to be the only game in town.

And what is a world devoid of suffering, impairment and limitation? Surely it is a utopia?

Monday, November 16, 2015

Is Anyone Competent to Regulate Artificial Intelligence?




Artificial intelligence is a classic risk/reward technology. If developed safely and properly, it could be a great boon. If developed recklessly and improperly, it could pose a significant risk. Typically, we try to manage this risk/reward ratio through various regulatory mechanisms. But AI poses significant regulatory challenges. In a previous post, I outlined eight of these challenges. They were arranged into three main groups. The first consisted of definitional problems: what is AI anyway? The second consisted of ex ante problems: how could you safely guide the development of AI technology? And the third consisted of ex post problems: what happens once the technology is unleashed into the world? They are depicted in the diagram above.

In outlining these problems, I was drawing from the work of Matthew Scherer and his soon-to-be-published article “Regulating Artificially Intelligent Systems: Risks, Challenges, Competencies and Strategies”. Today I want to return to that article and consider the next step in the regulatory project. Once we have a handle on the basic problems, we need to consider who might be competent to deal with them. In most Western countries, there are three main regulatory bodies:

Legislatures: The body of elected officials who enact general laws and put in place regulatory structures (e.g. in the US the Houses of Congress; in the UK the Houses of Parliament; in Ireland the Houses of the Oireachtas).

Regulatory Agencies: The body of subject-area specialists, established through legislation, and empowered to regulate a particular industry or social problem, often by creating, investigating and enforcing regulatory standards (there are many examples, e.g. the US Food and Drug Administration, the UK Financial Conduct Authority, and the Irish Planning Board (An Bord Pleanala)).

Courts: The judges and other legal officials tasked with arguing, adjudicating and, ultimately, enforcing legal standards (both civil and criminal).

To these three bodies, you could perhaps add “The Market”, which can enforce certain forms of discipline on private commercial entities, and also internal regulatory bodies within those commercial entities (though such bodies are usually forced into existence by law). For the purposes of this discussion, however, I’ll be sticking to the three bodies just outlined. The question is whether any of these three bodies is competent to regulate the field of artificial intelligence. This is something Scherer tries to answer in his article. I’ll follow his analysis in the remainder of this post, but where Scherer focuses entirely on the example of the United States I’ll try to be a little more universal.

Before I get underway, it is worth flagging two things about AI that could affect the competency of anyone to regulate its development and deployment. The first is that AI is (potentially) a rapidly advancing technology: many technological developments made over the past 50 years are now coming together in the form of AI. This makes it difficult for regulatory bodies to ‘keep up’. The second is that advances in AI can draw on many different fields of inquiry, e.g. engineering, statistics, linguistics, computer science, applied mathematics, psychology, economics and so on. This makes it difficult for anyone to have the relevant subject-area expertise.


1. The Competencies of Legislatures
Legislatures typically consist of elected officials, appointed to represent the interests of particular constituencies of voters, with the primary goal of enacting policy via legislation. Legislatures are set up slightly differently around the world. For example, in some countries, there are non-elected legislatures working in tandem with elected legislatures. In some countries, lobbyists have significant influence over legislators; in others this influence is relatively weak. In some countries, the executive branch of government effectively controls the legislature; in others the executive is an entirely distinct branch of government.

Scherer argues that three things must be remembered when it comes to understanding the regulatory role of a legislature:

Democratic Legitimacy: The legislature is generally viewed as the institution with the most democratic legitimacy, i.e. it is the institution that represents the people’s interests and answers directly to them. Obviously, the perceived legitimacy of the legislature can wax and wane (e.g. it may wane when lobbying power is excessive). Nevertheless, it will still tend to have more perceived democratic legitimacy than the other regulatory bodies.

Lack of Expertise: Legislatures are generally made up of career politicians. It is very rare for these career politicians to have subject matter expertise when it comes to a proposed regulatory bill. They will have to rely on judgments from constituents, advisors, lobbyists and experts called to give evidence before a legislative committee.

Delegation and Oversight: Legislatures have the ability to delegate regulatory power to other agencies. Sometimes they do this by creating an entirely new agency through a piece of legislation. Other times they do so by expanding or reorganising the mission of a pre-existing agency. The legislature then has the power to oversee this agency and periodically call it to account for its actions.

What does all this mean when it comes to the AI debate? It means that legislatures are best placed to determine the values and public interests that should go into any proposed regulatory scheme. They are directly accountable to the people and so they can (imperfectly) channel those interests into the formation of a regulatory system. Because they lack subject matter expertise, they will be unable to determine particular standards or rules that should govern the development and deployment of AI. They will need to delegate that power to others. But in doing so, they could set important general constraints that reflect the public interest in AI.

There is nothing too dramatic in this analysis. This is what legislatures are best-placed to do in virtually all regulatory matters. That said, the model here is idealistic. There are many ways in which legislatures can fail to properly represent the interests of the public.


2. The Competencies of Regulatory Agencies
Regulatory agencies are bodies established via legislation and empowered to regulate a particular area. They are quite variable in terms of structure and remit. This is because they are effectively designed from scratch by legislatures. In most legal systems, there are some general constraints imposed on possible regulatory structures by constitutional principles (e.g. a regulatory agency cannot violate or undermine constitutionally protected rights). But this still gives plenty of scope for regulatory innovation.

Scherer argues that there are four things about regulatory agencies that affect their regulatory competence:

Flexibility: This is what I just said. Regulatory agencies can be designed from scratch to deal with particular industries or social problems. They can exercise a variety of powers, including policy-formation, rule-setting, information-collection, investigation, enforcement, and sanction. Flexibility often reduces over time. Most of the flexibility arises during the ‘design phase’. Once an agency comes into existence, it tends to become more rigid for both sociological and legal reasons.

Specialisation and Expertise: Regulatory agencies can appoint subject-matter experts to assist in their regulatory mission. Unlike legislatures who have to deal with all social problems, the agency can keep focused on one mission. This enhances their expertise. After all, expertise is a product of both: (a) pre-existing qualification/ability and (b) singular dedication to a particular task.

Independence and Alienation: Regulatory agencies are set up so as to be independent from the usual vagaries of politics. Thus, for example, they are not directly answerable to constituents and do not have to stand for election every few years. That said, the independence of agencies is often more illusory than real. Agencies are usually answerable to politicians and so (to some extent) vulnerable to the same forces. Lobbyists often impact upon regulatory agencies (in some countries there is a well-known ‘revolving door’ for staff between lobbying firms, private enterprises, and regulatory agencies). Finally, independence can come at the price of alienation, i.e. a perceived lack of democratic legitimacy.

The Power of Ex Ante Action: Regulatory agencies can establish rules and standards that govern companies and organisations when they are developing products and services. This allows them to have a genuine impact on the ex ante problems in any given field. This makes them very different from the courts, who usually only have ex post powers.


What does this mean for AI regulation? Well, it means that a bespoke regulatory agency would be best placed to develop the detailed, industry-specific rules and standards that should govern the research and development of AI. This agency could appoint relevant experts who could further develop their expertise through their work. This is the only way to really target the ex ante problems highlighted previously.

But there are clearly limitations to what a bespoke regulatory agency can do. For one thing, the fact that regulatory structures become rigid once created is a problem when it comes to a rapidly advancing field like AI. For another, because AI potentially draws on so many diffuse fields, it may be difficult to recruit an appropriate team of experts. Relevant insights that catapult AI development into high gear may come from unexpected sources. Furthermore, people who have the relevant expertise may be hoovered up by the enterprises they are trying to regulate. Once again, we may see a revolving door between the regulatory agency and the AI industry.


3. The Competencies of Courts
Courts are judicial bodies that adjudicate on particular legal disputes. They usually have some residual authority over regulatory agencies. For instance, if you are penalised by a regulatory agency you will often have the right to appeal that decision to the courts. This is a branch of law known as administrative law. Although legal rules vary, most courts adopt a pretty deferential attitude toward regulatory agencies. They do so on the grounds that the agencies are the relevant subject-matter experts. That said, courts can still use traditional legal mechanisms (e.g. criminal law or tort law) to resolve disputes that may arise from the use of a technology or service.

Scherer focuses on the tort law system in his article. So the scenario lurking in the background of his analysis is a case in which someone is injured or harmed by an AI system and tries to sue the manufacturer for damages. He argues that four things must be kept in mind when assessing the regulatory competence of the tort law system in cases like this:

Fact-Finding Powers: Rules of evidence have been established that give courts extensive fact-finding powers in particular disputes. These rules reflect both a desire to get at the truth and to be fair to the parties involved. This means that courts can often acquire good information about how products are designed and safety standards implemented, but that information is tailored to a particular case and not to what happens in the industry more generally.

Reactive and Reactionary: Courts can only intervene and impose legal standards after a problem has arisen. This can have a deterrent effect on future activity within an industry. But the reactive nature of the court also means that it has a tendency to be reactionary in its rulings. In other words, courts can be victims of “hindsight bias” and assume that the risk posed by a technology is greater than it really is.

Incrementalist: Because courts only deal with individual cases, and because the system as a whole moves quite slowly, it can really only make incremental changes.

Misaligned Incentives: In common law systems, the litigation process is adversarial in nature: one side prosecutes a claim; the other defends. Lawyers only take cases to court that they think can be won. They call witnesses that support their side. In this, they are concerned solely with the interests of their clients, not with the interests of the public at large. That said, in some countries class actions are possible, which allow many people to bring the same type of case against a defendant. This means some cases can represent a broader set of interests.

What does all this mean for AI regulation? Well, it suggests that the court system cannot deal with any of the ex ante problems alluded to earlier on. It can only deal with ex post problems. Furthermore, in dealing with those problems, it may move too slowly to keep up with the rapid advances in the technology, and may tend to overestimate the risks associated with the technology. If you think those risks are great (bordering on the so-called “existential” risk-category proposed by Nick Bostrom), this reactionary nature might be a good thing. But, even still, the slowness of the system will count against it. Scherer thinks this tips the balance decisively in favour of some specially constructed regulatory agency.




4. Conclusion: Is there hope for regulation?
Now that we have a clearer picture of the regulatory ecosystem, we can think more seriously about the potential for regulation in solving the problems of AI. Scherer has a proposal in his article, sketched out in some reasonable detail. It involves leveraging the different competencies of the three bodies. The legislature should enact an Artificial Intelligence Development Act. The Act should set out the values for the regulatory system:

[T]o ensure that AI is safe, secure, susceptible to human control, and aligned with human interests, both by deterring the creation of AI that lack those features and by encouraging the development of beneficial AI that include those features. 
(Scherer 2015)

The Act should, in turn, establish a regulatory agency with responsibility for the safe development of AI. This agency should not create detailed rules and standards for AI, and should not have the power to sanction or punish those who fail to comply with its standards. Instead, it should create a certification system, under which agency members can review and certify an AI system as “safe”. Companies developing AI systems can volunteer for certification.

You may wonder why any company would bother to do this. The answer is that the Act would also create a differential system of tort liability. Companies that undergo certification will have limited liability in the event that something goes wrong. Companies that fail to undergo certification will face strict liability standards in the event of something going wrong. Furthermore, this strict liability system will be joint and several in nature: any entity in the design process could face full liability. This creates an incentive for AI developers to undergo certification, whilst at the same time not overburdening them with compliance rules.

In a way, this is a clever proposal. It tries to balance the risks and rewards of AI. The belief is that we shouldn’t stifle creativity and development within the sector, and that we should encourage safe and beneficial forms of AI. My concern is that this system misses some of the unique properties of AI that make it such a regulatory challenge. In particular, the proposal seems to ignore the difficulty of (a) finding someone to regulate and (b) the control problem.

This is ironic given that Scherer was quite good at outlining those challenges in the first part of his article. There, he noted how AI developers need not be large, well-integrated organisations based in a single jurisdiction. But if they are not, then it may be difficult to ‘reach’ them with the proposed regulatory regime. I am guessing the joint and several liability proposal is designed to address this problem as it creates an incentive for anyone involved in the process to undergo certification, but it assumes that diffuse networks of developers have the end goal of producing a ‘consumer’ type device. This may not be true.

Furthermore, earlier in the article, Scherer noted how AI systems can do things that are beyond the control or anticipation of their original designers. This creates liability problems but these problems can be addressed through the use of strict liability standards. At the same time, however, it also creates problems in the certification process. Surely if AI systems can act in unplanned and unanticipated ways, it follows that members of a putative regulatory agency would not be well-equipped to certify an AI system as “safe”? That could be concerning. The proposed system would probably be better than nothing, and we shouldn’t make the perfect the enemy of the good, but anyone who is convinced of the potential for AI to pose an “existential threat” to humanity is unlikely to think that regulation of this sort can play a valuable role in mitigating that risk.

Scherer is aware of this. He closes by stating that his goal is not to provide the final word but rather to start a conversation on the best legal mechanisms for managing AI risk. That’s certainly a conversation that needs to continue.

Saturday, November 14, 2015

Blockchain Technology, Smart Contracts and Smart Property






Blockchain technology is at the heart of cryptocurrencies like Bitcoin. Most people have heard of Bitcoin and some are excited by the prospect it raises of a decentralised, stateless currency/payment system. But this is not the most interesting thing about Bitcoin. It is the blockchain technology itself that is the real breakthrough. It not only provides the foundation for a currency and payment system; it also provides the foundation for new ways of organising and managing basic social relationships. This includes legal relationships such as those involved in contractual exchange and proprietary ownership. The most prominent expression of this potential comes in the shape of Ethereum, an open source platform that allows developers to use blockchains for whatever purpose they see fit.

This might sound a little abstract and confusing. Blockchain technology is exciting, but many people are put off by the technical and abstruse concepts underpinning it. Proponents of the technology talk about strange things like cryptographic hash functions and public key encryption. They also refer to obscure mathematical puzzles like the Byzantine Generals problem in order to explain how it works. This is daunting. Many wonder whether they have to master this obscure conceptual vocabulary in order to understand what all the fuss is about.

If they want to engage with the technology at the deepest levels, they do. But to gain a high level understanding of how it works, and to share some of the excitement of its proponents, they don’t. My goal in this post is to provide that high-level understanding, and to explain how the technology could provide an underpinning for things like smart contracts and smart property. With luck, this will enable people to see the potential for this technology and will pique their interest in its political, legal and ethical implications.

I appreciate that there are many other articles out there that try to do the same thing. I am merely adding one more to the pile. I do so in the hope that it may prove useful to some, but also in the hope that it helps me to better understand the phenomenon. After all, most writing is an exercise in self-explanation. It is through communication that we truly begin to understand.

The remainder of this post is divided into three main sections. The first talks about the ‘Trust Problem’ that motivates the creation of the blockchain. The second tries to provide a detailed but non-mathematical description of how the blockchain works to solve the trust problem. The third explains how the technology could support a system of smart contracts and smart property.


1. The Trust Problem and the Motivation for the Blockchain
All human societies have a trust problem. In order to survive and make a living, we must coordinate and cooperate with others. In doing so, there is potential for these others to mislead, deceive and disappoint. To ensure successful ongoing cooperation, we need to be able to trust each other. Many societies have invented elaborate rituals, laws and governance systems to address this trust problem. At its most fundamental level, blockchain technology tries to do the same.

To illustrate, let’s use the example of a currency and payment system. This seems appropriate given the origins of blockchain technology in the development of such systems. I’m going to use the example of a real-world currency system: the currency used (historically) on the Island of Yap. Some people will be familiar with this example as it is beloved by economists. The only problem is that the example has become heavily mythologised and abstracted from the actual historical reality. I’m not an expert on that history, so what I am about to describe is also likely to be highly mythologised and simplified. I hope that’s okay: the goal here is to explain the rationale behind blockchain technology, not to write an accurate monetary history of the Island of Yap.

Anyway, with that caveat in mind, the Islanders of Yap had an unusual monetary system. They did not use coins as money. Instead, they used stone discs of varying sizes. These discs were mined from another island, several hundred miles away. This scarcity ensured that the discs that had been mined and brought back to the island retained their value over time. The picture below provides an example and illustrates just how large these discs could get. People would exchange these large discs in important transactions. But obviously the islanders could not just hand the discs to one another to finalise the transaction. The discs remained fixed in place. In order to know who-owned-what, the islanders needed to keep some kind of ledger, which recorded transactional data and allowed them to figure out which stone disc belonged to which islander.




One way to do this would have been to use a trusted third party ledger. In other words, to find some respected tribal elder or chief and make it a requirement that all transactions be logged with him/her. That way, whenever a dispute arose, the islanders could go to the elder and he/she could resolve the dispute. The elder could confirm that Islander A really does own the disc and is entitled to exchange it with Islander B, or vice versa. This is illustrated in the diagram below.



We make use of such trusted third party systems every day. Indeed, modern political, legal and monetary systems are almost entirely founded upon them. When you make a payment via credit or debit card, that transaction must first be logged with a bank or credit card company, who will verify that you have the necessary funds and that the payment came from you, before the payment is finally confirmed. The same goes for disputes over legal rights. Courts function as trusted third parties who resolve disputes (ultimately via the threat of violence) about contractual rights and property rights (to give just two examples).

But that is not the only way to solve the trust problem. Another way would be to use a distributed consensus ledger. In other words, instead of logging transactional data with a trusted third party, you could require all the islanders to keep an ongoing, updated, record of transactions. Then, when a dispute arises, you go with either the majority or unanimous view of this network of ledger-keepers. As far as I am aware (and this is where my caveat about historical accuracy needs to be borne in mind) this is what the Islanders of Yap seem to have done. Each islander kept a mental record of who owned what, and this distributed mental record could be used to resolve transactional disputes.
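The Yap arrangement can be sketched as a simple majority vote over independently kept records. The names and data structures below are purely illustrative, not drawn from any real system:

```python
from collections import Counter

def resolve_dispute(ledgers, disc_id):
    """Poll every islander's ledger and settle ownership of a disc
    by majority view; return None if there is no clear majority."""
    votes = Counter(ledger.get(disc_id) for ledger in ledgers)
    owner, count = votes.most_common(1)[0]
    return owner if count > len(ledgers) // 2 else None

# Three islanders each keep their own mental record of who owns what.
ledgers = [
    {"disc-1": "Islander A", "disc-2": "Islander B"},
    {"disc-1": "Islander A", "disc-2": "Islander B"},
    {"disc-1": "Islander A", "disc-2": "Islander C"},  # one faulty memory
]
resolve_dispute(ledgers, "disc-2")  # the majority view ('Islander B') wins
```

The point of the sketch is that no single record is authoritative: one islander can misremember (or lie) without corrupting the system, so long as the majority agrees.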




Blockchain technology follows this distributed consensus method. It tries to create a computer-based protocol for resolving the trust problem through a distributed and publicly verifiable ledger. This is known as the blockchain. We can define it in the following way (from Wright and De Filippi, 2015):

Blockchain = A distributed, shared, encrypted database which serves as an irreversible and incorruptible public repository of information.


2. How the Blockchain is Built
But how exactly does the technology build the ledger? This is where things can get quite technical. In essence, the blockchain works by leveraging the networking capabilities of modern computers and by using a variety of cryptographic tools for verifying transactional data.

A network is established consisting of many different computers located in many different places. Each computer is a node in the network. You could have one node in South Africa, one in England, one in France, one in the USA, one in Yemen, one in Australia and so on. The network can, in theory, be distributed across the entire world. This network is then used for logging, recording and verifying transactional information. Every computer on the network keeps a record of all transactions taking place on the network. This record is known as the blockchain. It is comprehensive, permanent, public and distributed across all nodes in the network. The network can thus function as a decentralised authority for managing and maintaining records of transactions.
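The “permanent” character of this record comes from the way blocks are chained together by cryptographic hashes: each block stores a hash of its predecessor, so tampering with an old block breaks every link after it. Here is a minimal sketch of that structure in Python (illustrative only; this is not Bitcoin’s actual block format):

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's full contents, including the previous block's hash."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    """Append a block that points at the hash of the current last block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def is_valid(chain):
    """Every block must point at the hash of the block before it."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
add_block(chain, [{"from": "A", "to": "B", "amount": 100}])
add_block(chain, [{"from": "B", "to": "C", "amount": 40}])
is_valid(chain)   # the chain validates

chain[0]["transactions"][0]["amount"] = 999  # try to rewrite history...
is_valid(chain)   # ...and the chain no longer validates
```

Because every node holds a copy of the chain, a forger would have to rewrite not just one block but every subsequent block, on a majority of the network’s computers.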

It is easy enough to see how this works in the case of two people exchanging money. Suppose Person A wants to transfer 100 bitcoin (or whatever) to Person B. Person A has a digital ‘wallet’ which contains a record of how much bitcoin they currently own. They sign into this and agree to transfer a certain sum to Person B. They do this by broadcasting to the network that they wish to transfer the money to Person B’s digital wallet. Details of this proposed transaction are then added to a ‘block’ of transactional data that is stored across the network. The ‘block’ is like a temporary record that is in the process of being added to the permanent record (the blockchain). The ‘block’ represents all the transactions that took place on the network during a particular interval of time. In the case of bitcoin, the block includes information about all the transactions taking place in a ten minute interval.

At this stage, the transaction between A and B has not been verified and does not form part of the permanent distributed ledger. What happens next is that once all the data has been collected for a given interval of time, the network works on verifying the details in those transactions (i.e. does A really have that amount of money to send to B? Did A really initiate the transaction? etc). Each computer on the network participates in a competition to verify the transactional data. The winner of this competition gets to add the ‘block’ to the ‘blockchain’ (i.e. they get to update the ledger). When they do so, they broadcast their ‘proof of work’ to the rest of the network. This shows the network how the winning computer verified the transactional data. The other computers on the network then check that proof of work and confirm that the record is correct. This is where the ‘distributed consensus’ comes in. It is only if the winning ‘solution’ is confirmed by the majority that it becomes a permanent part of the blockchain.

This verification process is technically tricky. I have given a simple descriptive account. For the full picture, you would need to engage with the cryptographic concepts underpinning it.
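The “competition” described above is what Bitcoin calls proof of work: each node searches for a nonce that, hashed together with the block’s data, produces a digest with a required number of leading zeros. Finding the nonce is expensive; checking someone else’s answer takes a single hash. A toy sketch (real difficulty targets are far harsher than four zeros):

```python
import hashlib

def proof_of_work(block_data, difficulty=4):
    """Brute-force search for a nonce whose hash starts with
    `difficulty` leading zeros. This is the costly step."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

def verify(block_data, nonce, difficulty=4):
    """Any node can check a claimed solution with one hash."""
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce, digest = proof_of_work("A sends 100 to B")
verify("A sends 100 to B", nonce)  # the rest of the network confirms cheaply
```

This asymmetry (hard to solve, trivial to check) is what lets the ‘winning’ computer broadcast its proof of work and have the rest of the network confirm it almost instantly.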

There are a couple of interesting things about this, over and above its ‘distributed consensus’ nature. The first has to do with the role of trust. Some people refer to the blockchain as a ‘trustless’ system. I think people say this because it is the computer protocol and its combination of cryptographic verification methods that underpin the ledger. Thus, when you are using the system, you do not have to trust or place faith in another human being. This makes it seem very different from, say, the situation faced by the islanders of Yap, who really do have to trust one another when using their distributed ledger. But clearly there is trust of a kind involved in the process. You have to trust the technology, and the theory underpinning it. Maybe that trust is justified, but it still seems to be there. Also, since most people lack the technical know-how to fully understand the system, there is a stronger sense of trust involved for most users: they have to trust the technical experts who establish and maintain the network.

The other interesting thing has to do with the incentive to maintain the network. You may wonder why people would be willing to give up their computing resources to maintain such an elaborate system. The technologically-inclined might do so initially out of curiosity, or maybe some sense of idealism, but to have a widespread network you probably need something more enticing. The solution used by most blockchain systems is to reward members of the network with some digital token that can be used to conduct exchanges on the network. In the case of bitcoin, the winner of the verification competition receives newly minted bitcoin for their troubles. This makes it attractive for people to join and maintain the network. Bitcoin adopts a particular economic philosophy in its reward system: the winner takes all the newly-minted bitcoin. This doesn’t have to be the case. You could adopt a more egalitarian or socialist system in which all members of the network share whatever token of value is being used.
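The contrast between Bitcoin’s winner-takes-all scheme and a more egalitarian alternative is easy to state in code. This is a purely hypothetical sketch; no existing network necessarily works this way:

```python
def distribute_reward(balances, winner, reward, scheme="winner_takes_all"):
    """Credit the block reward either to the winning node alone
    (Bitcoin's approach) or split it evenly across all maintainers."""
    if scheme == "winner_takes_all":
        balances[winner] += reward
    elif scheme == "egalitarian":
        share = reward / len(balances)
        for node in balances:
            balances[node] += share
    return balances

balances = {"node-1": 0.0, "node-2": 0.0, "node-3": 0.0}
distribute_reward(balances, "node-2", 12.0)
# → {'node-1': 0.0, 'node-2': 12.0, 'node-3': 0.0}
```

The reward rule is just another part of the protocol, so in principle it is as open to redesign as any other feature of the system.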


3. Smart Contracts and Smart Property
To this point, I have stuck with the example of bitcoin and illustrated how it uses blockchain technology. But as I noted at the outset, this is merely one use-case. The really interesting thing about blockchain technology is how it can be used to manage and maintain other kinds of transactional data. In essence, the blockchain is a decentralised database that can maintain a record of any and all machine-to-machine communications. And since smart devices, involving machine-to-machine communication, are now everywhere, this makes the blockchain a potentially pervasive technology. Smart contracts and smart property are two illustrations of this potential. I’ll try to explain both.

A contract is an agreement between two or more people involving conditional commitments, i.e. “If you do X for me, I will do Y for you”. A legal contract makes those conditional commitments legally enforceable. If you fail to do X for me, I can take you to court and have you ordered to do X, or ordered to pay me compensation for failing to do X. A smart contract is effectively the same, only you use some technological infrastructure to ensure that conditions have been met and/or to automatically enforce commitments. This can be done using blockchain technology because the distributed ledger system can be used to confirm whether contractual conditions have been met.

Suppose I am selling drugs illegally via the (now-defunct) Silk Road. We agree that you will pay me X bitcoin if you receive the drugs by a particular date. That condition could be built into the initial transaction that is logged on the blockchain platform. In this case, the system will only release the bitcoin to me if the relevant condition is met. How will it know? Well, suppose the drugs are of a certain weight and have to be delivered to a certain locker that you use for these purposes. The locker is equipped with a ‘smart’ weighing scales. Once a package of the right weight is delivered to the locker, the weighing scales will broadcast the fact to the network, which then confirms that the relevant contractual condition has been met. This results in the money being released to me.

Notice how the contract here is enforced automatically. I do not have to wait for you to release the bitcoin to me and you do not have to worry about losing your bitcoin and never receiving the drugs. The relevant conditions are coded into the original smart contract and once they are met the contract is automatically executed. There is no need for recourse to the courts (though you could build in conditional recourse to courts if you liked). The increasing number of ‘smart’ devices makes smart contracts enticing. Why? Because these devices allow for more ways in which to record, implement, and confirm the performance of relevant contractual conditions. The advantage of the blockchain is that it provides a way to manage and coordinate these devices without relying on trusted third parties.
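Stripped of the blockchain plumbing, the logic of such a contract is just a conditional commitment plus an automatic trigger. Here is a toy sketch, with hypothetical names throughout (`EscrowContract`, `report_delivery`); a real smart contract would run on a platform like Ethereum, not in ordinary Python:

```python
class EscrowContract:
    """Holds the buyer's payment in escrow and releases it to the
    seller automatically once the delivery condition is confirmed."""

    def __init__(self, buyer_funds, price, expected_weight_kg):
        assert buyer_funds >= price, "buyer cannot fund the contract"
        self.escrow = price                    # locked at contract creation
        self.expected_weight_kg = expected_weight_kg
        self.seller_paid = 0
        self.settled = False

    def report_delivery(self, measured_weight_kg):
        """Called by the 'smart' weighing scales in the locker."""
        if not self.settled and measured_weight_kg >= self.expected_weight_kg:
            self.seller_paid = self.escrow     # funds release automatically
            self.escrow = 0
            self.settled = True
        return self.settled

contract = EscrowContract(buyer_funds=150, price=100, expected_weight_kg=2.0)
contract.report_delivery(1.2)   # underweight package: nothing happens
contract.report_delivery(2.0)   # condition met: funds released
contract.seller_paid            # 100
```

Note that once the contract is created, neither party can intervene: the payment is locked, and only the sensor event can release it. That is the sense in which enforcement is automatic.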

Smart property is really just a variation on this. Tangible, physical property in the real world (e.g. cars, houses, cookers, fridges etc) can have smart technology embedded in them. Indeed, this is already true for many cars. Information about these physical objects can then be registered on the blockchain along with details of who stands in what type of ownership relationship to those physical objects. Smart keys could then be used to facilitate ownership rights. So, for example, you might only be able to access and use a car if you had the right smart key stored on your phone. The same could be true for a smart-house. These keys can then be exchanged and the exchanges verified using the blockchain. The blockchain thus becomes a system for recording and managing property rights.

Hopefully, these two examples give some sense of the excitement surrounding blockchain technology.


4. Conclusion
To sum up, the blockchain is a distributed, publicly verifiable and encrypted ledger used for recording and updating transactional data. It helps to solve the trust problem associated with most forms of social cooperation and coordination by obviating the need for trusted third parties. The technology is exciting because it can be used to manage and maintain networks of smart devices. As such devices become more and more widespread, there is the potential for blockchain technology to become pervasive. I’ll try to explore some of the more philosophically and legally interesting questions this throws up in future posts.