
Friday, April 18, 2014

Does radical enhancement threaten our sense of self?



(Previous Entry, Series Index)

If we extended our lives by 200 years, or if we succeeded in uploading our minds to an artificial substrate, would we undermine our sense of personal identity? If so, would it be wiser to avoid such radical forms of enhancement? These are the questions posed in chapter 4 of Nicholas Agar’s book Truly Human Enhancement. Over the next two posts I’ll take a look at Agar’s answers. This is all part of my ongoing series of reflections on Agar’s book.

Agar’s main contention is that radical enhancement could indeed pose a serious threat to our personal identity and that this is something we should care about. Arguments about what does or does not pose a threat to identity often take the following form:


  • (1) Condition X is a necessary condition for personal identity to obtain.
  • (2) Y undermines or cancels condition X.
  • (3) Therefore, Y necessarily undermines personal identity.


Such arguments are part of the game played by philosophers to identify the necessary and sufficient conditions for the realisation or exemplification of certain properties and concepts. Agar does not wish to play this game. He is clear that he is not arguing that radical enhancement will necessarily undermine personal identity. He is arguing that it could, and that we would be wise to avoid that risk. This is really Agar's preferred mode of argumentation, as mentioned in the previous post.

To get a handle on Agar’s argument, we will need to do three things. First, we’ll need a (very) brief primer on the concept of personal identity and the sense in which that concept is invoked in Agar’s argument. Second, we’ll need to look at Agar’s argument that radical enhancement threatens autobiographical memory. And then third, we’ll need to consider how Agar’s argument can be interpreted in terms of a game we play with our future selves.


1. What is personal identity and why does it matter?
Who we are is a matter of great importance to most of us. We spend our lives developing a sense of self, a sense of purpose and direction, a sense of identity. Our identity is what binds us together, what makes us whole. But as Agar notes, there are at least two different senses of the word ‘identity’ in the philosophical debate:

The Metaphysical Sense: Identity is what makes me the same person as I was ten minutes ago. Identity is a one-to-one relationship. So X and Y are identical if and only if they are one and the same thing (hence why this is sometimes referred to as numerical identity).
The Evaluative Sense: Identity is what makes my continued existence meaningful, valuable or worthwhile. When asking questions about identity I am interested in what conditions or mishaps might make continued existence worthless or devoid of meaning. Identity, under this definition, is what matters to us in our survival (and, as Parfit famously argued, this need not require a one-to-one relationship).

There are many interesting philosophical questions about identity. And we could go back and forth forever on which sense of identity is the important one. Those are debates worth having. But we don’t need to have them here. Agar’s argument against radical enhancement works with either sense in mind.

In addition to the different senses of the word, there are also different accounts of what constitutes our identity (in either sense). The two leading ones are the psychological continuity account and the animalist account. According to the psychological continuity account, our identities are constituted by a set of temporally overlapping mental states. In other words, the reason why I am the same person I was ten minutes ago (or the reason why my self from ten minutes ago should care about who I am now) is that we share certain mental states: we have the same beliefs, desires, memories and so on. According to the animalist account, our identities are constituted by a continuity relationship between the biological organism that we are now and the one that we will be later. (This is often thought to allow for identity to be preserved in troubling cases, like that of a person in a persistent vegetative state.)

Agar works with the psychological continuity account in his argument. This seems appropriate to me, since it is arguably the most plausible theory of identity (particularly in the evaluative sense). Still, one might wonder whether his argument would work as well against the animalist account. I think it probably would. Indeed, depending on the nature of the radical enhancement, it might be easier to argue that identity is undermined on the animalist account. For example, if radical enhancement involves the destruction of the biological human form, and the uploading of the mind to a digital medium, then I think it is safe to say that biological continuity has been undermined.


2. How Radical Enhancement Might Undermine Personal Identity
So much for the conceptual framework. Now we must deal with Agar’s actual argument: how exactly would radical enhancement undermine or threaten our identities? To answer that, Agar appeals to the notion of autobiographical memory. This is the memory of personal events and details from our lives. It is like the narrative tale we tell ourselves about who we are, what has happened and why it is important.

According to modern theories of memory, remembering is a reconstructive process. My brain does not record my past life experiences like a video-recorder. Instead, it creates schemata which encode salient information. The act of remembering then fills out these schemata. But in order to fill them out, other cognitive resources must be drawn upon. For example, my ability to remember riding a bike this morning might rely on my having the requisite learned skill and background knowledge. It might also rely on my evaluative beliefs and desires at the present moment (e.g. my ongoing interest in good health, better bike riding etc.).

Disease can affect the reconstructive processes of autobiographical memory. Alzheimer’s is the example discussed by Agar. He refers, in particular, to the case of Ronald Reagan. It was said that, in his final years, Reagan forgot that he had been President of the US, a devastating loss of autobiographical memory. Agar speculates that this probably wasn’t because all the schemata for his presidential life were destroyed, but rather because he lost a lot of the background knowledge that would be needed to reconstruct those memories.

How does this relate to radical enhancement? Well, Agar wants to argue that radical enhancement could have a disruptive effect on the reconstructive process. If we radically enhance ourselves (either through biological or technological manipulation), our future radically enhanced selves are unlikely to actually forget about us. Indeed, modern recording technologies will probably make that impossible: the past will always be recoverable if they wish it to be.

The problem, instead, is that our future radically enhanced selves are likely to have very different evaluative frameworks. Things that seemed important or significant to us will seem trivial and inconsequential to them. My 120km bike ride this morning — a significant and meaningful achievement to me right now — will look like a walk in the park to my radically enhanced future self. He (or she or it) won’t deem it important enough to remember. We experience this to some extent ourselves right now: think about how rarely you have a good memory for the mundane details of your life.

Agar illustrates the problem by discussing a fictional example. The example comes from the late Iain M. Banks’s novel Matter. The novel was part of his Culture series, which frequently engaged with themes of radical enhancement. In the novel, a character named Anaplian — who comes from a world with sixteenth-century technology and culture — undergoes a series of radical enhancements. She is made significantly stronger and faster; she can sense radio waves; she can operate machinery through thought alone; and she can switch pain and fatigue on and off at will. After she undergoes all this, she develops a very ambivalent relationship to her past self. She cares less and less about who she was.

Within the novel, there are probably good reasons for this: her people were backward and patriarchal. But Agar suggests that even if they were not, radical enhancement will just tend to produce that feeling of disconnect because of the different evaluative frameworks. So his argument works a little bit like this:


  • (4) Our autobiographical memories are integral to our identities (metaphysical/evaluative): they are like a record of important events and experiences in our lives.
  • (5) Autobiographical remembering relies on an array of background cognitive resources to reconstruct the memories.
  • (6) Radical enhancement will so alter those background cognitive resources that we may no longer be able (or, rather, willing) to reconstruct those memories.
  • (7) Therefore, radical enhancement will undermine our identities.



3. A Dangerous Game with our Future Selves
I won’t say too much about the merits of this argument for now (I leave that until the next post), but here is a way to think about it that might be helpful.

Game theorists often look at the decision to start smoking as a game you play with your future self. When given the opportunity to reflect, broadly, on the shape of their lives, many people would prefer not to smoke, despite the short-term rewards they experience. This is because they prioritise and prefer their long-term health and well-being over their short-term desire to smoke. The problem is that we aren’t very good at prioritising long-term goals over short-term desires. We often undergo a process known as preference reversal: a point at which the short-term desire to smoke rises above the long-term desire for health and well-being.

Because of this phenomenon of preference reversal, it pays to think about the decision to start smoking as being akin to a game you play with your future self. This is depicted in the game tree below. Your present self gets a payoff of 0 for not smoking in the short-term, and a small reward from smoking in the short-term (say 1, but it doesn’t really matter for this game). In the long-term, your present self gets a payoff of 1 for not smoking, and a payoff of −1 for smoking (due to the health effects). Your future self, on the other hand, gets a payoff of 1 for smoking and −1 for not smoking (he, after all, is addicted and suffers the loss much more).

The Smoking Game
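
For readers who like to see the mechanics, here is a minimal sketch of the game in Python. The payoff numbers (0, 1 and −1) are the ones described above; the two-stage structure, and the hypothetical "quit" branch with its payoffs, are my own simplifications for illustration.

```python
# A minimal sketch of the smoking game described above: the present self
# decides whether to start, then the (addicted) future self decides
# whether to continue. Payoff numbers come from the post; the "quit"
# branch and its payoffs are hypothetical additions for illustration.

# Map each path through the tree to
# (present self's total payoff, future self's payoff).
PAYOFFS = {
    ("abstain",): (0 + 1, 0),            # no short-term reward, long-term health
    ("smoke", "continue"): (1 - 1, 1),   # short-term reward, long-term health loss
    ("smoke", "quit"): (1 + 1, -1),      # assumed: health recovered, but the
                                         # addicted future self suffers withdrawal
}

def future_self_move() -> str:
    """The addicted future self maximises its own payoff."""
    options = {move: PAYOFFS[("smoke", move)][1] for move in ("continue", "quit")}
    return max(options, key=options.get)

def present_self_move() -> str:
    """Backward induction: anticipate the future self before choosing."""
    smoke_payoff = PAYOFFS[("smoke", future_self_move())][0]
    abstain_payoff = PAYOFFS[("abstain",)][0]
    return "smoke" if smoke_payoff > abstain_payoff else "abstain"

print(future_self_move())   # 'continue' -- the future self is not on your side
print(present_self_move())  # 'abstain'  -- so never give him the chance to act
```

Backward induction delivers the moral of the story: the future self will continue smoking if given the chance, so the present self's best move is never to start.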

When you look at the decision to start smoking like this, you realise that your future self is not on your side. He is a competitor in this game. If you care about your long-term health, you need to do something to make sure he doesn’t get a chance to act out his preferences. You can do this either by not smoking at all in the short-term (and thereby never developing the addiction) or you can develop some commitment strategy that will limit the options open to your future self (think Ulysses and the Sirens).

The relevance of this here is that Agar’s argument about radical enhancement and personal identity involves a similar game. Agar is saying that your present self has certain life interests and experiences that are important to it right now. It would like to see those interests and experiences preserved in the long-term. But your present self can’t rely on your radically enhanced future self caring about those things. It may have very different preferences. Agar emphasises this by noting the asymmetrical attitude we have towards our past and future. Since we tend to care more about our future than our past, we can at least count on our radically enhanced future self having a similar bias. This increases the likelihood of him/her disconnecting from our present selves.

Okay, so I’ll leave it there for now. In the second part, I’ll look at Agar’s childhood-adulthood analogy (which he uses to further underscore his point about radical enhancement and identity), and consider some weaknesses in this argument.

Tuesday, April 15, 2014

Will sex workers be replaced by robots? (A Precis)


Daryl Hannah, Blade Runner

I recently published an article in the Journal of Evolution and Technology on the topic of sex work and technological unemployment (available here, here and here). It began by asking whether sex work, specifically prostitution (as opposed to other forms of labour that could be classified as “sex work”, e.g. pornstar or erotic dancer), was vulnerable to technological unemployment. It looked at contrasting responses to that question, and also included some reflections on technological unemployment and the basic income guarantee.

I hate to say this myself, but I thought the arguments in the paper were interesting, and I’d like to hear what other people think about them. But since people are busy, and may not be inclined to read the full 8,000 words, I thought I would provide a brief precis of the main arguments here. That might persuade some to read the full thing, and others to offer their opinions. So that’s what I’m going to do. I’m going to focus solely on the arguments relating to the replacement of sex workers by robots, leaving the basic income arguments out.

This is the first time I’ve ever tried to summarise my own work on the blog — I usually focus on the work of others — and it comes with the caveat that there is much more detail and supporting evidence in the original article. I’m just giving the bare bones of the arguments here. No doubt everyone else whose work I’ve addressed on this blog wishes I added a similar caveat before all my other posts. In my defence, I hope that such a caveat is implied in all these other cases.


1. The Case for the Displacement Hypothesis
Those who think that prostitutes could one day be rendered technologically unemployed by sophisticated sexual robots are defenders of something I call the “displacement hypothesis”:

Displacement Hypothesis: Prostitutes will be displaced by sex robots, much as other human labourers (e.g. factory workers) have been displaced by technological analogues.

As I note in the article, a defence of the displacement hypothesis is implicit in the work of several writers. The most notable of these is, perhaps, David Levy, whose 2007 book Love and Sex with Robots remains the best single-volume work on this topic. In the article, I try to clarify and strengthen the defence of the displacement hypothesis.

I argue that it depends on two related theses:

The Transference Thesis: All the factors driving demand for human prostitutes can be transferred over to sex robots, i.e. the fact that there is demand for the former suggests that there will also be demand for the latter.
The Advantages Thesis: Sex robots will have advantages over human prostitutes that will make them more desirable/more readily available.

I then proceed to consider the arguments in favour of both.

The argument for the transference thesis depends on a close analysis of the factors driving demand for human prostitution. Extrapolating from several empirical studies of human demand, these factors can be reduced to four general categories: (i) people demand prostitutes because they are seeking the kind of emotional connection/attachment that is typical in romantic human sexual relationships; (ii) people demand prostitutes because they are seeking sexual variety (both in terms of partners and types of sex act); (iii) people demand prostitutes because they desire sex that is free from the complications and expectations of non-commercial sex (basically, the inverse of the first reason); and (iv) people demand prostitutes because they are unable to find sexual partners through other means.

To defend the transference thesis, one simply needs to argue that sex robots can cater to all of these demands. So you must argue that it will be possible to create sex robots that develop emotional bonds with their users (or not, if that is the user preference); it will be possible to create sex robots that cater to the need for variety; and it will be possible to supply sex robots to those who are unable to find sexual partners by other means.

The argument for the advantages thesis depends on identifying all the ways in which sex robots could be more desirable and more readily available than human prostitutes. In the article, I list four types of advantage that sex robots could have over human sex workers. First, there are the legal advantages: prostitution is illegal in several countries whereas the production of sex robots is not (I also suggested that sex robots could cater to currently illegal forms of sexual deviance, though this is more controversial). Second, there are the ethical advantages: less need to worry about trafficking or objectification. Third, there are the health risk advantages: less risk of contracting STDs (though this depends on sanitation). And fourth, and finally, there are the advantages of production and flexibility: it might be easier to produce sex robots en masse to cater for demand, and to re-programme them to cater to new desires.

When combined, I suggest that the transference thesis and the advantages thesis present a good case for the displacement hypothesis. An argument diagram summarising what I have said and clarifying the logical connections is provided below.




2. The Case for the Resiliency Hypothesis
Although I accept that there is a reasonable case for the displacement hypothesis, one of my primary goals in the article is to suggest that there is also a case to be made for the contrasting view. Thus, I introduce something I call the “resiliency hypothesis”:

Resiliency Hypothesis: Prostitution is likely to be resilient to technological unemployment, i.e. demand for and supply of human sexual labour is likely to remain competitive in the face of sex robots.

As with the displacement hypothesis, the case for the resiliency hypothesis rests on two theses:

The Human Preference Thesis: Ceteris Paribus, if given the choice between sex with a human prostitute or a robot, many (if not most) humans will prefer sex with a human prostitute.
The Increased Supply Thesis: Technological unemployment in other industries is likely to increase the supply of human prostitutes.

In retrospect, I possibly should have called the second of these the “Increased Supply and Competitiveness Thesis”, since the claim is not just that there will be an increased supply but that those drawn into sex work will do everything they can to remain competitive against sex robots (thereby countering some of the advantages robots have over humans). I think this is clear in how I defend the thesis in the article, just not in the name I gave it.

Anyway, I rested my defence of the human preference thesis on three arguments and pieces of evidence. The first was largely an argument from philosophical intuition. I suggested that it seems plausible to suppose that we would prefer human sex partners to robotic ones. I based this on the belief that ontological history matters to us in matters both related and unrelated to sex. Thus, for example, we care about where food or fine art comes from: it’s more valuable if it has the right ontological history (not just because it looks or tastes better). We also seem to care about where our sexual partners come from: witness, for example, the reaction to transgendered persons, who are sometimes legally obliged to disclose their gender history. (I’m not saying that this reaction is a good thing, just that it is present).

It has been pointed out to me — by Michael Hauskeller — that my ontological history argument may simply beg the question. It assumes that sex robots will have an ontological history that fails to excite us as much as the ontological history of human sex workers, but that is the very issue under debate: would we prefer humans to robots? On reflection, Hauskeller looks to be right about this. Additional evidence is needed to show that the ontological history we desire is a human one. I would also add that if our concern with ontological history is irrational or prejudiced, it may be possible to overcome it. Thus, even if humans are preferred in the short term, they may not be in the long term.

Fortunately, there were two other arguments for the human preference thesis. One was based on some polling data suggesting that humans were not all that willing to have sex with a robot (though I did critique the poll as well). The other was based on the uncanny valley hypothesis. I reviewed some of the recent empirical literature suggesting that this is a real effect, and argued that it might not even be a valley.

The defence of the increased supply thesis rested on a simple argument (the numbering may look a bit weird here, but remember that’s because everything I’ve said is going into an argument diagram at the end):


  • (16) An increasing number of jobs, including highly skilled jobs, are vulnerable to technological unemployment. 
  • (17) If an increasing number of jobs are vulnerable to technological unemployment, people will be forced to seek other forms of employment (all else being equal). 
  • (18) When making decisions about which form of employment to seek, people are likely to be attracted to forms of employment: (i) in which there is a preference for human labour over robotic labour; (ii) with low barriers to entry; and (iii) which are comparatively well-paid. 
  • (19) Prostitution satisfies all three of these conditions (i) - (iii). 
  • (11) Therefore, there is likely to be an increased supply of human prostitution.


I looked at each of the premises of this argument in the paper, though I focused most attention on premise (19). In support of this, I considered evidence from economic studies of prostitution. I also followed this with some argumentation on the way in which human prostitutes could address the advantages of sex robots.

That gives us the following argument diagram.



That’s it then. I hope this clarifies the case for the displacement and resiliency hypotheses. For more detail and supporting evidence please consult the original article. There is also some follow-up in the article about the implications of all this for the basic income guarantee.

Monday, April 14, 2014

Should we bet on radical enhancement?



(Previous Entry, Series Index)

This is the third part of my series on Nicholas Agar’s book Truly Human Enhancement. As mentioned previously, Agar stakes out an interesting middle ground on the topic of enhancement. He argues that modest forms of enhancement — i.e. up to or slightly beyond the current range of human norms — are prudentially wise, whereas radical forms of enhancement — i.e. well beyond the current range of human norms — are not. His main support for this is his belief that in radically enhancing ourselves we will lose certain internal goods. These are goods that are intrinsic to some of our current activities.

I’m offering my reflections on parts of the book as I read through it. I’m currently on the second half of Chapter 3. In the first half of Chapter 3, Agar argued that humans are (rightly) uninterested in the activities of the radically enhanced because they cannot veridically engage with those activities. That is to say: because they cannot accurately imagine what it is like to engage in those activities. I discussed this argument in the previous entry.

As Agar himself notes, the argument in the first half of the chapter only speaks to the internal goods of certain human activities. In other words, it argues that we should keep enhancements modest because we shouldn’t wish to lose goods that are intrinsic to our current activities. This ignores the possible external goods that could be brought about by radical enhancement. The second half of the chapter deals with these.


1. External Goods and the False Dichotomy
It would be easy for someone reading the first half of chapter 3 to come back at Agar with the following argument:

Trumping External Goods Argument: I grant that there are goods that are internal and external to our activities, and I grant that radical enhancement could cause us to lose certain internal goods. Still, we can’t dismiss the external goods that might be possible through radical enhancement. Suppose, for example, that a radically enhanced medical researcher (or team of researchers) could find a cure for cancer. Wouldn’t it be perverse to forgo this possibility for the sake of some internal goods? Don’t certain external goods (which may be made possible by radical enhancement) trump internal goods?

The proponent of this argument is presenting us with a dilemma, of sorts. He or she is saying that we can stick with the internal and external goods that are possible with current or slightly enhanced human capacities, or we can go for more and better external goods. It would seem silly to opt for the former when the possibilities are so tantalising, especially given that Agar himself acknowledges that new internal goods may be possible with radically enhanced abilities.

The problem with this argument is that it presents us with a false dilemma. We don’t have to pick and choose; we can have the best of both worlds. How so? Well, as Agar sees it, we don’t have to radically enhance our abilities in order to secure the kinds of external goods evoked by the proponent of the trumping argument. We have other kinds of technology (e.g. machines and artificial intelligences) that can help us to do this.

What’s more, as Agar goes on to suggest, these other kinds of technology are far more likely to be successful. Radical forms of enhancement need to be integrated with the human biological architecture. This is a tricky process because you have to work within the constraints posed by that architecture. For example, brain-computer interfaces and neuroprosthetics, currently in their infancy, face significant engineering challenges in trying to integrate electrodes with neurons. External devices, with some user-friendly interface, are much easier to engineer, and don’t face the same constraints.

Agar illustrates this with a thought experiment:

The Pyramid Builders: Suppose you are a Pharaoh building a pyramid. This takes a huge amount of back-breaking labour from ordinary human workers (or slaves). Clearly some investment in worker enhancement would be desirable. But there are two ways of going about it. You could either invest in human enhancement technologies, looking into drugs or other supplements to increase the strength, stamina and endurance of workers, maybe even creating robotic limbs that graft onto their current limbs. Or you could invest in other enhancing technologies such as machines to sculpt and haul the stone blocks needed for construction. 
Which investment strategy do you choose?

The question is a bit of a throwaway since, obviously, Pharaohs are unlikely to have the patience for investment of either sort. Still, it seems like the second investment strategy is the wiser one. We have long had machines that assist construction without being directly integrated with our biology. They are extremely useful, going well beyond what is possible for a human. This suggests that the second option is more likely to be successful. Agar argues that this is all down to the integration problem.


2. Gambling on radical enhancement: is it worth it?
I think it’s useful to reformulate Agar’s argument using some concepts and tools from decision theory. I say this because many of Agar’s arguments against radical enhancement seem to rely on claims about what we should be willing (or unwilling) to gamble on when it comes to enhancement. So it might be useful to have one semi-formal illustration of the decision problems underlying his arguments, which can then be adapted for subsequent examples.

We can do this for the preceding argument by starting with a decision tree. A decision tree is, as the name suggests, a tree-like diagram that represents the branching possibilities you confront every time you make a decision. The nodes in this diagram either depict decision points or points at which probabilities affect different outcomes (sometimes we think of this in terms of “Nature” making a decision by determining the probabilities, but this is just a metaphor).

Anyway, the decision tree for the preceding argument works something like this. At the first node, there is a decision point: you can opt for radical enhancement or modest (or no) enhancement. This then branches out into two possible futures. In each of those futures there is a certain probability that we will secure the kinds of external goods (like cancer cures) alluded to by the proponent of the trumping argument, and a certain (complementary) probability that we won’t. So this means that either of our initial decisions leads to two further possible outcomes. This gives us four outcomes in total:

Outcome A: We radically enhance, thereby losing our current set of internal goods, and fail to secure trumping external goods.
Outcome B: We radically enhance, thereby losing our current set of internal goods, but succeed in securing trumping external goods.
Outcome C: We modestly enhance (or don’t enhance at all), keeping our current set of internal goods, and fail to secure trumping external goods through other technologies.
Outcome D: We modestly enhance (or don’t enhance at all), keeping our current set of internal goods, but succeed in securing trumping external goods through other technologies.

This is all depicted in the diagram below.




With the diagram in place, we have a clearer handle on the decision problem confronting us. Even without knowing what the probabilities are, or without even having a good estimate for those probabilities, we begin to see where Agar is coming from. Since radical enhancement always seems to entail the loss of internal goods, modest enhancement looks like the safer bet (maybe even a dominant one). This is bolstered by Agar’s argument that we have good reason to suppose that the probability of securing the trumping external goods is greater through the use of other technologies. Hence, modest enhancement really is the better bet.
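
To make this concrete, here is a toy expected-value version of the decision tree. Every number in it is invented for illustration; the point is only that, so long as the probability of securing the external goods through other technologies is at least as high as through radical enhancement, modest enhancement comes out ahead for any positive valuation of the internal goods.

```python
# A toy expected-value rendering of the decision tree above. All numbers
# are invented for illustration: `internal` is the value of our current
# internal goods, `external` the value of the trumping external goods,
# and the probabilities encode Agar's claim that external technologies
# are the likelier route to those goods.

internal, external = 1.0, 10.0

p_success_radical = 0.3  # assumed: integration problems make this lower
p_success_modest = 0.5   # assumed: external technologies are more tractable

# Radical enhancement forfeits the internal goods on both branches (A and B).
ev_radical = p_success_radical * external
# Modest enhancement keeps the internal goods on both branches (C and D).
ev_modest = internal + p_success_modest * external

print(f"EV(radical) = {ev_radical}")  # EV(radical) = 3.0
print(f"EV(modest)  = {ev_modest}")   # EV(modest)  = 6.0

# Whenever p_success_modest >= p_success_radical, modest enhancement wins
# for any positive valuation of the internal goods -- the sense in which
# it looks like the safer (perhaps even dominant) bet.
```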

There are a couple of problems with this formalisation. First, the proponent of radical enhancement may argue that it doesn’t accurately capture their imagined future. To be precise, the proponent could argue that I haven’t factored in the new forms of internal good that may be made possible with radically enhanced abilities. That’s true, and that might be a relevant consideration, but bear in mind that those new internal goods are, at present, entirely unknown. Is it not better to stick with what we know?

Second, I think I’m being a little too coarse-grained in my description of the possible futures involved. I think it’s odd to suggest, as the decision tree does, that there could be a future in which we never achieve certain trumping external goods. That would suppose that there could be a future in which there is no progress on significant moral problems at our current level of technology. That seems unrealistic to me. Consequently, I think it might be better to reformulate the decision tree with a specific set of external goods in mind (e.g. a cure for cancer, an end to world hunger, reduced childhood mortality, and so on).


3. The External Mind Objection
There is another objection to Agar’s argument that is worth addressing separately. It is one that he himself engages with. It is the objection from the proponent of the external mind thesis. This thesis can be characterised in the following manner:

External Mind Thesis: Our minds are not simply confined to our skulls or bodies. Instead, they spill out into the world around us. All the external technologies and mechanisms (e.g. calculators, encyclopedias) we use to help us think and interact with the world are part of our “minds”.

The EMT has been famously defended by Andy Clark (and David Chalmers). Clark argues that the EMT implies that we are all cyborgs because of the way in which technology permeates our lives. The EMT can be seen to follow from a functionalist theory of mind.

The thing about the EMT is that it might also suggest that the distinction Agar draws between different kinds of technological enhancement is an unprincipled one. Agar wants to argue that technologies that enhance by being integrated with our biology are different from technologies that enhance by providing us with externally accessible user interfaces. An example would be the difference between a lifting machine like a forklift and a strength enhancing drug that allows us to lift heavier objects. The former is external and non-integrated; the latter is internal and integrated. The defender of the EMT argues that this is a distinction without a difference. Both kinds of technological assistance are part of us, part of how we interact with and think about the world.

Agar could respond to this by simply rejecting the EMT, but he doesn’t do this. He thinks the EMT may be a useful framework for psychological explanation. What he does deny, however, is its usefulness across all issues involving our interactions with the world. There may be some contexts in which the distinction between the mind/body and the external world counts for something. For example, in the study of the spread of cancer cells, the distinction between what goes on in your body and what goes on in the world outside it is important (excepting viral forms of cancer). Likewise, the distinction between what goes on in our heads and what goes on outside might count for something. In particular, if we risk losing internal goods through integrated enhancement, why not stick with external enhancement? This doesn’t undermine Clark’s general point that we are “cyborgs”; it just says that there are different kinds of cyborg existence, some of which might be more valuable to us than others.

I don’t have any particular issue with this aspect of Agar’s argument. It seems correct to me to say that the EMT doesn’t imply that all forms of extension are equally valuable.

That brings us to the end of chapter 3. In the next set of entries, I’ll be looking at the arguments in chapter 4, which have to do with radical enhancement and personal identity.

Sunday, April 13, 2014

Veridical Engagement and Radical Enhancement



(Previous Entry) (Series Index)

This is the second post in my series on Nicholas Agar's new book Truly Human Enhancement. The book offers an interesting take on the enhancement debate. It tries to carve out a middle ground between bioconservatism and transhumanism, arguing that modest enhancement (within or slightly beyond the range of human norms) is prudentially valuable, but that radical enhancement (well beyond the range of human norms) may not be.

As noted in the previous entry, the purpose of this series is to share my reflections on the book as I work my way through the chapters. Today's post is the first of two on the contents of chapter 3. To follow that chapter, you need to familiarise yourself with the conceptual framework set out in chapter 2. Fortunately, I covered that in the previous entry. I recommend reading that before proceeding with this post. I'm serious about this: if you don't know what is meant by terms like "prudential value", "intrinsic value" or "internal goods", then you will miss out on aspects of this discussion.

Anyway, assuming you are familiar with these concepts, we can proceed. Chapter 3 is entitled "What interest do we have in superhuman feats?". It is an appropriate title. The chapter itself looks at two related arguments that respond to that question. The first holds that we have little interest in superhuman feats, at least in terms of their relationship to intrinsically valuable internal goods. The second holds that we might have great interest in them, if they were the only way of bringing about certain external goods, but as it happens they aren't the only way of doing this.

I'm going to look at each of these arguments over the next two posts, starting today with the first.


1. Are we uninterested in superhuman sports and games?
To support the first argument, Agar uses some illustrations from the world of human sports and games. The illustrations supposedly demonstrate that we do as a matter of fact lack an interest in superhuman versions of these activities. This is then used as the springboard for an argument about why we lack this interest.

The first example is that of the marathon, specifically Haile Gebrselassie's victory in the Berlin marathon in 2008. Gebrselassie ran that marathon in 2hr 03mins 59secs, which then sparked a debate about whether we would soon see a sub-two-hour marathon. Agar suggests that Gebrselassie's achievement and the subsequent debate are interesting to us; that we can relate to and value these possibilities.

Contrast this with a (for now) hypothetical superhuman marathon. Agar refers to Robert Freitas's idea of the respirocyte. This is a one-micron-wide nanobot that could be used to replace human haemoglobin. This could massively increase the oxygen-carrying capacity of our blood, allowing us to run at sprint speed for 15 minutes or more. If we enhanced ourselves with respirocytes, the traditional 26.2 mile marathon would no longer be of interest. We would have to invent a new race, perhaps a 262 mile marathon, to create a challenge worthy of our abilities. Agar's suggestion is that we are less interested and less excited by this possibility.

That example might not work for you. So here's another, with a much starker contrast. Consider the game of chess. As you all know, Garry Kasparov -- probably the greatest human chess player of all time -- was defeated by the IBM computer Deep Blue in 1997. Since then, computers have been decisively better than humans at chess (though teams of computers and humans are still better than computers alone).

Nevertheless, despite the clear superiority of computers over human beings, we are not interested in or engaged by the prospect of computer-against-computer competitions (unless, perhaps, we are computer programmers). Human competitions still take place and still dominate the popular imagination. Why is this?


2. Veridical Engagement and Simulation Theory
Agar answers this question by appealing to the concept of veridical engagement. We can define this in the following manner:

Veridical Engagement: We veridically engage with an activity or state of being when we can (more or less) accurately imagine ourselves performing that activity or being in that state.

This definition is mine, not Agar's. I based it on what he wrote but there may be some differences. He speaks solely to activities since the two examples he uses (marathon running and chess) are activities, but I've broadened it out to cover states of being since they would also seem to fit with his argument, and to be relevant to the enhancement debate. I've also added the "more or less" bit before "accurately imagine". When he initially introduces the concept, Agar only refers to "accurately imagine", but later he acknowledges that this comes in degrees. So I think, for him, the imagining does not need to be perfect, just close to reality.

Why is the concept helpful? In essence, Agar argues that our lack of interest in superhuman feats can be explained by our inability to veridically engage with those feats. We have no interest in the achievements of Deep Blue because we cannot think like a computer. To think like Deep Blue would require us to compute 200,000,000 positions per second. We could at best perform a very poor facsimile of this. That's very different from how we engage with Kasparov's achievements. As Agar himself puts it:

No matter how soundly Deep Blue beats Kasparov, a human player will always play chess in ways that interests human spectators to a greater degree than Deep Blue and its successors. Human chess players of modest aptitude can read Kasparov's annotations and thereby gain insight into his stratagems. Kasparov's chess play is vastly superior to that of his fans. But he, presumably as a very young player, passed through a stage in his development [that was]...similar to that of his fans. 
(Agar, 2014, p. 41)

Agar offers us a psychological theory that accounts for our ability to veridically engage with certain activities and states of being. This is simulation theory, which argues that the way in which we understand the behaviour of other human beings is by performing a simulation of the mental processes that lie behind that behaviour. Gregory Currie has used this to explain how we engage with fiction. It also helps to explain why we resort to anthropomorphism when imagining non-human animal behaviour.

So the upshot here for Agar is that we don't care about superhuman endeavours because we can't veridically engage with them. Agar is quick to point out that this doesn't mean that superhuman feats are devoid of intrinsic value. It could be that once we become superhuman, we will find our new capacities thrilling and begin to appreciate a whole new set of goods (like how we appreciate new things when we transition from childhood to adulthood). Nevertheless, it does suggest, to him at least, that superhuman activities and states of being lack intrinsic value to us, right now, as ordinary human beings. It'd be better to stick with the intrinsic goods that currently excite our imaginations.


3. Some thoughts and criticisms
I can appreciate what Agar is trying to do in this part of chapter 3. He is trying to flesh out his anthropocentric ideal of enhancement. He is trying to explain how it could be that enhancement up to, or slightly beyond, the current range of human norms is prudentially valuable, but enhancement outside of that range is not. I do, however, have a couple of critiques and queries.

The first has to do with the nature of the argument being presented. I take it that Agar is trying to present an argument from prudential axiology. That is: from premises about what we ought to prudentially value to conclusions about how radical enhancement might negatively impact on those values. That would be consistent with his stated aims from earlier chapters. The problem is that the argument he presents doesn't seem to be like that. It seems to be a purely factual argument about what interests or excites us and why. It's an explanation of one of our psychological quirks, not a defence of a principled normative distinction. At least, it reads that way to me.

Agar could perhaps respond by suggesting that his argument is based on intuitions about particular cases. In other words, he could argue that we intuitively find superhuman feats less prudentially valuable, as is obvious from our reaction to these cases. Arguments from intuition are certainly venerable in axiological debates, but he doesn't seem to adopt this approach directly. Furthermore, if this is what he is doing, it renders the explanation in terms of veridical engagement somewhat superfluous, however interesting it may be. Or, at least, it does so provided that Agar doesn't think that the notion of veridical engagement is itself axiologically significant. Might he believe that? I'm not sure, and I'm not sure why it would be.

This brings me to another point, which has to do with making claims about our capacity to veridically engage with certain activities. This is a dangerous game since what seems experientially out of reach to some may seem less so to others. I certainly have this feeling in relation to the superhuman marathon runners that Agar imagines. I just don't see what's so difficult to imagine about their experiences. I can imagine running at sprint speed; and I can imagine running for a very long time. Why couldn't I imagine both together? Seems like it just requires adding together experiences that I'm already capable of veridically engaging with. It just requires more of the same.

Now, you may respond by saying that this is just one example: Agar's case doesn't stand or fall on this one example. And I happen to think that this is right (I certainly think Agar hits the nail on the head with respect to computer chess: I don't think we can veridically engage with that style of chess-play). My only point is that my reaction to the superhuman marathon could indicate that cases of truly radical enhancement are harder to find than we might think. For example, hyperextended lifespans might be deemed "radical" enhancements by some, but it would seem possible to veridically engage with them: they are longer versions of what we already have. Admittedly, Agar has a chapter on this later in his book where he will no doubt argue that this view of hyperextended lifespan is wrong. I haven't read that yet.

Anyway, that's what I'm thinking so far. In the next post, I'll look at the second argument from chapter three. That argument claims that not only would radical enhancement deprive us of certain intrinsic goods, it would also be unnecessary for achieving certain external goods.

Saturday, April 12, 2014

The Badness of Death and the Meaning of Life (Series Index)



Albert Camus once said that suicide is the only truly serious philosophical question. Is life worth living or not? Should we fear our deaths or hasten them? Is life absurd or overflowing with meaning? These are questions to which I am repeatedly drawn. Consequently, I have written quite a few posts about them over the years. Below, you'll find a complete list, in reverse chronological order, along with links.

Enjoy.


1. The Achievementist view of Meaning in Life
My most recent foray into the debate about the meaning of life was my analysis of Steven Luper's "achievementist" account of meaning in life. Although I find the account intriguing, I'm not entirely convinced.




2. William Lane Craig and the "Nothing But" Argument
This post critiques William Lane Craig's argument that, because humans are nothing but collections of molecules, their lives are devoid of moral value. Although ostensibly framed as a contribution to the debate on morality and religion, the argument also has significance for those who are interested in the meaning of life.


3. Scientific Optimism, Techno-utopianism and the Meaning of Life
This post looks at an argument from Dan Weijers. The argument claims that if we combine naturalism with a degree of techno-utopianism we arrive at a robust account of meaning in life. This contrasts quite dramatically with Craig's belief that naturalism entails the end of meaning.


4. Are we Cosmically Significant?
If you look up at the stars at night, it's easy to become overawed at the vastness of our universe. It is so mind-bogglingly large and we are so small. Does this fact make our lives less significant? Guy Kahane argues that it doesn't. This post analyses his argument. 


5. Must we Pursue Good Causes to Have Meaningful Lives?
Philosopher Aaron Smuts defends the Good Cause Account (GCA) of meaning in life. According to this account, our lives are meaningful in virtue of and in proportion to the amount of objective good for which they are causally responsible. These two posts cover his defence of the GCA.


6. Revisiting Nagel on the Absurdity of Life
Thomas Nagel has probably written the most famous paper on the absurdity of life. Many people refer to this paper for knockdown critiques of "bad" arguments for the absurdity of life, while ignoring the fact that Nagel himself thinks that life is absurd. In this two-part series I revisit Nagel's famous paper. I suggest that some of his knockdown critiques are not so good, and I outline Nagel's own defence of the absurdity of life.


7. Should we Thanatise our Desires?
The ancient philosophy of Epicureanism has long fascinated me. The Epicureans developed some interesting arguments about our fear of death, along with a general philosophy of life. One key element of this philosophy was that we should live in a way that is compatible with our eventual deaths. One way to do this was to thanatise our desires, i.e. render them immune to being thwarted or unfulfilled by death. This post asks whether this is sensible advice.


8. The Lucretian Symmetry Argument
Lucretius was a follower of Epicureanism. In one of the passages from his work De Rerum Natura, he defends something that has become known as the symmetry argument. This argument claims that death is not bad for us because it is like the period of non-existence before our births. In other words, it claims that pre-natal non-being is symmetrical to post-mortem non-being. Many philosophers dispute this claim of symmetry. In these two posts, I look at some recent papers on this famous argument.


9. Would Immortality be Desirable?
If we assume that death is bad, does it follow that immortality is desirable? Maybe not. Bernard Williams's famous paper, "The Makropulos Case: Reflections on the Tedium of Immortality", makes this case. In these three posts, I look at Aaron Smuts's updated defence of this view. Smuts rejects Williams's argument, as well as the arguments of others, and introduces a novel argument against the desirability of immortality.


10. Is Death Bad or Just Less Good?
This is another series of posts about Epicureanism. In addition to the Lucretian symmetry argument, there was another famous Epicurean argument against the badness of death. That argument came from Epicurus himself and claimed that death was nothing to us because it was an experiential blank. In these four posts, I look at Aaron Smuts's defence of this Epicurean argument.


11. Theism and the Meaning of Life
The links between religion and the meaning of life are long-standing. For many religious believers, it is impossible to imagine a meaningful life in a Godless universe. One such believer is William Lane Craig. These two posts look at Gianluca Di Muzio's critique of Craig's view.


12. Harman on Benatar's Better Never to Have Been
Anti-natalism is arguably the most extreme position one can take on the value of life and death. Anti-natalists believe that coming into existence is a great harm, and consequently that we have a duty not to bring anyone into being. The most famous recent defence of anti-natalism is David Benatar's book Better Never to Have Been (Oxford: OUP, 2006). In these three posts, I look at Benatar's arguments and Elizabeth Harman's critiques thereof.


13. Podcasts on Meaning in Life
Back when I used to do podcasts, I did two episodes on meaning in life. One looking at a debate between Thomas Nagel and William Lane Craig on the absurdity of life. The other looking at the possibility of living a transcendent life without God.


14. Wielenberg on the Meaning of Life
This is a frustratingly incomplete series on Erik Wielenberg's arguments about the meaning of life. In my defence, it was my earliest foray into the topic, and I've covered many similar arguments since. One for the die-hards only, I suspect.


Friday, April 11, 2014

The Objective and Anthropocentric Ideals of Enhancement



Nicholas Agar has written several books about the ethics of human enhancement. In his latest, Truly Human Enhancement, he tries to stake out an interesting middle ground in the enhancement debate. Unlike the bioconservatives, Agar is not opposed to the very notion of enhancing human capacities. On the contrary, he is broadly in favour of it. But unlike the radical transhumanists, he does not embrace all forms of enhancement.

The centrepiece of his argument is the distinction between radical forms of enhancement — which would push us well beyond what is normal or possible for human beings — and modest forms of enhancement — which work within the extremes of human experience. Agar argues that in seeking radical forms of enhancement, we risk losing our entire evaluative framework, i.e. the framework that tells us what is good or bad for beings like us. That is something we should think twice about doing.

I'm currently working my way through Agar's book, and I thought it might be worth sharing some of my reflections on it as I do. This is something I did a few years back when reading his previous book, Humanity's End?. In my reflections, I'm going to focus specifically on chapters 2, 3 and 4 of the book. I will write these reflections as I read the chapters. This means I will be writing from a position of ignorance: I won't know exactly where the argument is going in the next chapter when I write. I think this can make for a more interesting experience from both a writer's and a reader's perspective.

Anyway, I'll kick things off today by looking at chapter 2. In this chapter, Agar introduces some important conceptual distinctions, ones he promises to put to use in the arguments in later chapters. This means the chapter is light on arguments and heavy on definitional line-drawing. But that's okay.

The main thrust of the chapter is that there is a significant difference between two ideals of enhancement: (i) the objective ideal and (ii) the anthropocentric ideal. The former is embraced by transhumanists like Ray Kurzweil and Max More; the latter is something Agar himself embraces. To understand the distinction, we first need to look at the definition of enhancement itself, and then at the concept of prudential value. Let's do that now.


1. What is enhancement?
The definition of enhancement can be contentious. This is something I've covered in my own published work. Some people equate enhancement with "improvement", but that equation tends to stack the deck against the opponents of enhancement. After all, who could object to improving human beings? If we want to engage with the debate in a more meaningful and substantive way, we can't simply beg the question against the opponents of enhancement like this.

For this reason, Agar tries to adopt a value-neutral definition of enhancement:

Human Enhancement: The use of technology - usually biotechnology - to move our capacities beyond the range of what is normal for human beings.

This definition does two important things. First, it focuses our attention on our "capacities", whatever they may be. This is important because, as we'll see below, capacities, and their connection to certain goods, are an essential part of Agar's conceptual framework. Second, it defines enhancement in relation to human norms or averages, not moral norms or values. This is important because it is what renders Agar's definition value-free.

Still, as Agar himself seems to note (I say "seems" because he doesn't make this connection explicit), there is something oddly over-inclusive about this definition. If it really were the case that pushing human capacities beyond the normal range sufficed to count as enhancement, then we would have some pretty weird candidates for potential human enhancement technologies. For example, it would seem to imply that a drug that allowed us to gain massive amounts of weight -- well beyond the normal human range of weight gain -- would count as an enhancing drug. Surely that can't be right?

For this reason, Agar seems to endorse the approach of Nick Bostrom, which is to assert that there are certain kinds of human capacity that are "eligible" candidates for enhancement (e.g. intelligence, beauty, height, stamina) and certain others that are not (e.g. the capacity to gain weight). The problem is that this re-introduces value-laden assumptions. Ah well. Definitions are tough sometimes.


2. Prudential Value: Between Intrinsic and Instrumental Value
Agar's argument is about the prudential value of enhancement. That is to say: the value of being enhanced from an individual's perspective. The question he asks is: is enhancement good for me? His argument is not about the permissibility or moral value of enhancement. If we focus on enhancement from those perspectives — for example, if we were to focus on enhancement from the perspective of the public good — different issues and arguments would arise.

As Agar notes, there are two aspects to prudential value:

Instrumental Value: Something is instrumentally prudentially valuable if it brings about, or causes to come into being, other things that are good for the individual.
Intrinsic Value: Something is intrinsically prudentially valuable if it is good for the individual in and of itself, not because it brings about something else.

To add more complexity to the distinction, Agar also introduces the concepts of external and internal goods. This is something he derives from the work of Alasdair MacIntyre, who explains the difference with an analogy to the game of chess.

MacIntyre says that playing chess can produce certain external goods. For example, if I am a successful chess player, I might be able to win prize money at chess tournaments. The prize money would be an external good: a causal product of my success at chess. But there are other goods that are internal to the game itself. In playing the game, I experience the good of, say, strategic planning, careful rational thought about endgame and opening, and so forth. These goods are instantiated by the process of playing chess. They are not mere causal products of it.

Why is this important? Well, because Agar urges us to evaluate our human capacities in terms of both their instrumental value (i.e. their tendency to produce external goods) and their intrinsic value (i.e. their tendency to help us instantiate internal goods). This is where the contrast between the objective and anthropocentric ideals of enhancement becomes important.

I have one comment about Agar's view of capacities and goods before proceeding to discuss the differences between the objective and anthropocentric ideals. I think the relationship between our capacities and external goods is tolerably clear. Agar is simply saying that our capacities are instrumentally valuable when they help us to bring about certain external goods (e.g. greater wealth, happiness, artwork, scientific discoveries and so forth). The relationship between capacities and internal goods is less clear. Agar says "we assign intrinsic value to a capacity according to the internal goods it yields", but I wonder what he means by "yields" here. It can't be (can it?) that our capacities themselves instantiate internal goods? Rather, it would seem to be that our capacities allow us to do things, engage in certain activities (like chess playing), that instantiate certain internal goods. At least, that's how I understand the relationship.


3. The Objective Ideal of Enhancement
It is possible to measure objective degrees of enhancement. For example, if we take a capacity like stamina or intelligence, we can measure the amount of improvement in those capacities by adopting widely used metrics (e.g. bleep tests and IQ tests). We might quibble with some of those metrics, but it is still at least possible to measure objective rates of improvement along them. Other capacities or attributes might be more difficult to measure objectively (e.g. can we measure capacity for moral insight when the concept of morality is so contested?), but even in those cases it might be possible to come up with an objective measurement. It will just be a highly contentious one.

These contentions need not concern us here. All that matters is that there is some possibility of objective measurement. Provided that there is, we can understand the objective ideal of enhancement. This ideal has a very straightforward view of the relationship between human enhancement and prudential value. It says that as we increase the objective degree of enhancement (i.e. as we go up the scale of intelligence, moral insight, stamina, beauty, lifespan etc.), so too do we go up the scale of prudential value. There may be diminishing rates of marginal return — e.g. the first 400 years of added lifespan might count for more than the second 400 — but, and this is the critical point, there is never a negative relationship between the degree of enhancement and the degree of prudential value. This is illustrated in the diagram below.

[Diagram: prudential value rising steadily, with diminishing returns, as the objective degree of enhancement increases.]

Agar argues that many in the transhumanist community embrace the objective ideal of enhancement. They think that the more enhanced we become, the more prudential value our lives will contain. He cites Ray Kurzweil and Max More as two exemplars of this attitude. His suggestion is that this stems from an instrumentalist approach to the value of our capacities: a belief that they matter because they help us to realise certain external goods, not because they instantiate internal goods.


4. The Anthropocentric Ideal of Enhancement
This sets up the contrast with the anthropocentric ideal, which takes a different view of the relationship between enhancement and prudential value. On this view, prudential value does not always increase with the objective degree of enhancement; sometimes the relationship reverses. For example, an extra 100 IQ points might increase the degree of prudential value, but an extra 500 might actually decrease it. This idea is illustrated in the diagram below.

[Diagram: prudential value rising, peaking, and then declining as the objective degree of enhancement increases.]

Agar's suggestion is that the anthropocentric ideal allows for this kind of relationship because it includes intrinsic value and internal goods in its assessment of prudential value. The anthropocentric ideal suggests that there are certain things that are good for us now (as human beings) that might be lost if we push the objective degree of enhancement too far. These are goods that are internal to some of our current types of activity.
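
The difference between the two ideals can be captured with a pair of toy value curves. The sketch below is my own illustration (the functional forms are entirely made up, not anything Agar offers): the objective ideal's curve rises monotonically with diminishing returns, while the anthropocentric ideal's curve rises, peaks, and then declines.

import numpy as np

degree = np.linspace(0, 10, 101)          # objective degree of enhancement

# Objective ideal: value always rises, though with diminishing marginal returns.
objective_value = np.log1p(degree)

# Anthropocentric ideal: value rises, peaks, then declines once enhancement
# carries us too far beyond the human range.
anthropocentric_value = degree * np.exp(-degree / 4)

peak = degree[np.argmax(anthropocentric_value)]
print(f"On the anthropocentric curve, value peaks at degree {peak:.1f} and then falls.")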

Agar is adamant that the anthropocentric and objective ideals are not alternatives to one another. That is to say: it is not the case that one of those ideals is right and one is wrong. They are both simply different ways of looking at enhancement and measuring its value. Furthermore, the anthropocentric ideal doesn't necessarily assume that all forms of enhancement reach a point of decline. This is something that needs to be assessed on a case by case basis.

Despite these admonitions, it seems clear that his goal is to argue that the anthropocentric ideal is too often neglected by proponents of enhancement; and to argue that the negative relationship does arise in some important cases. The purpose of chapters 3 and 4 is to flesh out these arguments.

I'm interested in seeing where all of this goes. I appreciate the conceptual framework Agar is building, but I'm concerned about his use of the external/internal goods distinction and how it maps onto our understanding of human capacities. It seems to me like an objective ideal of enhancement (one that accepts the positive relationship) need not deny or obscure internal goods. But that depends on how exactly we understand the relationship between capacities and internal goods. I'll hold off on any judgment until I've read the subsequent chapters.

Wednesday, April 9, 2014

Equality, Fairness and the Threat of Algocracy: Should we embrace automated predictive data-mining?



I’ve looked at data-mining and predictive analytics before on this blog. As you know, there are many concerns about this type of technology and the increasing role it plays in our lives. Thus, for example, people are concerned about the oftentimes hidden way in which our data is collected prior to being “mined”. And they are concerned about how it is used by governments and corporations to guide their decision-making processes. Will we be unfairly targeted by the data-mining algorithms? Will they exercise too much control over socially important decision-making processes? I’ve reviewed some of these concerns before.

Today, I want to switch tack and, instead of focusing on the moral and political concerns with these technologies, I want to look at a moral and political argument in their favour. The argument comes from Tal Zarsky. It claims that the increasing use of automated predictive analytics should be welcomed because it can help to eliminate the racial and ethnic biases that permeate our social decision-making processes. It also argues that resistance to this technology could be attributable to a fear amongst the majority that they will lose their comfortable and privileged position within society.

This strikes me as an interesting and provocative argument. I want to give it a fair hearing in this post. To do this, I’ll break my discussion down into three subsections. First, I’ll clarify the nature of the technology under debate. Second, I’ll outline Zarsky’s argument. Third, I’ll look at some potential problems with this argument.

The discussion is based on two articles from Zarsky, which you can find here and here.


1. What exactly are we talking about?

Zarsky’s argument is about the way in which data-mining algorithms can be used to make predictions about individual behaviour. The argument operates in a world dominated by jargon like “data-mining”, “big data”, “predictive analytics” and so forth. This jargon is often ill-defined and poorly understood. Fortunately, Zarsky takes the time out to define some of the key concepts and to specify exactly what his argument is about.

The first key concept is that of “data-mining” which Zarsky defines in the following manner:

Data-Mining: The non-trivial process of identifying valid, novel, potentially useful and ultimately understandable patterns in data.

There is a sense in which we all engage in a degree of data-mining, so defined. The difference nowadays is that we live in the era of “big data”, in which the available datasets are so vast that they cannot be mined without algorithmic assistance.

As Zarsky notes, there are several different kinds of data-mining. At a first pass, there is a distinction between descriptive and predictive data-mining. The former is used simply to highlight and explain the patterns in existing datasets. For example, data-mining algorithms could be used to identify significant patterns in experimental data, which can in turn be used to confirm or challenge scientific theories. Predictive data-mining is, by way of contrast, used to make predictions about future events on the basis of historical datasets. Classic examples might be the mining of phone records and internet activity to predict who is likely to carry out a terrorist attack, or the mining of historical purchasing decisions to predict future purchasing decisions. It is the predictive kind of data-mining that interests Zarsky (following others, I call this “predictive analytics”, since it involves analysing datasets to make predictions about the future).
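
To make the contrast concrete, here is a minimal sketch of predictive data-mining in Python, assuming scikit-learn; the data, features and outcomes are entirely synthetic and invented for illustration, not drawn from Zarsky.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_history = rng.normal(size=(1000, 5))   # features of past cases (synthetic)
y_history = (X_history[:, 0] + rng.normal(size=1000) > 1).astype(int)  # known past outcomes

# Learn patterns in the historical dataset...
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_history, y_history)

# ...then use them to make predictions about new, unlabelled cases.
X_new = rng.normal(size=(3, 5))
print(model.predict_proba(X_new)[:, 1])  # predicted probability of the outcome

Descriptive data-mining, by contrast, would stop at characterising the patterns in X_history itself, without scoring any new cases.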

In addition to this, there is a distinction between two different kinds of data “searches”:

Subject-based searches: Search datasets for known/predetermined patterns (typically relating to specific people or events).
Pattern-based searches: Search datasets for unknown/not predetermined patterns.

Zarsky’s argument is concerned with pattern-based searches. These are interesting insofar as they grant a greater degree of “autonomy” to the algorithms sorting through the data. In the case of pattern-based searches, the algorithms find the patterns that human analysts and governmental agents might be interested in; they tell the humans what to look out for.
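
The difference between the two search styles can be sketched in a few lines of Python; the dataset, fields and thresholds below are all invented for illustration.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
records = rng.normal(size=(500, 3))   # e.g. amount, frequency, hour-of-day (synthetic)

# Subject-based search: the analyst specifies the pattern in advance.
known_pattern = (records[:, 0] > 1.0) & (records[:, 2] > 1.0)
print("Records matching the predetermined pattern:", known_pattern.sum())

# Pattern-based search: the algorithm proposes the groupings itself;
# the analyst only learns afterwards what to look out for.
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit(records)
print("Cluster sizes discovered by the algorithm:", np.bincount(clusters.labels_))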

All of which brings us to the thorny issue of human involvement. Again, as Zarsky notes, humans can be more or less involved in the data-mining process. At present, they are still quite heavily involved, constructing datasets to be mined and defining (broadly) the parameters within which the algorithms work. Furthermore, it is typically the case that humans review the outputs of the algorithms and decide what to do with them. Indeed, in the European Union, this is backed by law. Article 15 of Directive 95/46/EC grants individuals the right not to be subject to decisions based solely on automated data-processing where those decisions significantly affect them.

There are, however, exceptions to this requirement and it is certainly technically feasible to create systems that reduce or eliminate human input. Part of the reason for this comes from the existence of two different styles of data-mining process:

Interpretable Processes: This refers to any data-mining process which is based on factors and rationales that can be reduced to human language explanations. In other words, processes which are interpretable and understandable by human beings.
Non-interpretable Processes: This refers to any data-mining process which is not based on factors or rationales that can be reduced to human language explanations. In other words, processes which are not interpretable and understandable by human beings.

The former set of processes allow for significant human involvement, both in terms of setting out the rationales and factors that will be used to guide the data-mining, and in terms of explaining those rationales and factors to a wider audience. The latter set of processes reduce, and may ultimately eliminate, human involvement. This is because in these cases the software makes its decision based on thousands (maybe hundreds of thousands) of variables which are themselves learned through the data analysis process, i.e. they are not set down in advance by human programmers.
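
Here is a rough sketch of the contrast, assuming scikit-learn; the models and feature names are my own illustrative choices, not anything Zarsky specifies.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(800, 3))
y = (X[:, 0] - X[:, 2] + rng.normal(size=800) > 0).astype(int)
feature_names = ["late_filings", "cash_deposits", "years_compliant"]  # invented

# Interpretable process: each factor has a weight that can be stated in plain language.
interpretable = LogisticRegression().fit(X, y)
for name, coef in zip(feature_names, interpretable.coef_[0]):
    print(f"{name}: weight {coef:+.2f}")

# Non-interpretable process: the decision is spread across thousands of learned
# weights, with no factor-by-factor story to tell a wider audience.
opaque = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0).fit(X, y)
print("Learned weights in the network:", sum(w.size for w in opaque.coefs_))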

In his writings, Zarsky sometimes suggests that interpretable processes are preferable, at least from a transparency perspective. That said, in order for his fairness and equality argument to work, it’s not clear that interpretable processes are required. Indeed, as we are about to see, minimising the ability of humans to interfere with the process seems to be the motivation for that argument. I return to this issue later. For the time being, let’s just look at the argument itself.


2. The Equality and Fairness Argument
To get off the ground, Zarsky’s argument demands that we make an assumption. We must assume that predictive analytics can, as a matter of fact, be useful, i.e. that it can successfully identify likely terrorist suspects, tax evaders, violent criminals, or whatever. If it can’t do that, then there’s really no point in discussing it.

Furthermore, when assessing the merits of predictive analytics we must take care not to consider it in isolation from its alternatives. In other words, we can’t simply focus on the merits and demerits of predictive analytics by itself, without also considering the merits and demerits of the policies that are likely to be used in its stead. This is an important point. Governments have legitimate aims in trying to reduce things like terrorism, tax evasion and violent crime. If they are not using predictive analytics to accomplish those aims, they’ll be using something else. The comparators must be factored into the argument. If it turns out that predictive analytics is comparatively better than its alternatives, then it may be more desirable than we think.

But that simply raises the question: what are the comparators? In his most detailed discussion, Zarsky identifies five alternatives. For present purposes, I’m going to simplify and just talk about one: any system in which humans decide who gets targeted. This could actually cover a wide variety of different policies; all that matters is that they share this one feature. This is to be contrasted with an automated system that runs entirely on the basis of predictive data-mining algorithms.

With all this in mind, we can proceed to the argument proper. The argument works from a simple motivating premise: it is morally and politically better if our social decision-making processes do not arbitrarily and unfairly target particular groups of people. Consider the profiling debate in relation to anti-terrorism and crime-prevention. One major concern with profiling is that it is used to arbitrarily target and discriminate against certain racial and ethnic minorities. That is something we could do without. If people are going to be targeted by such measures, they need to be targeted on legitimate grounds (i.e. because they are genuinely more likely to be terrorists or to commit crimes).

Working from that motivating premise, Zarsky then adds the comparative claim that automated predictive analytics will do a better job of eliminating arbitrary prejudices and biases from the process. That gives us the following argument:


  • (1) It is better, ceteris paribus, if our social decision-making processes do not arbitrarily and unfairly target particular groups of people.
  • (2) Social decision-making processes that are guided by automated predictive analytics are less likely to do this than processes that are guided by human beings.
  • (3) Therefore, it would be better, ceteris paribus, to have social decision-making processes that are guided by automated predictive analytics.


Let’s probe premise (2) in a little more depth. Why exactly is this likely to be true? To back it up, Zarsky delves into the literature on implicit and unconscious biases. Those who are familiar with this literature will know that a variety of experiments in social psychology reveal that even when decision-makers don’t think they are being racially or ethnically prejudiced, they often are. This is because they subconsciously and implicitly associate people from certain racial and ethnic backgrounds with other negative traits. If you like, you can perform an implicit association test (IAT) on yourself to see whether you exhibit such biases.

Zarsky’s point is simply that the algorithms at the heart of predictive analytical programmes will not be susceptible to the same kinds of hidden bias, especially if they are automated and the capacity of human beings to override them is limited. As he himself puts it:

[A]utomation introduces a surprising benefit. By limiting the role of human discretion and intuition and relying upon computer-driven decisions this process protects minorities and other weaker groups. 
(Zarsky, 2012, pg. 35)

Zarsky builds upon this by suggesting that one of the sources of opposition to automated, algorithm-based decision-making could be the privileged majorities who benefit from the current system. They may actually fear the indiscriminate nature of the automated process. If the process is guided by a human, the majorities can appeal to human prejudices in order to secure more favourable, less intrusive outcomes. If the process is guided by a computer, they won’t be able to do this. Consequently, some of the burden of enforcement and prevention mechanisms will be shifted onto them, and away from the minorities who currently bear the brunt.


3. Problems and Conclusions
That’s the argument in outline form. The next question is whether it is persuasive. That’s a difficult question to answer in the space of a blog post like this, and it is one I am still pondering. Nevertheless, there are a few obvious, general, points of criticism.

The first is that premise (2) might actually be wrong. It may be that predictive analytics is just as biased and prejudiced as human decision-making. This could arise for any number of reasons, some of which Zarsky acknowledges. For example, the datasets that are fed into the algorithms could themselves be the products of biased human policies on data collection. Likewise, the sorting algorithms might have built-in biases that we can’t fully understand or protect against. This is something that could be exacerbated if the whole process is non-interpretable.
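
The first worry is easy to demonstrate with synthetic data. In the sketch below (all numbers invented), the historical labels were produced by a biased human process that disproportionately flagged a minority group; a model trained on those labels never sees group membership, yet it reproduces the disparity through a correlated proxy feature.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 5000
group = rng.integers(0, 2, size=n)            # 0 = majority, 1 = minority (synthetic)
behaviour = rng.normal(size=n)                # the genuinely relevant factor
proxy = behaviour + group + rng.normal(scale=0.5, size=n)  # correlates with group

# Biased historical labels: humans flagged the minority disproportionately.
labels = (behaviour + 1.5 * group + rng.normal(size=n) > 1).astype(int)

# The model is never shown `group`, only the proxy feature...
model = LogisticRegression().fit(proxy.reshape(-1, 1), labels)
preds = model.predict(proxy.reshape(-1, 1))

# ...yet it flags the minority at a much higher rate.
for g in (0, 1):
    print(f"group {g}: flagged at rate {preds[group == g].mean():.2f}")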

All of which brings me to another obvious point of criticism. The “ceteris paribus” clause in the first premise is significant. While it is indeed true that — all else being equal — we prefer to have unbiased and unprejudiced decision-making systems, all else may not be equal here. Elsewhere on this blog, I have outlined something I call the “threat of algocracy”. This is a threat to the legitimacy of our social decision-making processes that is posed by the incomprehensibility, non-interpretability and opacity of certain kinds of algorithmic control. The threat is important because, according to most theories of procedural justice, any public procedure that issues coercive judgments should be understandable by those who are affected by it. The problem is that this may not be the case if we hand control over to the automated processes recommended by Zarsky.

He himself acknowledges this point by highlighting how we prefer to have human decision-makers because at least we can engage with them at a human level of rational thought and argumentation: we can identify their assumptions and spot their faulty logic (if indeed it is faulty). But Zarsky has a response to this worry. He can fall back on the desirability of interpretable predictive analytics. In other words, he can argue that we can have the best of both worlds: unbiased decision-making, coupled with human comprehensibility. All we have to do is make sure that the rationales and factors underlying the automated predictive algorithms can be explained to human beings.

That might be a satisfactory solution, but I’m not entirely convinced. One reason for this is that I think having interpretable processes might re-open the door to the kinds of biased human decision-making that originally motivated Zarsky’s argument. The more humans can understand and shape the process, the more scope there is for their unconscious biases to affect its outputs. So perhaps the lack of bias and the degree of comprehensibility are in tension with one another. Perhaps additional solutions are needed to get the best of both worlds (e.g. moral enhancement)?

I think that question is a nice point on which to end.