Monday, March 5, 2018

The Extended Mind, Ethical Parity and the Replaceability Criterion: A Critical Analysis

I was recently watching Netflix’s sci-fi series Altered Carbon. The basic premise of the show — which is based on a series of books by Richard Morgan — is that future humans develop a technology for uploading their minds to digital ‘stacks’. These stacks preserve the identity (“soul”) of the individual and can be transferred between different physical bodies, even after the body they inhabit has been ‘killed’. This has many social repercussions, one of which is that biological death — i.e. the destruction or fatal mutilation of the body — becomes a relatively trivial event. An inconvenience rather than a tragedy. As long as the stack is preserved, the individual can survive by being transplanted into another body.

The triviality of biological form is explored in many ways in the show. Violence is common. There are various clubs that allow people to destroy one another’s bodies for sport. There is also considerable inequality when it comes to access to new bodies. The wealthy can afford to clone their preferred body types and routinely transfer between them; the poor have to rely on social distribution schemes, often ending up in the bodies of condemned prisoners. Bodies in general have become commodities: things to be ogled, prodded, bought and sold. The show has been criticised for its gratuitous nudity — the male and female performers are frequently displayed partially or fully nude — but the showrunner has defended this, arguing that it is what you would expect in a world in which the body has become disposable. I think there is some truth to this. I think our attitude toward our bodies would be radically different if they were readily ‘fungible’ (i.e. capable of being replaced by an identical or ‘as good’ item).

What if the same were true of our minds? What if we could swap out parts of our minds as readily as we swap out the batteries in an old remote control? Destroying a part of someone’s mind is currently held to be a pretty serious moral offence. If I intentionally damaged the part of your brain that allowed you to remember faces, you’d hardly take it in your stride. But suppose that as soon as I destroyed the face-recognition part you could quickly replace it with another, functionally equivalent part? Would it be so bad then?

These are not purely speculative questions. Neuroscientists and neurotechnologists are hard at work on ‘brain prosthetics’ that could enable us to swap out brain systems. Furthermore, plenty of philosophers and cognitive scientists claim that we already routinely do this with parts of our minds. They take a broad view of what counts as a part of a ‘mind’, arguing that our minds extend beyond the boundaries of our bodies and are distributed between our brains, our bodies, and our surrounding environments. Some of them argue that if we take this cognitive extension seriously, it leads us to an ‘ethical parity’ thesis (Levy 2007). This thesis holds that interfering with the non-neural parts of our minds carries just as much moral weight as interfering with the neural parts. This has two possible consequences, depending on the context and the nature of the interference: (i) we ought to take non-neural interferences more seriously than we currently do; or (ii) we should be less worried about neural interferences than we currently are.

In this post, I want to look at some arguments for taking the ethical parity thesis seriously. I do so by investigating an article by Jan-Hendrik Heinrichs which is skeptical of strong claims to ethical parity. I agree with much of what Heinrichs has to say, but his argument rests a lot of weight on the ‘replaceability’ criterion that I alluded to above and I’m not sure that this is a good idea. I want to explain why in what follows.

1. Understanding the Case for Ethical Parity
The ethical parity principle (EPP) was originally formulated by Neil Levy in his 2007 book Neuroethics. It came in two forms (2007, 67), both of which were premised on accepting that mental processes/systems are not confined to the brain:

Strong Parity: Since the mind extends into the external environment, alterations of external props used for thinking are (ceteris paribus) ethically on a par with alterations of the brain.

Weak Parity: Alterations of external props are (ceteris paribus) ethically on a par with alterations of the brain, to the precise extent to which our reasons for finding alterations of the brain problematic are transferable to alterations of the environment in which it is embedded.

The Strong EPP works from something called the ‘extended mind hypothesis’, which holds that mental states can be constituted by a combination of the brain and the environment in which it is embedded. To use a simple example, the mental act of ‘remembering to pick up the milk’ could, according to the extended mind hypothesis, be constituted by the combined activity of my eyes/brain decoding the visual information on the screen of my phone and the device itself displaying a reminder that I need to pick up the milk. The use of the word ‘constituted’ is important here. The extended mind hypothesis doesn’t merely claim that the mental state of remembering to pick up the milk is caused by or dependent upon my looking at the phone; it claims that the mental state is partly constituted by the combination of brain and smartphone. It’s more complicated than that, of course, and I have examined the hypothesis in detail in previous blogposts. Suffice to say, proponents of the hypothesis don’t allow just any external prop to form part of the mind; they have criteria for determining whether an external prop really is part of the mind and not just something that plays a causal role in it. I’ll return to this below.

Heinrichs thinks there is a major problem with the Strong EPP. He says that the argument for it is flawed. If you look back at the formulation given above, you’ll see that it presents an enthymeme. It claims that because the mind is extended, external mental ‘realisers’ (to use the jargon common in this debate) carry the same moral weight as internal mental realisers. But that inference can only be drawn if you accept another, hidden premise, as follows:

  • (1) The mind extends into the external environment, i.e. external props contribute (in a constitutive way) to mental processes.

  • [Hidden premise: (2) All contributors to mental processes are on a par when it comes to their moral value]

  • (3) Therefore, alterations of external mental props that contribute to mental processes are ethically on a par with alterations of the brain.

The problem is that the hidden premise is not persuasive. Not all contributors to mental processes are morally equivalent. Some contributors could be redundant, trivial, or easily replaceable, and that seems like it could make a difference. I could destroy your smartphone, but you might have another one with the exact same information recorded in it. You might have suffered some short-term harm from the destruction, but to claim that it is on a par with, say, destroying your hippocampus, and thereby preventing you from ever remembering where you recorded the information about buying the milk, would seem extreme. So parity cannot be assumed, even if we accept the extended mind hypothesis.

The Weak EPP corrects for this problem with the Strong EPP by making moral reasons part and parcel of the parity claim. Although not stated clearly, the Weak EPP effectively says that (because of mental extension) the reasons for finding interferences with internal mental parts problematic transfer over to external mental parts, and vice versa. Furthermore, the Weak EPP doesn’t require the extended mind hypothesis, which many find implausible. It can work from more modest distributed/embodied theories of cognition, which hold that both the body and its surrounding environment play a critical causal role in certain mental processes, even if they aren’t technically part of the mind. An example here might be the use of a pen and paper while solving a mathematical problem. While in use, the pen and paper are critical to the resolution of the puzzle, so much so that it makes sense to say that the cognitive process of solving the puzzle is not confined to the brain but is rather distributed between the brain and the two external props. This is true even if you don’t think the pen and paper are part of the mind. There is, in other words, an important dependency relation between the two, such that if you find it problematic to disrupt someone’s internal, math-solving brain module while they are trying to solve a problem, you should also find it problematic to do the same thing to their pen and paper when they are mid-solution (and vice versa).

But even the Weak EPP has its problems. When exactly do the reasons transfer over? What reasons could we have for finding internal and external interferences in mental processes problematic? In short: when might some form of weak ethical parity arise?

2. Three Criteria for Parity: Original Value, Integration and Replaceability
Heinrichs’s article focuses on three criteria that he thinks are relevant when considering whether there is ethical parity. I want to consider each in turn.

The first criterion focuses on the distinction between original value and derivative value. It’s easy to explain the distinction; harder to defend it. Go back to the pen and paper example from the previous section. You could argue that the pen and paper have no original/intrinsic value in this scenario. The value that they have is entirely derivative: it derives from the fact that they are currently playing an important part in your mathematical problem-solving process. If you had acted differently, they would have no value. For example, if you had transferred to a different pen and paper because you made a critical error the first time round, the original pen and paper would have no value; or if you had tried to solve the problem in your head, they would never have acquired any value. In other words, you, and all your constituent parts, have original/intrinsic value; the value of the external props and artifacts, by contrast, is entirely dependent on the uses to which you put them. Focusing on this distinction scuppers most claims to ethical parity. External props can never have quite the same moral weight as internal mental realisers because they will always lack original value.

Original Value Criterion: You and your constituent parts have original/intrinsic value but external props and artifacts have merely derivative value. Thus there will always be an important ethical distinction between what’s internal to you and what’s not.

The criterion is easy to explain because it has intuitive pull: we probably do think about ourselves (and what is a proper part of ourselves) in this way. But I think it is slightly more difficult to defend because it depends on a number of contested claims. The first contested claim concerns what actually counts as a proper part of ourselves. If we accept the extended mind hypothesis, then external props could count as proper parts of our selves and hence could have the same original value as internal parts (Heinrichs seems to accept this point). The second contested claim follows from this and concerns whether or not internal parts are always intrinsically valuable. If we cast a more critical eye over our internal parts, we might find that some of them do not really count as proper parts of our selves because they do not form some essential or integral element of who we are. In that case their destruction could be ethically trivial. For example, the destruction of one of my neurons is hardly an ethical tragedy: I can survive perfectly fine without it. This suggests, to me, that a single neuron lacks intrinsic value: it is not an essential part of who I am, even if it is internal to my body/brain. The third contested claim concerns whether or not all external props lack intrinsic value. I think this could be challenged. Some external props might have their own, independent value, e.g. aesthetic beauty. Admittedly, this is a tangential point in this debate, but it is worth bearing in mind.

The second criterion for assessing ethical parity is the degree of integration between the user and the external prop. Even though Heinrichs makes much of the original/derivative distinction he acknowledges that some external props could be so closely integrated with a person’s cognitive processes that their value, even though derivative, could be very high. Consider the surgeon who relies on robotic arms to help her complete a delicate operation; or the blind person who uses a cane to help them navigate. There is a high degree of integration between the external props and the user in both of these cases. If you broke down the robotic arms, or stole the cane, you would be doing something with a lot of moral disvalue. This is because the user depends so heavily on the prop that you would seriously disrupt their mental/cognitive processes by interfering with it.

Integration Criterion: When a user is highly integrated with an external prop it can have a high degree of moral value.

But how do you assess degrees of integration? Various sub-criteria have been proposed over the years. Richard Heersmink has argued that there are eight sub-criteria of integration, including (i) the amount of information that flows between the user and the prop; (ii) the reliability of that information; (iii) the durability of the prop; (iv) the degree of trust placed in the prop; (v) the procedural transparency of the prop; (vi) the informational transparency of the prop; (vii) the degree of individualisation/customisation of the prop; and (viii) how much the prop transforms the capabilities of the user. All of these seem sensible, and I agree that the more integrated a user is with an external prop, the higher the moral value attached to it. But as Heinrichs points out, Heersmink’s criteria work best when we are dealing with information technologies, and not with other kinds of external props (e.g. brain stimulation devices).

This leads Heinrichs to consider another criterion, one that he thinks is particularly important: the replaceability criterion. To explain how he thinks about it, I will quote directly from his article:

Replaceability Criterion: “Generally, an irreplaceable contributor to one and the same cognitive process is, ceteris paribus, more important [i.e. carries more value] than a replaceable one.” (Heinrichs 2017, 11)

Using this criterion, Heinrichs suggests that many internal parts are irreplaceable, so their destruction carries a lot of moral weight, whereas many external props are replaceable, so their destruction carries less weight. That said, he also accepts that some external props could be irreplaceable, which means that destroying them would do a serious wrong to an agent. However, he argues that such irreplaceability needs to be assessed over two different timescales. An external prop might be irreplaceable in the short-term — when mid-activity — but not in the long-term. Someone could steal a blind person’s cane while they are walking home, thereby doing significant harm to them with respect to the performance of that activity, but the cane could be easily replaced in the long-term. The question is whether this kind of long-term replaceability makes any moral difference. Intuitively, it seems like it might. Destroying something that is irreplaceable in both the short and long-term would seem to be much worse than destroying something that is replaceable in the long-term. Both are undoubtedly wrong, but they are not ethically on a par.

This brings us, at last, to the question posed in the introduction. If technology continues to advance, and if we develop more external props that allow us to easily replace parts of our brains and bodies — if, in some sense, the component parts of all mental processes are readily fungible — will that mean that there is something trivial about the destruction of the original biological parts? Here’s where the replaceability criterion starts to get into trouble. If you accept that the degree of replaceability makes a moral difference, you start slipping down a slope to a world in which many of our commonsense moral beliefs lose traction. The destruction of limbs and brain parts could be greeted with equanimity because they can be easily replaced. The counterintuitive nature of this world has led others to argue that the replaceability criterion should be deployed with some caution in this context. It clearly doesn’t capture everything we care about when it comes to understanding interpersonal wrongs: there are intrinsic wrongs/harms associated with destroying parts of someone’s body or mental processes that need to be taken very seriously, even if those parts are easily replaceable. Replaceability cannot erase all wrongdoing.

But this suggests to me that more needs to be said about when replaceability really matters and when it doesn’t. It’s possible, after all, that our moral intuitions about right and wrong should not be trusted in a world of perfect technological fungibility. One suggestion I have is that the intrinsic/instrumental value distinction could play an important role in determining when replaceability really matters. Some things are intrinsically valuable: any replacement with a functional equivalent will fail to provide the same level of value. Consider a beloved family pet. You could replace it with another pet, but it wouldn’t be the same. Other things are instrumentally valuable: replacement with a functional equivalent would provide the same level of value. Consider a knife or fork that you use to eat your food. If one falls on the ground and is replaced by a functional equivalent, we don’t lament the loss of the original.

So I think the critical question, then, is whether the parts of our cognitive/mental processes (or biological systems) have some intrinsic value, such that any replacement would fail to provide the same level of value, or whether they are merely instrumentally valuable: they matter because they help to sustain us in certain activities and, ultimately, sustain our personal identities. I tend to favour the instrumental view, which would imply that nostalgia for our original biological parts is irrational in a future of perfect technological fungibility. This does not mean that there is nothing wrong with attacking someone and interfering with those parts. It just means that it might be less wrong than it is in our current predicament.

That might be a disturbing conclusion for some.
