(Previous Entry, Series Index)
This is the third part of my series on Nicholas Agar’s book Truly Human Enhancement. As mentioned previously, Agar stakes out an interesting middle ground on the topic of enhancement. He argues that modest forms of enhancement — i.e. up to or slightly beyond the current range of human norms — are prudentially wise, whereas radical forms of enhancement — i.e. well beyond the current range of human norms — are not. His main support for this is his belief that in radically enhancing ourselves we will lose certain internal goods. These are goods that are intrinsic to some of our current activities.
I’m offering my reflections on parts of the book as I read through it. I’m currently on the second half of Chapter 3. In the first half of Chapter 3, Agar argued that humans are (rightly) uninterested in the activities of the radically enhanced because they cannot veridically engage with those activities. That is to say: because they cannot accurately imagine what it is like to engage in those activities. I discussed this argument in the previous entry.
As Agar himself notes, the argument in the first half of the chapter only speaks to the internal goods of certain human activities. In other words, it argues that we should keep enhancements modest because we shouldn’t wish to lose goods that are intrinsic to our current activities. This ignores the possible external goods that could be brought about by radical enhancement. The second half of the chapter deals with these.
1. External Goods and the False Dichotomy
It would be easy for someone reading the first half of chapter 3 to come back at Agar with the following argument:
Trumping External Goods Argument: I grant that there are goods that are internal and external to our activities, and I grant that radical enhancement could cause us to lose certain internal goods. Still, we can’t dismiss the external goods that might be possible through radical enhancement. Suppose, for example, that a radically enhanced medical researcher (or team of researchers) could find a cure for cancer. Wouldn’t it be perverse to forgo this possibility for the sake of some internal goods? Don’t certain external goods (which may be made possible by radical enhancement) trump internal goods?
The proponent of this argument is presenting us with a dilemma, of sorts. He or she is saying that we can stick with the internal and external goods that are possible with current or slightly enhanced human capacities, or we can go for more and better external goods. It would seem silly to opt for the former when the possibilities are so tantalising, especially given that Agar himself acknowledges that new internal goods may be possible with radically enhanced abilities.
The problem with this argument is that it presents us with a false dilemma. We don’t have to pick and choose; we can have the best of both worlds. How so? Well, as Agar sees it, we don’t have to radically enhance our abilities in order to secure the kinds of external goods evoked by the proponent of the trumping argument. We have other kinds of technology (e.g. machines and artificial intelligences) that can help us to do this.
What’s more, as Agar goes on to suggest, these other kinds of technology are far more likely to be successful. Radical forms of enhancement need to be integrated with the human biological architecture. This is a tricky process because you have to work within the constraints posed by that architecture. For example, brain-computer interfaces and neuroprosthetics, currently in their infancy, face significant engineering challenges in trying to integrate electrodes with neurons. External devices, with some user-friendly interface, are much easier to engineer, and don’t face the same constraints.
Agar illustrates this with a thought experiment:
The Pyramid Builders: Suppose you are a Pharaoh building a pyramid. This takes a huge amount of back-breaking labour from ordinary human workers (or slaves). Clearly some investment in worker enhancement would be desirable. But there are two ways of going about it. You could either invest in human enhancement technologies, looking into drugs or other supplements to increase the strength, stamina and endurance of workers, maybe even creating robotic limbs that graft onto their current limbs. Or you could invest in other enhancing technologies such as machines to sculpt and haul the stone blocks needed for construction.
Which investment strategy do you choose?
The question is a bit of a throwaway since, obviously, Pharaohs are unlikely to have the patience for investment of either sort. Still, it seems like the second investment strategy is the wiser one. We have long had construction machines that aren’t directly integrated with our biology. They are extremely useful, going well beyond what is possible for an unassisted human. This suggests that the second option is more likely to be successful. Agar argues that this is all down to the integration problem.
2. Gambling on Radical Enhancement: Is It Worth It?
I think it’s useful to reformulate Agar’s argument using some concepts and tools from decision theory. I say this because many of Agar’s arguments against radical enhancement seem to rely on claims about what we should be willing (or unwilling) to gamble on when it comes to enhancement. So it might be useful to have one semi-formal illustration of the decision problems underlying his arguments, which can then be adapted for subsequent examples.
We can do this for the preceding argument by starting with a decision tree. A decision tree is, as the name suggests, a tree-like diagram that represents the branching possibilities you confront every time you make a decision. The nodes in this diagram depict either decision points or chance points at which probabilities determine different outcomes (sometimes we think of this in terms of “Nature” making a decision by determining the probabilities, but this is just a metaphor).
Anyway, the decision tree for the preceding argument works something like this. At the first node, there is a decision point: you can opt for radical enhancement or modest (or no) enhancement. This then branches out into two possible futures. In each of those futures there is a certain probability that we will secure the kinds of external goods (like cancer cures) alluded to by the proponent of the trumping argument, and a certain (complementary) probability that we won’t. So this means that either of our initial decisions leads to two further possible outcomes. This gives us four outcomes in total:
Outcome A: We radically enhance, thereby losing our current set of internal goods, and fail to secure trumping external goods.
Outcome B: We radically enhance, thereby losing our current set of internal goods, but succeed in securing trumping external goods.
Outcome C: We modestly enhance (or don’t enhance at all), keeping our current set of internal goods, and fail to secure trumping external goods through other technologies.
Outcome D: We modestly enhance (or don’t enhance at all), keeping our current set of internal goods, but succeed in securing trumping external goods through other technologies.
This is all depicted in the diagram below.
With the diagram in place, we have a clearer handle on the decision problem confronting us. Even without knowing what the probabilities are, or without even having a good estimate for those probabilities, we begin to see where Agar is coming from. Since radical enhancement always seems to entail the loss of internal goods, modest enhancement looks like the safer bet (maybe even a dominant one). This is bolstered by Agar’s argument that we have good reason to suppose that the probability of securing the trumping external goods is greater through the use of other technologies. Hence, modest enhancement really is the better bet.
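To make the structure of this reasoning concrete, here is a minimal sketch in Python. This is my own illustration, not Agar’s: the probabilities and payoffs are invented, chosen so that (as Agar suggests) modest enhancement has both the higher chance of securing the trumping external goods and the better payoff in each column, since radical enhancement forfeits our current internal goods in every outcome.

```python
# A minimal sketch of the decision problem described above. All numbers are
# purely illustrative assumptions, not figures from Agar's book. Payoffs
# bundle internal and external goods into a single value for simplicity.

def expected_value(p_success, payoff_success, payoff_failure):
    """Expected value of a chance node with two complementary outcomes."""
    return p_success * payoff_success + (1 - p_success) * payoff_failure

# Radical enhancement: loses current internal goods, so both outcomes
# (A and B in the list above) start from a lower baseline.
radical = expected_value(p_success=0.3, payoff_success=10, payoff_failure=0)

# Modest enhancement: keeps internal goods, and (per Agar) external
# technologies make the trumping external goods more likely (outcomes C, D).
modest = expected_value(p_success=0.5, payoff_success=12, payoff_failure=2)

print(f"Radical enhancement: EV = {radical:.1f}")  # 3.0
print(f"Modest enhancement:  EV = {modest:.1f}")   # 7.0
```

Obviously the real decision problem is not this tidy. The point is only that once the loss of internal goods is built into every radical-enhancement outcome, modest enhancement is better in both the success and failure columns, which is what it means for it to be a dominant option.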
There are a couple of problems with this formalisation. First, the proponent of radical enhancement may argue that it doesn’t accurately capture their imagined future. To be precise, the proponent could argue that I haven’t factored in the new forms of internal good that may be made possible with radically enhanced abilities. That’s true, and that might be a relevant consideration, but bear in mind that those new internal goods are, at present, entirely unknown. Is it not better to stick with what we know?
Second, I think I’m being a little too coarse-grained in my description of the possible futures involved. I think it’s odd to suggest, as the decision tree does, that there could be a future in which we never achieve certain trumping external goods. That would suppose that there could be a future in which there is no progress on significant moral problems at our current level of technology. That seems unrealistic to me. Consequently, I think it might be better to reformulate the decision tree with a specific set of external goods in mind (e.g. a cure for cancer, an end to world hunger, reductions in childhood mortality, and so on).
3. The External Mind Objection
There is another objection to Agar’s argument that is worth addressing separately. It is one that he himself engages with. It is the objection from the proponent of the external mind thesis. This thesis can be characterised in the following manner:
External Mind Thesis: Our minds are not simply confined to our skulls or bodies. Instead, they spill out into the world around us. All the external technologies and mechanisms (e.g. calculators, encyclopedias) we use to help us think and interact with the world are part of our “minds”.
The EMT has been famously defended by Andy Clark (and David Chalmers). Clark argues that the EMT implies that we are all cyborgs because of the way in which technology permeates our lives. The EMT can be seen to follow from a functionalist theory of mind.
The thing about the EMT is that it might also suggest that the distinction Agar draws between different kinds of technological enhancement is an unprincipled one. Agar wants to argue that technologies that enhance by being integrated with our biology are different from technologies that enhance by providing us with externally accessible user interfaces. An example would be the difference between a lifting machine like a forklift and a strength enhancing drug that allows us to lift heavier objects. The former is external and non-integrated; the latter is internal and integrated. The defender of the EMT argues that this is a distinction without a difference. Both kinds of technological assistance are part of us, part of how we interact with and think about the world.
Agar could respond to this by simply rejecting the EMT, but he doesn’t do this. He thinks the EMT may be a useful framework for psychological explanation. What he does deny, however, is its usefulness across all issues involving our interactions with the world. There may be some contexts in which the distinction between the mind/body and the external world counts for something. For example, in the study of the spread of cancer cells, the distinction between what goes on in your body and what goes on in the world outside it is important (excepting viral forms of cancer). Likewise, the distinction between what goes on in our heads and what goes on outside might count for something. In particular, if we risk losing internal goods through integrated enhancement, why not stick with external enhancement? This doesn’t undermine Clark’s general point that we are “cyborgs”; it just says that there are different kinds of cyborg existence, some of which might be more valuable to us than others.
I don’t have any particular issue with this aspect of Agar’s argument. It seems correct to me to say that the EMT doesn’t imply that all forms of extension are equally valuable.
That brings us to the end of chapter 3. In the next set of entries, I’ll be looking at the arguments in chapter 4, which have to do with radical enhancement and personal identity.