Saturday, January 4, 2014

Is mind-uploading existentially risky? (Part Three)



(Part one, Part two)

This is the third (and final) part in my series on the rationality of mind-uploading. The series deals with something called Searle’s Wager, an argument against the rationality of mind-uploading originally developed by Nicholas Agar in his 2011 book Humanity’s End. This series, however, covers the subsequent debate between Agar and Neil Levy in the pages of the journal AI and Society. The first two parts discussed Levy’s critique; this part discusses Agar’s response.

The basic elements of Searle’s Wager are easily summarised: assume we have some putative mind-uploading technology that scans your brain and then creates an artificial replica of it, one that can be seamlessly integrated with other digital media (the biological original being destroyed in the process). To all external appearances, this replica is you: it has your memories and relevant behavioural traits, and it claims to be conscious. Is this enough to make uploading rational? No, argues Agar. It could be, following the arguments of John Searle (among others), that this upload lacks true intrinsic intentionality and consciousness: that it is not really “you” but rather an empty simulacrum of who you once were. In other words, it is possible that uploading entails your death, not your continuation in another form. This would be the case if we live in a world in which Weak AI is true but Strong AI is not.

Agar’s argument is based on principles of rational choice. His claim is that the possibility that uploading entails death, coupled with its minimal benefits from the perspective of the person likely to face the decision, is enough to make it irrational. Levy’s critique challenged Agar on both fronts. In brief, Levy argued that:

The non-zero probability of Weak AI is not enough to make uploading irrational: In his original discussion, Agar claimed that the non-zero probability that uploading entails death was enough to make the wager work. Levy argued that this is false. Non-zero probabilities of death come pretty cheap: if we tried to avoid every action carrying one, we could scarcely act at all. The principle of rational choice motivating the wager is therefore absurd.
Agar understates the potential benefits of uploading: Another key part of Agar’s argument was the claim that, by the time putative uploading technologies become available, other advanced forms of biological enhancement and life extension will also be available. The benefits of uploading, to the person facing the decision, would thus be minimal: they could already continue their life in a biologically-enhanced form, so what good would a digital or artificial form be? Levy responded that this ignores the role of anchoring points in assessing the benefits of uploading. From our current perspective there might be little difference between continued biological existence and uploaded existence, but from the perspective of the already biologically-enhanced, things might look very different.

In his article, Agar responds to both of these arguments. The remainder of this post summarises these responses.


1. The Probabilities Really Do Matter
Responding to the first argument, Agar makes a significant concession to Levy’s critique: the probabilities really do matter. Unlike the Pascalian wager by which it is inspired, Searle’s Wager deals with finite rewards and finite costs. Consequently, the mere non-zero risk of death is not by itself enough to render uploading irrational. Instead, the risk of death must cross some relevant threshold of reasonableness before uploading counts as irrational.
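
To see the structure of this threshold claim, consider a minimal expected-value sketch (the symbols and numbers here are my own illustration, not Agar’s). Let p be the probability that Strong AI is true (so that the upload really is you), let B be the value of an uploaded existence, let L be the value of continued biologically-enhanced life, and value death at zero. Uploading is then the rational choice only if p × B > L, i.e. only if p > L/B. And if, as Agar maintains, B is only marginally greater than L (say B = 110 and L = 100), uploading requires p greater than roughly 0.91: something close to certainty that Searle is wrong.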

Having made that concession, Agar then proceeds to argue that the risk of death from uploading does cross this threshold. He does not do this by defending some arbitrary numerical figure (e.g. a 5% risk), but by arguing for a principle of epistemic modesty. This will be familiar to anyone who has read up on the epistemology of disagreement (something I once covered in relation to religious disagreements). The principle looks like this (there is an unfortunate typo in the official version of Agar’s article; I have removed it from this statement of the principle):

Principle of Epistemic Modesty (PEM): Suppose others believe differently from you on a matter of fact. Their access to the facts appears to be as good as yours, and they seem not to be motivated by considerations independent of those facts. You should take seriously the possibility that you are wrong and your opponents are right.

Agar supports this principle, both in relation to Strong AI and to other philosophical doctrines, with a series of observations and thought experiments. For example, he cites the fact that many reasonable people (e.g. Searle), with access to all the same facts and arguments, reject Strong AI. He also discusses the general problem that many philosophical doctrines, such as the truth of Strong AI over Weak AI, are not subject to empirical confirmation. At best, the truth of Strong AI could be verified, subjectively, by the person who chooses to undergo uploading. But, of course, such a person would have to take the very risk that Searle’s Wager counsels against in order to verify the claim.

Another problem mentioned by Agar, in support of PEM, is that many controversial philosophical doctrines, Strong AI among them, depend on or are connected to other philosophical doctrines that are themselves quite controversial. He gives the example of physicalism and Strong AI to illustrate the point. Although Strong AI is not strictly incompatible with non-physicalism, Agar suggests that the truth of physicalism would raise the probability of Strong AI, while its falsity would lower it. And although he is himself a committed physicalist, he acknowledges that physicalism could be false, and that this uncertainty should feed into the overall probability of Strong AI.
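
The dependency point can be made vivid with some numbers of my own (they are purely illustrative, not Agar’s). By the law of total probability, P(Strong AI) = P(Strong AI | physicalism) × P(physicalism) + P(Strong AI | not-physicalism) × P(not-physicalism). Suppose you think Strong AI is 80% likely if physicalism is true, only 20% likely if it is false, and you are 70% confident in physicalism itself. Your overall confidence in Strong AI should then be (0.8 × 0.7) + (0.2 × 0.3) = 0.62, well below your conditional confidence. Uncertainty in the supporting doctrine bleeds into the doctrine it supports.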

For the most part, I don’t have a problem with these particular arguments. I agree that it is difficult to verify certain philosophical doctrines; and I agree that dependency affects overall probability (though there is a problem here when it comes to dwindling probabilities: conjoin enough uncertain propositions and you can make virtually anything seem improbable; ten claims, each 90% probable, have a joint probability of roughly 35%). I’m less sure about how the reasonableness of other people’s beliefs should affect one’s own reasoning, but I’ll set that aside here (it is discussed in my earlier series on disagreement in the philosophy of religion). Where I do have a problem is with Agar’s use of a thought experiment to explain what it means to take your opponents’ views seriously.

He asks you to imagine an omniscient being giving you the opportunity to place a bet on all your core philosophical commitments — physicalism, naturalism, consequentialism, Platonism, scientific realism, epistemological externalism and so on. The bet works like this: (i) if it turns out that your commitments are true, you will get a small monetary reward; (ii) if it turns out that they are false, you will die. Would you accept the bet? Agar says that he is pretty sure that physicalism is true, but he wouldn’t bet his life on it. He thinks that our commitment to other controversial philosophical doctrines should entail a similar level of caution.
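
The expected-value logic of the bet is easy to make explicit (the figures are mine, purely for illustration). Suppose you are 95% confident in physicalism, the monetary reward is worth 10 units of utility, and death costs you 1,000. The bet is then worth (0.95 × 10) − (0.05 × 1,000) = −40.5: clearly negative. Even very high confidence cannot rescue a gamble whose upside is trivial and whose downside is catastrophic, and that, Agar thinks, is precisely the structure of Searle’s Wager.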

The problem I have with this thought experiment is the role it plays in the overall dialectic. Agar’s overarching goal is to show that the probability of Strong AI being false is sufficient to warrant caution in the case of Searle’s Wager. But this thought experiment simply restates Searle’s Wager, albeit in a more general form that applies to other philosophical doctrines. It seems like he is just underlining or highlighting the original argument, not presenting a new argument that supports the Wager. Am I wrong? Comments below please.


2. The Rewards of Uploading?
Responding to the second of Levy’s criticisms, Agar tries to argue that the costs of uploading (relative to the putative benefits) are sufficient — even from the perspective of the hypothetical future agent who may face the decision — to render uploading irrational. Agar’s discussion of this point is largely a repeat of his previous claims about the limited gains of uploading, coupled with some responses to Levy’s specific complaints. I’ll run through these briefly here, offering some critical comments of my own along the way.

When it comes to the potential costs of uploading, Levy criticises Agar for claiming that it might entail death simply because it might entail the cessation of consciousness. According to Levy, a “modified” version of the organism view of identity can block this conclusion. Agar, rightly as far as I’m concerned, expresses some puzzlement at this. The organism view maintains that the continuation of identity depends on the continuation of the underlying biological organism. Given that uploading entails switching from a biological organism to a non-biological one, it is difficult to see how the view blocks the conclusion that uploading entails death. There would have to be some serious “modification” of the organism view to make that comprehensible. Furthermore, even if identity were preserved in this manner, the cessation of continued consciousness might make uploading bad enough to be worth avoiding (although see here for a provocative counterargument).

When it comes to the potential benefits of uploading, Levy criticises Agar for not fully appreciating the decision from the perspective of the hypothetical future agent. That person, as you will recall, is likely to have many biological enhancements and the potential for an indefinite biological life with those enhancements. Still, might they not be tempted by the additional enhancements of non-biological existence? Agar responds by going into more detail about the alleged benefits of uploading, appealing in the process to the writings of perhaps its most influential advocate, Ray Kurzweil. The notion of mind-uploading appeals to Kurzweil (in part) because it could allow for the exponential expansion of our intellects. Apparently, with uploading we will be able to cannibalise the inanimate parts of the universe and incorporate them into our own intelligences:

[u]ltimately, the entire universe will become saturated with our intelligence.... We will determine our own fate rather than having it determined by the current ‘dumb’ simple, machinelike forces that rule celestial mechanics. 
(Kurzweil, The Singularity is Near, p. 29)

Agar suggests that this vastly expanded intellect is less appealing than it first seems. After all, it may well entail a kind of hivemind (or Borg-like) existence, with some significant erosion of individuality or personality. Those things are valuable to us, and are likely to remain valuable to us in the future. Do we really want to risk losing them? Furthermore, the values of this vastly expanded intellect could be truly alien to our own. Surely it would be better to stick to a comprehensible, more human-like form of existence? (This is actually Agar’s main argument throughout his book: don’t gamble on posthuman values.)

Emerging out of this argument is Agar’s response to Levy about the case for investing in mind-uploading technologies in the here and now. As you’ll recall from part one, Levy also criticised Agar for surreptitiously switching back and forth between different timeframes when developing the case for Searle’s Wager: sometimes looking at it from the perspective of a hypothetical future mind-uploader, and sometimes from the perspective of the present. The only decision we have to make in the here and now is whether to invest in mind-uploading technologies or not. Levy suggested that it could make sense to invest, even if we don’t currently value an uploaded existence, because we might come to value it at the relevant time. It would be like a man who learns bridge in his early 50s, even though he has no present interest in it, because he hopes he will value it when he reaches retirement, at which point bridge-playing may be an enjoyable social activity.

Agar responds by arguing that it may indeed make sense for the future retiree to make decisions like this, but only because his future involves a comprehensibly human form of existence. He wants an enjoyable retirement, with a rich and rewarding social life, and he sees that bridge-playing might be an essential part of that. This is very different from a biological human gambling on a non-biological form of existence that may not even entail his or her continued existence as an individual.

I think there is some merit to these points. Of course, uploading need not entail the kind of exponential increase in intelligence envisioned by Kurzweil, but then the question becomes whether it offers anything beyond an enhanced and indefinite form of biological existence. This does, however, bring me to an aspect of Agar’s argument that I find difficult to follow. Agar repeatedly insists (both in this article and in the original book) that his argument concerns the future person who has biological enhancements, including something like de Grey’s LEV (longevity escape velocity) technologies. To that person, uploading will be irrational. Agar underscores the centrality of this assumption by conceding that if such alternative technologies are not available, uploading will seem less irrational. Why so? Because it might then be the only plausible means for a terminally ill person to continue their life.

I get that. But then Agar also insists that his argument does not depend on whether LEV-technologies (or their ilk) are actually available but on whether they are likely to become available before uploading technologies (i.e. he insists that his claim is about the relative likelihood of the technologies coming onstream):

My point here requires only that LEV is likely to arrive sooner than uploading. Uploading requires not only a completed neuroscience, total understanding of what is currently the least well-understood part of the human body, but also perfect knowledge of how to convert every relevant aspect of the brain’s functioning into electronic computation. It is therefore likely to be harder to achieve than LEV. 
(Agar, 2012, p. 435)

This is the bit I don’t get. I think that in saying this Agar is illegitimately switching back and forth between different timeframes, in just the manner Levy critiqued. As I see it, there are two ways to interpret the claim about relative likelihood:

Interpretation One: Agar is speaking from the present about the timelines on which the different technologies will be realised. In other words, he is simply claiming that LEV is likely to be achieved before uploading. This might be true. The problem is that it doesn’t affect the rationality of the decision from the perspective of the hypothetical future agent. From that perspective, either both LEV and uploading will be available, or only one of them will. If only uploading is available, it might be rational to opt for it as the sole means of continuing one’s life. The relative likelihood is unimportant.
Interpretation Two: Agar is speaking from the perspective of the hypothetical future agent, who doesn’t quite have the full suite of LEV technologies available to them, but faces a choice between some temporarily life-extending therapy and uploading. Maybe then the argument is that this person should gamble on the temporarily life-extending technology in the hope that more LEV technologies will come onstream before they die.

The first interpretation is textually supported by what Agar says, but it doesn’t help the argument about future irrationality. The second interpretation is more sensible in this regard, but isn’t supported by what Agar says. It also faces some limitations. For example, there could well be scenarios in which the benefits of a few more months of biological existence are outweighed by the possible benefits of uploading.

So I’m left somewhat puzzled by what Agar means when he talks about the relative likelihood of the different technologies. Which interpretation does he favour? Is there another interpretation that I’m missing?


3. Conclusion
To briefly sum up: in this series I’ve looked at the debate between Neil Levy and Nicholas Agar over the merits of Searle’s Wager. The wager purports to show that mind-uploading is irrational because of the risk that it entails death. Levy critiqued the wager on two grounds: that it relied on an unsustainable principle of rational choice (viz. that we should avoid any action carrying a non-zero risk of death), and that it understated the possible benefits of uploading.

In this post, we’ve looked at Agar’s responses. He agrees with Levy that his original principle of rational choice is unsustainable, but responds by claiming that the risk of death from uploading is still sufficiently high to merit caution. He also defends his cost/benefit analysis of uploading by arguing that it involves gambling on a posthuman form of existence. I’ve raised some questions about the arguments he uses to support both of these responses.

Anyway, I’ll leave it there for now. I’ll be doing some more posts on mind-uploading in the near future.
