Thursday, January 2, 2014

Is mind uploading existentially risky? (Part Two)


(Part One)

This is the second in a series of posts looking at Searle's Wager and the rationality of mind-uploading. Searle's Wager is an argument that was originally developed by the philosopher Nicholas Agar. It claims that uploading one's mind to a computer (or equivalent substrate) cannot be rational because there is a risk that it might entail death. I covered the argument on this blog back in 2011. In this series, I'm looking at a debate between Nicholas Agar and Neil Levy about the merits of the argument. The current focus is on Levy's critique.

As we saw in part one, the Wager argument derives its force from the uncertainty about which of the following two philosophical theses is true: (i) weak AI, which holds that mind-uploading is not possible because a machine cannot be conscious or capable of human-type understanding; or (ii) strong AI, which holds that mind-uploading is possible. Agar submits that because there is some chance that weak AI is true, it cannot be rational to upload your mind since that could entail your self-destruction.

Or, at least, it cannot be rational from the perspective of our future descendants who may face the choice. This is because, by the time they face it, technology is likely to have advanced to the point where radical (biological) life extension is a reality and hence any decision to upload would require forgoing the benefits of radical biological life extension. In part one, we saw how Levy takes Agar to task for his inconsistency about the relevant timeframes in the argument. In today's post we are going to look at the larger issue of probabilities and philosophical risk. These play an important part in Agar's overall argumentative strategy, but Levy thinks Agar's use of probabilities is misleading: if we followed Agar's approach, we would never be able to rationally make any decision.

The remainder of this post is divided into three sections. First, I look at the topic of philosophical risk and the relationship between Searle's Wager and the classic Pascal's Wager. Second, I outline Levy's critique. Third, I offer some brief comments on this critique. I really will be brief in this final section since, in the next post, I will be dealing with Agar's response to Levy and Agar anticipates some of the points I would like to make.


1. Philosophical Risk and Pascal's Wager
I strategically de-emphasised the links between Searle's Wager and Pascal's Wager in part one. But there is no avoiding them now, since certain parts of Levy's critique play upon the differences between the two. I apologise in advance to those of you who are already familiar with Pascal's Wager and the vast literature surrounding it.

Pascal's Wager is an argument about the rationality of believing in God. Originally developed by the French mathematician (and polymath) Blaise Pascal, it was supposed to show that anyone who wished to maximise expected utility (or minimise expected loss) should believe in God. This was because the expected reward that would come from doing so was so great -- in fact, infinitely great -- and the expected loss of not doing so was so extreme -- in fact, infinitely extreme -- that believing was clearly the rational thing to do, provided that the probability of God’s existence was greater than zero.

In broad outline, the Pascalian Wager had the following structure:

Pascal's Wager: God may or may not exist, and we each have a choice of whether or not to believe in him. The expected rewards/losses that come from doing so are:
A. Non-belief + God does exist: Punishment forever in hell (i.e. infinite punishment);
B. Non-belief + God does not exist: Some finite utility gain over the course of one lifetime, due to forgoing the costs of believing;
C. Belief + God does exist: Reward forever in heaven (i.e. infinite reward);
D. Belief + God does not exist: Some finite utility loss over the course of one lifetime, due to the costs of believing.

Now, there are many criticisms of this characterisation of the wager in the literature. For example, people argue that it fails to capture all the possible choices one has to make (which God, which specific doctrinal view etc.), or that it presumes too readily that believing is a voluntary action with certain definite consequences. But if we set these aside, the argument does make an interesting point about relative risks and the role they ought to play in decision-making. You see, Pascal's key observation was that the risks/rewards from believing/not-believing are asymmetrical. In particular, the expected losses from not-believing so massively outweighed its modest potential gains, and the expected gains from believing so massively outweighed its modest costs, that the decision to believe dominated the decision to not-believe, so long as God's existence had some non-zero probability attached to it.
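To see how the dominance reasoning works, here is a minimal sketch in Python. This is my own illustration, not anything from Pascal (or Agar or Levy); the finite cost value c is an arbitrary assumption, and math.inf simply stands in for the infinite payoffs:

```python
import math

# Illustrative payoffs for outcomes A-D above. math.inf stands in for the
# infinite reward/punishment; c is the finite lifetime cost of believing.
# The value of c is an arbitrary assumption for illustration.
c = 1.0

def expected_utility(p_god, payoff_if_god, payoff_if_not):
    """Expected utility of a choice, given probability p_god that God exists."""
    return p_god * payoff_if_god + (1 - p_god) * payoff_if_not

# Any non-zero probability, however tiny, produces the same ranking.
for p in (0.5, 0.01, 1e-9):
    eu_belief = expected_utility(p, math.inf, -c)      # outcomes C and D
    eu_non_belief = expected_utility(p, -math.inf, c)  # outcomes A and B
    print(f"p = {p}: EU(belief) = {eu_belief}, EU(non-belief) = {eu_non_belief}")
```

However small p is, the infinite payoffs swamp the finite ones, so belief dominates. Keep that feature in mind: as we will see, Levy's complaint is that Searle's Wager lacks it, because there the gains and losses are finite.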

Pascal's Wager contains probably the most extreme example of risk asymmetry you'll ever see, which is what makes it so potentially compelling: it is unlikely that you'll come across another argument in which literally infinite gains are compared to infinite losses. Nevertheless, risk asymmetry arguments of this general type are common in the philosophical literature. The recently buoyant field of moral risk, for example, centres on several risk asymmetry arguments, whose proponents claim that they should lead us to alter our moral choices. For example, some argue that the moral risks associated with aborting a foetus are sufficiently asymmetrical to its potential moral rewards (or, indeed, its moral neutrality) that one ought to minimise moral risk by avoiding abortion.

Clearly, that's a controversial argument. But Searle's Wager has a similar structure. In essence, Agar's claim is that the risk associated with uploading in a world in which weak AI is true is sufficiently asymmetrical to the reward associated with uploading in a world in which strong AI is true for not-uploading to be the rational choice. It is this alleged asymmetry, and its supposed effect on rational choice, that Neil Levy challenges in his critique.


2. Too Much Philosophical Doubt in the World?
In making his argument about the risk asymmetries involved in uploading, Agar doesn't actually offer us any specific estimates of those risks. This is not surprising, since any such estimates would be highly controversial. What Agar does do is make some very general claims about the kinds of probabilities that would be required for his argument to succeed. It is here that Levy latches onto a particular quote that he thinks undermines the risk asymmetry argument. The quote is:

[I]f there is room for rational disagreement you should not treat the probability of Searle being correct as zero...This is all that the Wager requires 
(Agar, 2010, p. 71)

What Agar appears to be saying here is that the risk asymmetry between the world in which weak AI is true and the world in which it is false is so great that, so long as there is a non-zero probability that weak AI is true, uploading will always be irrational. Why so? Because a non-zero probability that we are living in the weak AI world means a non-zero probability that uploading will entail our deaths.

Levy argues that this is much too strong a claim. A non-zero probability might make a difference in the case of Pascal’s Wager, where the risks and rewards are potentially infinite. But it makes no such difference when we are dealing with finite gains and losses. Consequently, Agar is working with a principle of rational choice that cannot plausibly be sustained. If it were the case that a non-zero probability of death always ruled out a particular choice, then we wouldn’t be able to perform any actions at all. After all, virtually any action you might like to think of carries a non-zero probability of death.
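A toy calculation makes Levy's point vivid. In this Python sketch (all the numbers are my own illustrative assumptions, not figures from Levy), a mundane action like crossing the road carries a non-zero probability of death, yet its expected utility remains positive:

```python
# All values are illustrative assumptions: a minuscule probability of death,
# a large but *finite* disvalue attached to death, and a small everyday gain.
p_death = 1e-8        # assumed chance of being killed while crossing the road
death_value = -1_000_000.0
benefit = 1.0         # the small gain from getting to the other side

eu_cross = p_death * death_value + (1 - p_death) * benefit
eu_stay = 0.0         # staying put: no risk, but no gain either

print(f"EU(cross) = {eu_cross:.4f}, EU(stay) = {eu_stay}")
# EU(cross) is roughly 0.99 > 0: crossing remains rational even though
# the probability of death is non-zero.
```

With finite stakes, in other words, a non-zero probability of death is not by itself decisive; it has to be weighed against the size of the gains and losses.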

Levy’s critique can be interpreted as a simple reductio-style argument, as follows (numbering continues from part one):


  • (8) Agar’s argument requires only that any action with a non-zero probability of death attached to it be avoided.
  • (9) If it were the case that any action that carried a non-zero probability of death were to be avoided, many actions (including actions that Agar thinks might be rational) would be ruled out.
  • (10) It would be absurd to rule out all of these actions.
  • (11) Therefore, the principle guiding Agar’s argument is absurd.


We’ve already seen how premise (8) is textually supported by the quote from Agar’s book. Thus, the other two premises (9) and (10) are the key to Levy’s critique. What can be said in favour of them?

Let’s start with premise (9). Levy gives several examples of the problems that flow from this principle. I’ll just focus on a couple here. The first is the rationality of using neuroprosthetic enhancements. These are artificial devices that are connected to the rest of your biological brain and enhance particular cognitive functions. Agar seems to endorse the use of such devices on the grounds that the brain is modular and hence replacing certain parts of it (NB: I don’t think Agar advocates complete replacement) with functionally enhanced analogues would be better than copying and uploading an emulation of the whole brain. But, of course, the modularity claim is not certain. As Levy points out, there is a non-zero chance that the modular hypothesis about brain function is wrong and that replacing individual parts of the brain will actually lead to the cessation of consciousness, i.e. your death. Levy argues that something similar is true of other putative forms of enhancement (e.g. genetic modification) and, indeed, a whole host of other mundane actions (e.g. crossing the road). Non-zero probabilities of death come pretty cheap and infiltrate many of our decisions.

(Levy also points out how Agar’s argument relies on other controversial philosophical claims that have a non-zero probability of falsehood. For example, claims about the badness of death, or about what it means for a person to cease existing.)

The point of all these examples is relatively clear: if we had to start ruling out all these other actions, particularly the mundane ones, we would never be able to act at all. That would be absurd. It follows then that Agar’s motivating assumption about a non-zero probability of death is itself absurd. Levy thinks that Agar must resolve this absurdity by making some more concrete claims about the costs and benefits associated with uploading.


3. Comments and Conclusions
What are we to make of all this? Funnily enough, when I read Agar’s argument a few years back I didn’t latch onto the quote about non-zero probabilities in quite the same way as Levy. I never thought that Agar’s argument relied on the absurd view that any philosophical risk is sufficient to render an action irrational — something I discussed in an earlier series on moral risk. I always thought that it relied on the view that risks above a certain threshold were sufficient to render an action irrational. Or, maybe, that the relative risk of uploading was much higher than that of not-uploading so that the latter was always going to be preferable to the former.
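For concreteness, here is a minimal sketch of the kind of threshold or relative-risk principle I have in mind. This is purely my own reconstruction for illustration; neither the threshold value nor the relative-risk factor comes from Agar or Levy:

```python
RISK_THRESHOLD = 0.01    # assumed: death risks above 1% are unacceptable
RELATIVE_FACTOR = 10.0   # assumed: or ten times riskier than the alternative

def ruled_out(p_death_action, p_death_alternative):
    """A threshold/relative-risk rule: reject the action if its death risk
    crosses the absolute threshold or dwarfs the alternative's risk."""
    return (p_death_action > RISK_THRESHOLD or
            p_death_action > RELATIVE_FACTOR * p_death_alternative)

# Uploading, if the probability of weak AI being true is appreciable,
# plausibly fails the test; crossing the road does not.
print(ruled_out(p_death_action=0.3, p_death_alternative=0.001))   # True
print(ruled_out(p_death_action=1e-8, p_death_alternative=1e-8))   # False
```

Nothing hangs on the particular numbers; the point is only that a principle of this shape could rule out uploading without also ruling out everyday risks.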

To put it another way, I never thought that premise (8) was a core commitment of Agar’s argument; I thought the motivating principle had something to do with risk thresholds or relative risks. It turns out that Agar agrees with this. After Levy penned his critique, Agar responded and, in doing so, clarified the assumptions underlying the wager. We’ll look at what he had to say in the next post.
