
Sunday, May 22, 2011

Should we Upload Our Minds? Agar on Searle's Wager (Part Two)



(Part One)

This post is the second in a brief series looking at Nicholas Agar’s Searlian Wager argument. The argument is a response to Ray Kurzweil’s claim that we should upload our minds to some electronic medium in order to experience the full benefits of the law of accelerating returns. If that means nothing to you, read part one.

The crucial premise in the Searlian Wager argument concerns the costs and benefits of uploading your mind versus the costs and benefits of not uploading your mind. To be precise, the crucial premise says that the expected payoff of uploading your mind is less than the expected payoff of not uploading your mind. Thus, it would not be rational to upload your mind.

In this post I want to outline Agar’s defence of the crucial premise.


1. Agar’s Strategy
The following is the game tree representing Searle’s Wager. It depicts the four outcomes that arise from our choice of whether to upload or not under the two possible conditions (Strong AI or Weak AI).




The initial force of the Searlian Wager derives from recognising the possibility that Weak AI is true. For if Weak AI is true, the act of uploading would effectively amount to an act of self-destruction. But recognising the possibility that Weak AI is true is not enough to support the argument. Expected utility calculations can often have strange and counterintuitive results. To know what we should really do, we have to know whether the following inequality really holds (numbering follows part one):


  • (6) Eu(~U) > Eu(U)


But there’s a problem: we have no figures to plug into the relevant equations, and even if we did come up with figures, people would probably dispute them (“You’re underestimating the benefits of uploading”, “You’re underestimating the costs of uploading” etc. etc.). So what can we do? Agar employs an interesting strategy. He reckons that if he can show that the following two propositions hold true, he can defend (6).


  • (8) Death (outcome c) is much worse for those considering whether to upload than living (outcome b or d).


  • (9) Uploading and surviving (a) is not much better, and possibly worse, than not uploading and living (b or d).


As I say, this strategy is interesting. While I know that it is effective for a certain range of values (I checked), it is beyond my own mathematical competence to prove that it is generally effective (i.e. true for all values of a, b, c, and d, and all probabilities p and 1-p, that satisfy the conditions set down in 8 and 9). If anyone is comfortable trying to prove this kind of thing, I’d be interested in hearing what they have to say.
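For what it’s worth, here is a rough Monte Carlo version of the check I have in mind, in Python. The payoff ranges are arbitrary illustrative choices of mine, not Agar’s figures (which don’t exist); on these ranges, (6) fails only in the small fraction of trials where the probability of Strong AI is pushed very close to 1:

```python
import random

# Monte Carlo check: sample payoffs consistent with (8) and (9) and
# count how often the inequality in (6) fails. All of the ranges
# below are illustrative assumptions, not Agar's figures.
random.seed(42)

trials = 100_000
violations = 0
for _ in range(trials):
    p = random.random()               # probability that Strong AI is true
    b = random.uniform(50, 100)       # don't upload, Strong AI (outcome b)
    d = random.uniform(50, 100)       # don't upload, Weak AI (outcome d)
    c = random.uniform(-1000, -500)   # upload under Weak AI: death, per (8)
    a = b * random.uniform(0.9, 1.1)  # upload and survive: "not much better,
                                      # possibly worse" than b, per (9)
    eu_upload = p * a + (1 - p) * c       # Eu(U)
    eu_not_upload = p * b + (1 - p) * d   # Eu(~U)
    if eu_upload >= eu_not_upload:
        violations += 1

print(f"(6) failed in {violations} of {trials} trials")
```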

In the meantime, I’ll continue to spell out how Agar defends (8) and (9).


2. A Fate Worse than Death?
On the face of it, (8) seems to be obviously false. There would appear to be contexts in which the risk of self-destruction does not outweigh the potential benefit (however improbable) of continued existence. Such a context is often exploited by the purveyors of cryonics. It looks something like this:

You have recently been diagnosed with a terminal illness. The doctors say you’ve got six months to live, tops. They tell you to go home, get your house in order, and prepare to die. But you’re having none of it. You recently read some adverts for a cryonics company in California. For a fee, they will freeze your disease-ridden body (or just the brain!) to a cool -196°C and keep it in storage, with instructions that it only be thawed out when a cure for your illness has been found. What a great idea, you think to yourself. Since you’re going to die anyway, why not take the chance (make the bet) that they’ll be able to resuscitate and cure you in the future? After all, you’ve got nothing to lose.

This is a persuasive argument. Agar concedes as much. But he thinks the wager facing our potential uploader is going to be crucially different from that facing the cryonics patient. The uploader will not face the choice between certain death, on the one hand, and possible death/possible survival, on the other. No; the uploader will face the choice between continued biological existence with biological enhancements, on the one hand, and possible death/possible survival (with electronic enhancements), on the other.

The reason has to do with the kinds of technological wonders we can expect to have developed by the time we figure out how to upload our minds. Agar reckons we can expect such wonders to allow for the indefinite continuance of biological existence. To support his point, he appeals to the ideas of Aubrey de Grey, who thinks that, given appropriate funding, medical technologies could soon help us to achieve longevity escape velocity (LEV). This is the point at which new anti-aging therapies consistently add years to our life expectancies faster than age consumes them.

If we do achieve LEV, and we do so before we achieve uploadability, then premise (8) would seem defensible. Note that this argument does not actually require LEV to be highly probable. It only requires it to be relatively more probable than the combination of uploadability and Strong AI.


3. Don’t you want Wikipedia on the Brain?
Premise (9) is a little trickier. It proposes that the benefits of continued biological existence are not much worse (and possibly better) than the benefits of Kurzweilian uploading. How can this be defended? Agar provides us with two reasons.

The first relates to the disconnect between our subjective perception of value and the objective reality. Agar points to findings in experimental economics that suggest we have a non-linear appreciation of value. I’ll just quote him directly since he explains the point pretty well:

For most of us, a prize of $100,000,000 is not 100 times better than one of $1,000,000. We would not trade a ticket in a lottery offering a one-in-ten chance of winning $1,000,000 for one that offers a one-in-a-thousand chance of winning $100,000,000, even when informed that both tickets yield an expected return of $100,000....We have no difficulty in recognizing the bigger prize as better than the smaller one. But we don’t prefer it to the extent that it’s objectively...The conversion of objective monetary values into subjective benefits reveals the one-in-ten chance at $1,000,000 to be significantly better than the one-in-a-thousand chance at $100,000,000 (pp. 68-69).

How do these quirks of subjective value affect the wager argument? Well, the idea is that continued biological existence with LEV is akin to the one-in-ten chance of $1,000,000, while uploading is akin to the one-in-a-thousand chance of $100,000,000: people are going to prefer the former to the latter, even if the latter might yield the same (or even a higher) payoff.
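A toy calculation makes the point concrete. Suppose, purely for illustration (the square-root function is my assumption, not Agar’s), that the subjective value of money grows as the square root of the dollar amount:

```python
import math

# A concave utility function models the non-linear appreciation of
# value Agar describes. Square root is an illustrative assumption.
def subjective_value(dollars):
    return math.sqrt(dollars)

# Lottery A: one-in-ten chance of $1,000,000
eu_a = 0.1 * subjective_value(1_000_000)      # = 100.0
# Lottery B: one-in-a-thousand chance of $100,000,000
eu_b = 0.001 * subjective_value(100_000_000)  # = 10.0

# Both lotteries have the same expected monetary value ($100,000),
# but lottery A comes out ten times better in subjective terms.
print(eu_a, eu_b)
```

On this model, the safer, smaller prize (continued biological existence) beats the riskier, bigger one (uploading), even though their objective expected values are comparable.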

I have two concerns about this appeal to subjective value. First, my original formulation of the wager argument relied on the straightforward expected-utility-maximisation principle of rational choice. But by appealing to the risks associated with the respective wagers, Agar would seem to be incorporating some element of risk aversion into his preferred rationality principle. This would force a revision of the original argument (premise 5 in particular), albeit one that works in Agar’s favour. Second, the use of subjective valuations might affect our interpretation of the argument. In particular, it raises the question: is Agar saying that this is how people will in fact react to the uploading decision, or is he saying that this is how they should react?

Agar’s second line of defence for premise (9) concerns species-relative values and claims that converting ourselves into electronic beings will result in the loss of experiences and motivations that are highly valuable. Here, at last, we get a whisper of Agar’s main argument, but alas it remains a whisper. He promises to elaborate further in chapter nine.


4. Conclusion
This concludes Agar’s main defence of the Searlian Wager argument. The implication of the argument is simple: the greater certainty attached to continued biological existence will make it the more attractive option. As a result, it will never be rational to upload our minds.

Following on from his main defence, Agar looks at the possibility of testing whether uploading preserves conscious experience before we decide to upload ourselves fully. Such a test could reduce the uncertainty associated with the wager and thereby make uploading the rational choice. But Agar thinks any proposed experiments are unlikely to prove what we would like them to prove: the uncertainty stems from the hard problem of consciousness itself.

Finally, Agar also discusses, at the end of chapter four, the problem of unfriendly AI and the dangers associated with creating electronic copies of yourself. I won’t discuss these issues here. Enough food for thought should have been provided by the wager argument itself.

Saturday, May 21, 2011

Should we Upload Our Minds? Agar on Searle's Wager (Part One)



I’m currently working my way through Nicholas Agar’s book Humanity’s End. The book is a contribution to the ongoing debate over the merits of human enhancement. Agar develops and defends something he calls the species-relativist argument against radical enhancement. I set out the basic structure of this argument, and commented on some of its key elements, in previous posts. My comments were based on my reading of chapters 1 and 2 of the book. I now wish to turn my attention to chapters 3 and 4.

My initial reaction to these chapters is one of disappointment. Things had been running along rather smoothly up until this point: Agar had set out his conclusion, told us how he was going to argue for it, and left some important threads of the argument dangling tantalisingly before us. That he didn’t continue with its development was rather surprising.

For you see, in chapters 3 and 4, Agar discusses the views of the arch-technological utopianist Ray Kurzweil. This was not unexpected (Agar told us in chapter 1 that he would discuss the views of four pro-enhancement writers); what was unexpected was the aspect of Kurzweil’s arguments he chose to discuss. Only the faintest whispers of the species-relativist argument can be heard in the two chapters.

Despite this shortcoming, there is still much of value in Agar’s discussion of Kurzweil. And over the next two posts I want to focus on what I take to be the most interesting aspect of that discussion: the Searlian Wager argument.


1. Wager Arguments in General
We are all familiar with the concept of a wager. It is a concept that applies in a certain kind of decision-making context, one with uncertainty. So you put money on a horse because you think it might win a race; you bet with your insurance company that your house will burn down in the coming year; and so on.

Those contexts can be described in a cumbersome form, using the tools of informal argumentation; or they can be described in a more elegant form, using the tools of decision theory. I’ll run through both forms of description here.

This is the cumbersome form. It assumes that there are two possible states of the world and two possible courses of action:


  • (1) The world is either in state X (with probability p) or state Y (with probability 1-p); and you can choose to do either φ or ψ.
  • (2) If the world is in state X and you do φ, then outcome a will occur; alternatively, if you do ψ, then outcome b will occur.
  • (3) If the world is in state Y and you do φ, then outcome c will occur; alternatively, if you do ψ then outcome d will occur.
  • (4) Therefore, the expected payoff or utility of φ (Eu(φ)) = (p)(a) + (1-p)(c); and the expected payoff of ψ (Eu(ψ)) = (p)(b) + (1-p)(d). (from 1, 2, and 3)
  • (5) You ought to do whatever yields the highest expected payoff.
  • (6) So if Eu(φ) > Eu(ψ), you ought to do φ; and if Eu(ψ) > Eu(φ) you ought to do ψ.
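For the programmatically minded, here is the same calculation as a few lines of Python (a sketch only: the probability and the payoffs a, b, c, and d are placeholders, since nothing in the schema fixes them):

```python
# Premises (4)-(6) as a calculation. All numbers are placeholders.
def expected_utility(p, payoff_in_x, payoff_in_y):
    """Eu = p * (payoff in state X) + (1 - p) * (payoff in state Y)."""
    return p * payoff_in_x + (1 - p) * payoff_in_y

p = 0.3                      # probability that the world is in state X
a, b, c, d = 10, 8, -50, 5   # outcomes as labelled in premises (2) and (3)

eu_phi = expected_utility(p, a, c)  # expected payoff of doing φ
eu_psi = expected_utility(p, b, d)  # expected payoff of doing ψ

# Premise (5): pick the action with the higher expected payoff
print("do φ" if eu_phi > eu_psi else "do ψ")  # here: do ψ
```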


As the sketch above suggests, this is a cumbersome way of expressing the logic of the wager. The more elegant way uses the same set of equations and inequalities, but represents the decision-making context in graphical or diagrammatic form. One form of representation uses a decision tree (or game tree); the other uses an outcome matrix. The latter should be familiar to anyone who has been reading my series on game theory.

I prefer the tree representation and I give it below. The first node in this tree represents Nature. Nature is like a player in a game, albeit an unusual one. She (I’ll use the convention of viewing Nature as female) selects the possible states of the world in accordance with certain probabilities, not in anticipation of what you might do (which is what a strategic player would do). To get the picture right, you can imagine Nature rolling a die before making her move. The second set of nodes represents you. You have to make a decision about the most appropriate thing for you to do. You do so in accordance with the standard principles of practical rationality: pick the option with the highest expected payoff, given what you know about the probabilities guiding Nature’s move.


The Wager


This should all be relatively straightforward. Where wager arguments tend to get interesting is when they point to one overwhelmingly good (or bad) outcome, one that can make it rational to choose the action that leads to (or avoids) that outcome even when its probability is very low.

The most famous example of such a wager argument comes, of course, from Pascal. He argued that even if the probability of God’s existence was low (perhaps exceedingly low), the expected reward that comes from believing in his existence is so high that it would be practically rational to believe. This is because the costs of belief’s earthly encumbrances pale in comparison to the potential rewards.
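To put made-up numbers on it: if the probability of God’s existence is a mere 0.000001, the reward of salvation is worth 10^12 units of value, and the earthly encumbrances of belief cost 10 units, then Eu(believe) = (0.000001)(10^12) − 10 ≈ 999,990, while Eu(don’t believe) ≈ 0. The tiny probability is swamped by the enormous payoff. (These figures are purely illustrative; Pascal himself took the reward to be infinite.)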



2. Uploading our Minds
Agar employs a very similar argument in response to Kurzweil’s view that we should, in the future, upload our minds to an electronic, non-biological medium. He calls it the Searlian wager argument because it utilises some of the views of the cantankerous old Berkeley philosopher John Searle. As you can probably guess from my description, Searle is not the most agreeable of figures (at least, not to me). Still, that shouldn’t cloud our judgment of Agar’s argument, which, despite its appeal to Searle, seems quite legitimate to me.

First things first, what would it mean to upload our minds? Agar envisions two possible scenarios. The first is a one-off scenario in which a fully biological being presses a button and completely copies his or her psychology into a computer. The second involves a step-by-step process in which a biological being gradually replaces parts of his or her mind with electronic analogues until eventually the electronic components dominate.

Second question: why might this seem like a good thing to do? Answer: it would allow us to take full advantage of the law of accelerating returns.* As Agar puts it (describing Kurzweil’s position):

“The message from AI is that anything done by the brain can be done better by electronic chips. According to Kurzweil, those who grasp this message will progressively trade neurons for neuroprostheses. When the transfer of mind into machine is complete, our minds will be free to follow the trajectory of accelerating improvement...We’ll soon become millions and billions of times more intelligent than we currently are.”

Sounds good, right?

Maybe not. Agar thinks there is a crucial uncertainty facing the person making the decision to upload, and that this uncertainty makes the potential costs of uploading outweigh any of the benefits arising from being “millions and billions of times more intelligent than we currently are.”

The crucial uncertainty arises from the fact that there are two possible theories of artificial intelligence:

Strong AI: According to this theory, it will be possible someday to create a computer that is capable of genuine, conscious thought. In other words, a computer that will have experiences similar to those had by ordinary human beings (the experiences may, of course, be more stupendous and awe-inspiring, but they will be there nonetheless).
Weak AI: According to this theory, although it might be possible for computers to completely mimic and simulate aspects of human thought and behaviour, this does not mean that the computer will actually have conscious experiences and thoughts like those had by human beings. To believe that a computer simulating thought is actually consciously thinking is like believing that a computer simulating a volcano is actually erupting.

Kurzweil defends the first theory. Searle defends the second. Who is right does not matter. All that matters is that it is possible for Weak AI to be true. This possibility creates the conditions necessary for the wager argument to thrive.



3. The Searlian Wager Outlined
The Searlian Wager argument can now be stated. We start with the premise that our conscious experience is valuable to us. In fact, it might be supremely valuable to us: the ground from which all other personal values emanate. So it follows that it would be pretty bad for us to lose our consciousness. But according to Weak AI that’s exactly what might happen if we choose to upload ourselves to a computer. Now, admittedly, Weak AI is just a possibility, but the loss it entails is sufficient to outweigh any of the potential benefits from uploading. Thus, following the logic of the wager argument, it will never be rational to choose to upload.

Let’s restate that argument in the more cumbersome form:


  • (1) It is either the case that Strong AI is true (with probability p) or that Weak AI is true (with probability 1-p); and you can either choose to upload yourself to a computer (call this “U”) or not (call this “~U”).
  • (2) If Strong AI is true, then either: (a) performing U results in us experiencing the benefits of continued existence with super enhanced abilities; or (b) performing ~U results in us experiencing the benefits of continued biological existence with whatever enhancements are available to us that do not require uploading.
  • (3) If Weak AI is true, then either: (c) performing U results in us destroying ourselves; or (d) performing ~U results in us experiencing the benefits of continued biological existence with whatever enhancements are available to us that do not require uploading.
  • (4) Therefore, the expected payoff of uploading ourselves (Eu(U)) = (p)(a) + (1-p)(c); and the expected payoff of not uploading ourselves (Eu(~U)) = (p)(b) + (1-p)(d).
  • (5) We ought to do whatever yields the highest expected payoff.
  • (6) Eu(~U) > Eu(U)
  • (7) Therefore, we ought not to upload ourselves.
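To see how the equations in (4) to (6) would work, here is a sketch with placeholder payoffs. The figures are mine and purely illustrative; as noted below, the argument’s central difficulty is that we have no real numbers to plug in:

```python
# The Searlian Wager with placeholder payoffs (illustrative only).
def expected_utility(p, payoff_if_strong_ai, payoff_if_weak_ai):
    return p * payoff_if_strong_ai + (1 - p) * payoff_if_weak_ai

p = 0.5       # probability that Strong AI is true
a = 100       # upload and survive, super-enhanced (outcome a)
b = 80        # don't upload, Strong AI true (outcome b)
c = -10_000   # upload under Weak AI: self-destruction (outcome c)
d = 80        # don't upload, Weak AI true (outcome d)

print("Eu(U)  =", expected_utility(p, a, c))  # 0.5*100 + 0.5*(-10000) = -4950.0
print("Eu(~U) =", expected_utility(p, b, d))  # 0.5*80  + 0.5*80       = 80.0
```

On these numbers, (6) holds comfortably. But push p close to 1, or shrink the badness of c, and the inequality can flip, which is why premise (6) needs a defence.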


Here’s the relevant game tree.


Searle's Wager


What are we to make of this argument? Is it any good? Well, one obvious problem is that we have no figures to plug into the relevant equations and inequalities. And it is these equations and inequalities that carry all the weight. In particular, the inequality expressed in premise (6) seems to be the crux of the argument. Agar thinks that this premise can be defended. We'll see why in the next part.


* The law of accelerating returns posits that the returns we receive from technological advance grow at an exponential, as opposed to linear, rate.
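(By way of illustration: a capability that doubles every year grows roughly a thousandfold in ten years, whereas one that merely adds its initial value each year grows only elevenfold.)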