
Saturday, May 21, 2011

Should we Upload Our Minds? Agar on Searle's Wager (Part One)



I’m currently working my way through Nicholas Agar’s book Humanity’s End. The book is a contribution to the ongoing debate over the merits of human enhancement. Agar develops and defends something he calls the species-relativist argument against radical enhancement. I set out the basic structure of this argument, and commented on some of its key elements, in previous posts. My comments were based on my reading of chapters 1 and 2 of the book. I now wish to turn my attention to chapters 3 and 4.

My initial reaction to these chapters is one of disappointment. Things had been running along rather smoothly up until this point: Agar had set out his conclusion, told us how he was going to argue for it, and left some important threads of the argument dangling tantalisingly before us. That he didn’t continue with its development was rather surprising.

For you see, in chapters 3 and 4, Agar discusses the views of the arch-technological utopianist Ray Kurzweil. This was not unexpected -- Agar told us in chapter 1 that he would discuss the views of four pro-enhancement writers -- what was unexpected was the aspect of Kurzweil’s arguments he chose to discuss. Only the faintest whispers of the species-relativist argument can be heard in the two chapters.

Despite this shortcoming, there is still much of value in Agar’s discussion of Kurzweil. And over the next two posts I want to focus on what I take to be the most interesting aspect of that discussion: the Searle’s Wager argument.


1. Wager Arguments in General
We are all familiar with the concept of a wager. It is a concept that applies in a certain kind of decision-making context, one with uncertainty. So you put money on a horse because you think it might win a race; you bet with your insurance company that your house will burn down in the coming year; and so on.

Those contexts can be described in a cumbersome form, using the tools of informal argumentation; or they can be described in a more elegant form, using the tools of decision theory. I’ll run through both forms of description here.

This is the cumbersome form. It assumes that there are two possible states of the world and two possible courses of action:


  • (1) The world is either in state X (with probability p) or state Y (with probability 1-p); and you can choose to do either φ or ψ.
  • (2) If the world is in state X and you do φ, then outcome a will occur; alternatively, if you do ψ, then outcome b will occur.
  • (3) If the world is in state Y and you do φ, then outcome c will occur; alternatively, if you do ψ then outcome d will occur.
  • (4) Therefore, the expected payoff or utility of φ (Eu(φ)) = (p)(a) + (1-p)(c); and the expected payoff of ψ (Eu(ψ)) = (p)(b) + (1-p)(d). (from 1, 2, and 3)
  • (5) You ought to do whatever yields the highest expected payoff.
  • (6) So if Eu(φ) > Eu(ψ), you ought to do φ; and if Eu(ψ) > Eu(φ) you ought to do ψ.


That’s it. As I said, this is a cumbersome way of expressing the logic of the wager. The more elegant way uses the same set of equations and inequalities, but represents the decision-making context in a graphical or diagrammatic form. One form of representation uses a decision tree (or game tree) and the other form uses an outcome matrix. The latter should be familiar to anyone who has been reading my series on game theory.
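The expected-payoff calculation in premises (4)-(6) is easy to make concrete. Here is a minimal sketch that simply plugs values into the two equations and compares them; the probability and utilities are, of course, made up for illustration:

```python
# A minimal sketch of the generic wager in premises (1)-(6).
# The probability and payoffs below are invented for illustration only.

def expected_utility(p, payoff_if_X, payoff_if_Y):
    """Eu = p * (payoff in state X) + (1 - p) * (payoff in state Y)."""
    return p * payoff_if_X + (1 - p) * payoff_if_Y

p = 0.3          # probability that the world is in state X
a, b = 10, 2     # payoffs of phi and psi if the world is in state X
c, d = -5, 1     # payoffs of phi and psi if the world is in state Y

eu_phi = expected_utility(p, a, c)   # (p)(a) + (1-p)(c)
eu_psi = expected_utility(p, b, d)   # (p)(b) + (1-p)(d)

# Premise (5): do whatever yields the highest expected payoff.
best = "phi" if eu_phi > eu_psi else "psi"
print(eu_phi, eu_psi, best)
```

With these particular numbers, psi wins: phi's large payoff in state X is dragged down by its loss in the (more probable) state Y.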

I prefer the tree representation and I give it below. The first node in this tree represents Nature. Nature is like a player in a game, albeit an unusual one. She (I’ll use the convention of viewing nature as a female) selects the possible states of the world in accordance with certain probabilities, not in anticipation of what you might do (which is what a strategic player would do). To get the picture right, you can imagine Nature rolling a die before making her move. The second set of nodes represents you. You have to make a decision about the most appropriate thing for you to do. You do so in accordance with the standard principles of practical rationality: pick the option with the highest expected payoff, given what you know about the likely probabilities guiding Nature’s move.


The Wager


This should all be relatively straightforward. Wager arguments tend to get interesting when they point to one overwhelmingly good (or bad) outcome: such an outcome can make it rational to choose the action that leads to (or avoids) it, even when the probability of that outcome arising is very low.

The most famous example of such a wager argument comes, of course, from Pascal. He argued that even if the probability of God’s existence were low (perhaps exceedingly low), the expected reward of believing in his existence is so high that belief would be practically rational. This is because the costs of all the earthly encumbrances of belief pale in comparison to the potential rewards.
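Pascal’s reasoning can be sketched numerically. The figures below are pure invention, chosen only to show how a tiny probability of an enormous reward can swamp the rest of the calculation:

```python
# Toy numerical version of Pascal's wager. Every number here is invented
# for illustration: the point is that a tiny probability of an enormous
# reward can dominate the expected-utility comparison.

p_god = 0.001               # an exceedingly low probability of God's existence
reward_if_believe = 10**9   # stand-in for an overwhelmingly good outcome
cost_of_belief = -100       # the earthly encumbrances of belief
payoff_no_belief = 0        # disbelief gains and loses nothing either way

eu_believe = p_god * reward_if_believe + (1 - p_god) * cost_of_belief
eu_disbelieve = p_god * payoff_no_belief + (1 - p_god) * payoff_no_belief

print(eu_believe > eu_disbelieve)  # True: the huge reward swamps the costs
```

Even at one chance in a thousand, the expected payoff of belief here is nearly a million, so the earthly costs barely register.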



2. Uploading our Minds
Agar employs a very similar argument in response to Kurzweil’s view that we should, in the future, upload our minds to an electronic, non-biological medium. He calls it the Searlian wager argument because it utilises some of the views of the cantankerous old Berkeley philosopher John Searle. As you can probably guess from my description, Searle is not the most agreeable of figures (at least, not to me). Still, that shouldn’t cloud our judgment of Agar’s argument which, despite its appeal to Searle, seems quite legitimate to me.

First things first, what would it mean to upload our minds? Agar envisions two possible scenarios. The first is a one-off scenario in which a fully biological being presses a button and completely copies his or her psychology into a computer. The second involves a step-by-step process in which a biological being gradually replaces parts of his or her mind with electronic analogues until eventually the electronic components dominate.

Second question: why might this seem like a good thing to do? Answer: it would allow us to take full advantage of the law of accelerating returns.* As Agar puts it (describing Kurzweil’s position):

“The message from AI is that anything done by the brain can be done better by electronic chips. According to Kurzweil, those who grasp this message will progressively trade neurons for neuroprostheses. When the transfer of mind into machine is complete, our minds will be free to follow the trajectory of accelerating improvement...We’ll soon become millions and billions of times more intelligent than we currently are.”

Sounds good, right?

Maybe not. Agar thinks there is a crucial uncertainty facing the person deciding whether to upload, and that this uncertainty makes the potential costs of uploading outweigh any of the benefits of being “millions and billions of times more intelligent than we currently are.”

The crucial uncertainty arises from the fact that there are two possible theories of artificial intelligence:

Strong AI: According to this theory it will be possible someday to create a computer that is capable of genuine, conscious thought. In other words, a computer that will have experiences that are similar to those had by ordinary human beings (the experiences may, of course, be more stupendous and awe-inspiring, but they will be there nonetheless).
Weak AI: According to this theory, although it might be possible for computers to completely mimic and simulate aspects of human thought and behaviour, this does not mean that the computer will actually have conscious experiences and thoughts like those had by human beings. To believe that a computer simulating thought is actually consciously thinking is like believing that a computer simulating a volcano is actually erupting.

Kurzweil defends the first theory; Searle defends the second. It does not matter who is right. All that matters is that it is possible for Weak AI to be true. This possibility creates the conditions necessary for the wager argument to thrive.



3. The Searlian Wager Outlined
The Searlian Wager argument can now be stated. We start with the premise that our conscious experience is valuable to us. In fact, it might be supremely valuable to us: the ground from which all other personal values emanate. It follows that it would be pretty bad for us to lose our consciousness. But according to Weak AI, that’s exactly what might happen if we choose to upload ourselves to a computer. Now, admittedly, Weak AI is only a possibility, but the loss it entails is so great that even a small probability of it is enough to outweigh any of the potential benefits of uploading. Thus, following the logic of the wager argument, it will never be rational to choose to upload.

Let’s restate that argument in the more cumbersome form:


  • (1) It is either the case that Strong AI is true (with probability p) or that Weak AI is true (with probability 1-p); and you can either choose to upload yourself to a computer (call this “U”) or not (call this “~U”).
  • (2) If Strong AI is true, then either: (a) performing U results in us experiencing the benefits of continued existence with super enhanced abilities; or (b) performing ~U results in us experiencing the benefits of continued biological existence with whatever enhancements are available to us that do not require uploading.
  • (3) If Weak AI is true, then either: (c) performing U results in us destroying ourselves; or (d) performing ~U results in us experiencing the benefits of continued biological existence with whatever enhancements are available to us that do not require uploading.
  • (4) Therefore, the expected payoff of uploading ourselves (Eu(U)) = (p)(a) + (1-p)(c); and the expected payoff of not uploading ourselves (Eu(~U)) = (p)(b) + (1-p)(d).
  • (5) We ought to do whatever yields the highest expected payoff.
  • (6) Eu(~U) > Eu(U)
  • (7) Therefore, we ought not to upload ourselves.


Here’s the relevant game tree.


Searle's Wager


What are we to make of this argument? Is it any good? Well, one obvious problem is that we have no figures to plug into the relevant equations and inequalities. And it is these equations and inequalities that carry all the weight. In particular, the inequality expressed in premise (6) seems to be the crux of the argument. Agar thinks that this premise can be defended. We'll see why in the next part.
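The dependence of premise (6) on the missing figures can be made vivid with a quick sketch. Every number below is an invented placeholder, since the argument itself supplies none: on one assignment ~U wins, but raise the probability of Strong AI far enough and the inequality flips.

```python
# Sketch of the Searlian wager (premises (1)-(7)). All figures are invented
# placeholders, since the argument itself supplies no numbers.

def eu_upload(p_strong, a, c):
    # Eu(U) = (p)(a) + (1-p)(c), where c is the payoff of destroying ourselves
    return p_strong * a + (1 - p_strong) * c

def eu_stay(p_strong, b, d):
    # Eu(~U) = (p)(b) + (1-p)(d): continued biological existence either way
    return p_strong * b + (1 - p_strong) * d

a = 1000     # super-enhanced existence, if Strong AI is true and we upload
b = d = 100  # continued biological existence with non-upload enhancements
c = -10000   # loss of our conscious selves, if Weak AI is true and we upload

# With a middling probability of Strong AI, premise (6) holds: Eu(~U) > Eu(U).
print(eu_stay(0.5, b, d) > eu_upload(0.5, a, c))    # True

# But the inequality is hostage to the figures: make Strong AI near-certain
# and the comparison can flip.
print(eu_stay(0.99, b, d) > eu_upload(0.99, a, c))  # False
```

This is exactly why the defence of premise (6) matters: without some argument about the plausible range of these values, the inequality could go either way.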


* The law of accelerating returns posits that the returns we receive from technological advance grow at an exponential, as opposed to linear, rate.

2 comments:

  1. "you bet with your insurance company that your house won’t burn down in the coming year; and so on."

    You bet regarding whether or not it will burn down, and you bet that it will, while the insurance company bets that it won't.

    "To believe that a computer simulating thought is actually thinking, is like believing that a computer simulating a volcano is actually erupting."

    While simulated lava won't burn you, simulated thoughts can still outsmart you.

  2. Quite right on the insurance issue. As for the Weak AI issue, I thought the context implied that actual thinking meant conscious thinking. I've changed the wording to make it clearer.
