
Monday, June 17, 2013

Can we upload our minds? Hauskeller on mind-uploading (Part Two)



(Part One)

Could we achieve digital immortality by uploading our minds? Would such a process prolong our existence? Those are the questions being asked in this series of posts. The series is looking at Michael Hauskeller’s article “My Brain, My Mind and I: Some Philosophical Assumptions of Mind-Uploading”, which casts a sceptical eye over the notion of mind-uploading.

Part one is probably essential pre-reading before tackling this part, but to quickly recap, we are investigating the Mind-Uploading Thesis (MUT). According to the MUT, it is possible to prolong the existence of the Lockean self (i.e. the self as a continuing subject of conscious experience) by replacing one’s brain with an artificial analogue. There are several conceivable ways of doing this, two of which are important for our purposes: (i) gradually, whereby there is a step-by-step replacement of one’s brain parts by artificial equivalents; or (ii) discontinuously, whereby one’s brain is copied, emulated in some non-biological medium, and then the original is destroyed.

Obviously, we can’t yet say for sure whether mind-uploading is feasible. The concept is science-fictional in the strong sense. Still, we can make some arguments about it. The best argument in its favour comes from the functionalist theory of mind. This argument can be stated as follows:


  • (1) It is possible to recreate a particular set of functional relationships in any medium (Multiple Realisability Thesis).
  • (2) The mind (Lockean person) is simply a particular set of functional relationships between neurons and other brain components (Functionalist Thesis).
  • (3) Therefore, it is possible to recreate the mind in a non-biological medium, such as a digital computer or artificial neural network.
  • (4) Therefore, it is possible to preserve the Lockean person through mind-uploading.


One of the major problems with this argument comes in the inference from (3) to (4). As was highlighted at the end of part one, there is a logical gap between the two: even if it is possible to copy a mind and emulate it in a digital or artificial medium, it does not follow that the copy will preserve the Lockean person. Indeed, quite the opposite would seem to be true: if you have two copies of something, each has its own independent ontological identity.

If the MUT is to be defended, additional assumptions and arguments must be provided to plug the gap between (3) and (4). In this post, we’ll look at two arguments that try to do exactly that. First up is an argument based on an analogy with book-copying, and second is the argument from gradual replacement. As we shall see, Hauskeller rejects both arguments, but I’m not so sure he’s that convincing in his rebuttals, particularly in the case of the second argument.


1. The Story-in-a-Book Analogy
You could argue that the gap between (3) and (4) is an illusion, an artifact of a misconception. The misconception relates to the nature and implications of a purely functionalist theory of mind. A proponent of that theory might argue that the whole point of functionalism is that the exact same mind can be instantiated in more than one medium at a time.

Consider an analogy. There are two copies of James Joyce’s Ulysses sitting on my shelves. The books have different covers, different fonts, and different page layouts. But they contain the same story. That’s because the story is an emergent property of the functional relationships between words and sentences. These words and sentences can be reproduced in many media, and yet the story they constitute remains one and the same. The stories are not merely qualitatively identical; they are numerically identical too. Couldn’t this be true of the mind as well? Isn’t the mind just like a story in the brain? And if so, why couldn’t one-and-the-same mind be recreated in multiple media?


This gives us an argument from analogy:


  • (5) When you have two copies of the same book, they contain one-and-the-same story, i.e. the stories are identical.
  • (6) The mind is like a story in a book.
  • (7) Therefore, it is possible to have two copies of the brain with one-and-the-same mind.


Hauskeller thinks this argument breaks down because it ignores a crucial ingredient in the original case. Two books cannot be said to share the same story without a reader’s mind interpreting the words and symbols on the page. Indeed, stories themselves don’t really exist without this crucial ingredient (since language is largely a matter of collective belief). It is the reader’s mind that recreates the story and allows for the one-to-one relationship between the stories in the two books.

What difference does this make to the argument? Hauskeller thinks it motivates a counter-analogy, this time focusing on two readers rather than two books. Imagine there are two readers reading the same story. Imagine further that they are so immersed in the story that no other thoughts or memories intrude on their reading. Thus, the stream of consciousness in both minds is exactly the same. Despite this equivalence, is it not still true to say that there are two different persons undergoing the same set of experiences? And if this is right, doesn’t it imply that creating a functional copy of the mind will not preserve the Lockean person? Hauskeller argues that it does: different selves can share the same thoughts. (Note: Hauskeller bases this argument on conceivability, not possibility. In other words, he is not claiming that the two readers example is actually possible, only that it is conceivable.)

There is a possible riposte to this. It argues that the Lockean person (self) is an immediate property of our conscious experiences, so that if two brains share the exact same stream of thoughts, as they do in the two readers example, then they really are one-and-the-same person. Hauskeller grants that this objection would defeat his argument. All I will say here is that this is a pretty recondite and difficult metaphysical question; I’m not sure we could ever know for sure who is right. This brings me back to the decision-theoretic framework suggested by Agar (mentioned in part one). Assuming a technology existed for uploading the mind, and assuming all outward-facing evidence suggested that the digital copy thought the same thoughts as you did, you would just have to take a gamble on which metaphysical thesis is true. I have to confess, I would not be willing to bet that my Lockean self would survive. But if I’m going to die anyway, I’d have nothing to lose by undergoing the procedure.
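To make the structure of this wager explicit, here is one schematic way of laying it out (the layout and payoff labels are my own illustration, not Agar’s or Hauskeller’s), assuming you face imminent death and uploading is the only alternative:

$$
\begin{array}{l|cc}
 & \text{MUT true} & \text{MUT false} \\
\hline
\text{Upload} & \text{Lockean self survives} & \text{Lockean self ends} \\
\text{Don't upload} & \text{Lockean self ends} & \text{Lockean self ends}
\end{array}
$$

On these assumptions, uploading weakly dominates: whatever non-zero probability you assign to the MUT, its outcome is at least as good in every state and better in one. That is just the “nothing to lose” reasoning in tabular form; change the background assumptions (a costly procedure, say, or a life not otherwise about to end) and the table changes with them.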


2. The Gradual Replacement Argument
The previous argument speaks mainly to the case of discontinuous uploading and replacement. What about gradual replacement? This is where your brain is gradually replaced by functionally equivalent artificial parts. There is no point at which your original biological brain ceases to be and your new artificial version begins. There is instead a step-by-step slide into an artificial you. A more elaborate version of the scenario involves the complete replacement of all body parts with artificial equivalents, with much the same implications.

And what are those implications? Well, consider the spectrum below. At R1 you are a biologically normal adult, with full Lockean personhood. At R2 you replace your limbs. Are you still the same Lockean person? Surely you are. Now move on to R3, where you replace all your sensory organs with artificial equivalents. Is your identity preserved in this case? Presumably it is, since we can already do this to a minimal degree (with cochlear implants, for instance) and it doesn’t seem to destroy the Lockean self. How about R4, where individual neurons are gradually replaced by functional equivalents? This is obviously trickier, since we don’t know for sure what would happen, but it seems to me that your identity would be preserved if you replaced a few of your neurons. Admittedly, though, this would be a gamble. That leaves us finally with R5, where every part of your brain is replaced. Assuming Lockean personhood has been preserved through all the previous replacements, it would be odd to claim that it was suddenly destroyed at this point. (Note: there are direct connections between this case and the classic thought experiment of the Ship of Theseus.)


There is an argument here and it rests on a simple inductive principle:

Replacement Principle: If a part of an object O is gradually replaced in a step from Rn to Rn+1, and the identity of O is preserved in that step, then the gradual replacement of further parts in equivalent steps (i.e. Rn+1 to Rn+2, and so on) will also preserve the identity of O.
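It may help to spell out the logical form of this principle (the notation is mine, not Hauskeller’s). Let P(Rn) stand for “the original Lockean person is preserved at stage Rn”. The argument from gradual replacement is then a simple induction:

$$
\begin{aligned}
&\text{(Base)} \quad && P(R_1) \\
&\text{(Step)} \quad && \forall n \; \big( P(R_n) \rightarrow P(R_{n+1}) \big) \\
&\text{(Conclusion)} \quad && P(R_k) \text{ for every stage } k, \text{ up to and including } R_5
\end{aligned}
$$

The inference itself is valid; everything turns on whether the step premise really holds at every stage.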

But is this principle correct? Hauskeller argues that it is not. He says that arguments from graduality generally fail because they deny the reality of change. The classic Sorites thought experiment is said to illustrate the problem. Take a heap of sand and start reducing it in size, one grain at a time. The removal of one grain is never sufficient to change the heap of sand into a non-heap, but if you keep removing grains of sand it will eventually become a non-heap. So even if change is gradual, and even if there is no point along the continuum at which an object can definitely be said to change from one form to another, there is nevertheless change. Reasoning along these lines could defeat the argument from gradual replacement.
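To see why this threatens the Replacement Principle, note that the Sorites can be given exactly the same inductive shape (again, the notation is mine). Let H(n) stand for “n grains of sand form a heap”:

$$
\begin{aligned}
&\text{(Base)} \quad && H(1{,}000{,}000) \\
&\text{(Step)} \quad && \forall n > 0 \; \big( H(n) \rightarrow H(n-1) \big) \\
&\text{(Conclusion)} \quad && H(0)
\end{aligned}
$$

Each instance of the step premise seems undeniable, yet the conclusion is plainly false, so the step premise must fail somewhere along the line. Hauskeller’s worry is that the Replacement Principle’s step premise could fail in the same quiet way: no single replacement seems to destroy the Lockean person, and yet the person may not survive the whole sequence.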

Similarly, Hauskeller argues that there are cases in which an object is gradually altered, and even though each alteration doesn’t seem to fundamentally change its properties, it does change radically and discontinuously towards the very end of the sequence. Imagine you have a bowl of water, warmed to 20 degrees Celsius. Now imagine that you gradually reduce its temperature, one degree at a time. As you repeat this over and over, the water won’t seem to fundamentally change. In particular, it won’t look like it is changing from liquid to solid. But this is exactly what happens: the water turns to ice once it is cooled below 0 degrees. If we apply this reasoning to the gradual replacement of brain parts, it could be that although no fundamental change seems to be occurring throughout the process, at the very end it does. You could rapidly change from you into someone else altogether.

There is something appealing about Hauskeller’s arguments, and his general objections to the Replacement Principle are well-taken. But it still seems to me that the principle could apply in certain cases. Most of the cells in the human body replace themselves over time without this fundamentally changing who we are (as best I can tell, anyway). Admittedly, this continual replacement does not seem to be true of all neurons, even though new brain cells and new neural networks can form over time. Still, you’d have to think that certain brain cells are somehow magical, irreplaceable, consciousness-exuding entities in order to deny that gradual replacement could ever preserve identity.

To be clear, the point I am making is not that the Replacement Principle is defensible (I don’t think it is), but that replacement without fundamental change is possible in at least some cases. Mind-uploading might very well be such a case. Hauskeller’s counterexamples don’t defeat this point. They do give reasons to doubt preservation, and these would need to be factored into any decision about uploading, but so too would the reasons for thinking preservation is possible.


3. Conclusion
To sum up, mind-uploading gains support from the functionalist theory of mind, but significant metaphysical uncertainty surrounds it. A functional copy of the mind may preserve the Lockean self, but then again it may not. Likewise, the gradual replacement of the brain may preserve the Lockean self, but then again it may not. Any decision to upload would have to be made under this metaphysical uncertainty, and that uncertainty would have to be factored into our decision-making.
