Thursday, June 13, 2013

Can we upload our minds? Hauskeller on Mind-Uploading (Part One)



A lot of people would like to live forever, or at least for much longer than they currently do. But there is one obvious impediment to this: our biological bodies break down over time and cannot (with current technologies) be sustained indefinitely. So what can be done to avoid our seemingly inevitable demise? For some, like Aubrey de Grey, the answer lies in tweaking and re-engineering our biological bodies. For others, the answer lies in the more radical solution of mind-uploading, or the technological replacement of our current biological bodies.

This solution holds a lot of promise. We already replace various body parts with artificial analogues: artificial limbs, organs, and sensory aids (including, more recently, things like artificial retinas and cochlear implants). These artificial analogues are typically more sustainable, either through ongoing care and maintenance or renewal and replacement, than their biological equivalents. So why not go the whole hog? Why not replace every body part, including the brain, with some technological equivalent?

That is the question at the heart of Michael Hauskeller’s article “My Brain, My Mind, and I: Some Philosophical Assumptions of Mind Uploading”. The paper offers a sceptical look at some of the assumptions underlying the whole notion of mind-uploading. In this post and the next, I’m going to run through some of Hauskeller’s arguments. In the remainder of this post, I’ll try to do two things. First, I’ll look to clarify what is meant by “mind-uploading” and what we would be trying to achieve by doing it. Second, I’ll introduce the basic argument in favour of mind-uploading, the argument from functionalism, and note some obvious objections to it.

This series of posts is probably best read in conjunction with my earlier series on Nicholas Agar’s argument against uploading. That series looked at mind-uploading from a decision-theoretic perspective, and offers what is, to my mind, the most persuasive objection to mind-uploading (though, I hasten to add, I’m not sure that it is overwhelmingly persuasive). Hauskeller’s arguments are more general and conceptual. Indeed, he repeatedly argues only that the concerns he raises are conceivable, and worth bearing in mind for that reason; he doesn’t take the further step of arguing that they are possible or probable. If you are more interested in whether you should go for mind-uploading or not, I think the concerns raised by Hauskeller are possibly best fed back into Agar’s decision-theoretic framework. Still, for the pure philosophers out there — those deeply concerned with metaphysical questions of mind and identity — there is much to grapple with in Hauskeller’s paper.


1. What are we talking about and why?
In my introduction, I noted the obvious link between mind uploading and the quest for life extension. That’s probably enough to pique people’s curiosity, but if we are going to assess mind uploading in a serious way we need to clarify three important issues.

First up, we need to clarify exactly what it is we wish to preserve or prolong through mind-uploading. I think the answer is pretty obvious: we want to preserve ourselves (our selves), where this is defined in terms of Lockean personhood. In other words, I would say that the essence of our existence consists in the fact that we are continuing subjects of experience. That is to say, we are sentient, self-aware, and aware of our continuing sentience over time (even after occasional bouts of unconsciousness). If we are not preserved as Lockean persons through mind-uploading, then I would suggest that there is very little to be said for it from our perspective (there may be other things to be said for it). One important thing to note here is that Lockean personhood allows for great change over time. I may have a very different set of characteristics and traits now than I did when I was five years old. That’s fine. What matters is that there is a continuing and overlapping stream of consciousness between my five-year-old self and my current self. For ease of reference, I’ll refer to the claim that mind-uploading leads to the preservation and prolongation of the Lockean person as the “Mind-Uploading Thesis” (MUT).

The second thing we need to do is to clarify what we actually mean by mind-uploading. In his article, Hauskeller adopts a definition from Adam Kadmon, according to which mind-uploading is the “transfer of the brain’s mindpattern onto a different substrate”. In other words, your brain processes are modelled and then transferred from their current biological neuronal substrate to a different substrate. This could be anything from a classic digital computer to a device that uses artificial neurons that directly mirror and replicate the brain’s current processes. Hopefully, that is a reasonably straightforward idea. More important than the basic idea of uploading is the actual method through which it is achieved. Although there may be many such methods, for present purposes two are important:

Gradual Uploading/Replacement: The parts of the brain are gradually replaced by functionally equivalent artificial analogues. Although the original brain is, by the end of this process, destroyed, there is no precise moment at which the biological brain ceases to be and the artificial one begins. Instead, there is a step-by-step progression from wholly biological to wholly artificial.
Discontinuous Uploading/Replacement: The brain is scanned, copied and then emulated in some digital or artificial medium, following which the original brain is destroyed. There is no gradual replacement of the parts of the biological brain.

There may be significant differences between these two kinds of uploading, and these differences may have philosophical repercussions. I suspect the latter, rather than the former, is what most people have in mind when they think about uploading, but I could be wrong.
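To make the structural difference vivid, here is a minimal sketch (my own toy model, not anything from Hauskeller’s paper) in which Python object identity stands in, very loosely, for the kind of continuity at issue:

```python
# Toy model: a "brain" as a list of processing units, and each kind of
# replacement as an operation on that list.

class Unit:
    """One processing unit; same function on either substrate."""
    def __init__(self, state, substrate="biological"):
        self.state = state
        self.substrate = substrate

def gradual_replacement(brain):
    """Swap each unit for an artificial analogue, one step at a time.
    The containing system persists throughout the process."""
    for i, unit in enumerate(brain):
        brain[i] = Unit(unit.state, substrate="artificial")
    return brain  # the very same object that went in

def discontinuous_upload(brain):
    """Scan and copy the whole brain at once, then destroy the original."""
    emulation = [Unit(u.state, substrate="artificial") for u in brain]
    brain.clear()  # the original is destroyed
    return emulation  # a new, functionally equivalent object

original = [Unit("memory-1"), Unit("memory-2")]
print(gradual_replacement(original) is original)   # True: one continuous system

original = [Unit("memory-1"), Unit("memory-2")]
print(discontinuous_upload(original) is original)  # False: a distinct successor
```

Whether that difference in “object identity” tracks anything of philosophical importance is, of course, exactly what is in dispute.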

Finally, in addition to clarifying the means through which uploading is achieved, we need to clarify the kinds of existence one might have in digital or artificial form. There are many elaborate possibilities explored in the sci-fi literature, and I would encourage people to check some of these out, but again for present purposes I’ll limit the focus to two broad kinds of existence, with intermediate kinds obviously also possible:

Wholly Virtual Existence: Once transferred to an artificial medium, the mind ceases to interact directly with the external world (though obviously it relies on that world for some support) and instead lives in a virtual reality, with perhaps occasional communication with the external world.
Non-virtual Existence: Once transferred to an artificial medium, the mind continues to interact directly with the external world through some set of actuators (i.e. tools for bringing about changes in the external world). These might directly replicate the human body, or involve superhuman “bodies”.

An added complication here comes in the shape of multiple copies of the same brain living out different existences in different virtual and non-virtual worlds. This should probably be factored into any complete account of mind-uploading. For an interesting fictional exploration of the idea of virtual existence with multiple copies, I would recommend Greg Egan’s book Permutation City.

Anyway, with those clarifications out of the way, we can move on to discuss the arguments for and against the MUT.


2. The Argument from Functionalism
Basic support for the MUT could come from the functionalist theory of mind. According to functionalism, the mind is not, in fact, a particular ontological substance or substrate. Rather, the mind is a kind of informational pattern that is constituted by the functional relationships between different ontological entities and activities. The belief among functionalists is that if you can replicate those functional relationships, you can replicate a mind. And since functional relationships can, in theory, be instantiated in any medium, it follows that minds are multiply realisable.
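Multiple realisability is easy to illustrate outside philosophy. Here is a minimal sketch (my own toy example, again not from Hauskeller’s paper) of one and the same input-output function realised in two different “media”: a stored lookup table and an arithmetic formula. At the functional level of description, the two realisations are indistinguishable.

```python
# Toy illustration of multiple realisability: the same function,
# realised in two different substrates.

def xor_lookup(a: bool, b: bool) -> bool:
    """XOR realised as a stored table of cases."""
    table = {(False, False): False, (False, True): True,
             (True, False): True, (True, True): False}
    return table[(a, b)]

def xor_arithmetic(a: bool, b: bool) -> bool:
    """XOR realised as modular arithmetic."""
    return bool((a + b) % 2)

# Every functional relationship is preserved across the two media:
assert all(xor_lookup(a, b) == xor_arithmetic(a, b)
           for a in (False, True) for b in (False, True))
```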

If this is correct, it could provide support for the MUT. The argument might run like this: since the Lockean person is, to the best of our knowledge, associated with a particular set of functional relationships between neurons, glial cells, neurotransmitters, neural networks, and so forth, it would follow that if those functional relationships can be replicated in other media, so too could the Lockean person. Let’s try to formalise this to see more clearly where the strengths and weaknesses lie:


  • (1) It is possible to recreate a particular set of functional relationships in any medium (Multiple Realisability Thesis).
  • (2) The mind (Lockean person) is simply a particular set of functional relationships between neurons (etc.) (Functionalist Thesis).
  • (3) Therefore, it is possible to recreate the mind in a non-biological medium, such as a digital computer or artificial neural network.
  • (4) Therefore, it is possible to preserve the Lockean person through mind-uploading.


This argument has a couple of points in its favour. For starters, note how relatively modest its claims are. It does not claim that mind-uploading is probable, merely that it is possible (or, even more weakly, that it is conceivable). It may turn out that there are technological or physical hurdles to the multiple realisability of minds. For instance, it may be that we could never generate enough energy to sustain an artificial copy of a brain (though people are working on these issues as we speak). Still, if there is a genuine possibility, that is itself significant. Further, note how the argument does trace out an appealing line of inference from functionalism and multiple realisability to the MUT.

Nevertheless, there are some problems with the argument. I’ll mention two significant ones here, one of which is particularly important from Hauskeller’s perspective. First, there is the obvious point that the argument depends on the truth of functionalism. Some people have argued that functionalism is an implausible theory of mind, particularly if what we are most interested in about our minds is not their cognitive abilities but the Lockean persons they seem to instantiate. The most famous arguments in this school of thought are John Searle’s Chinese Room thought experiment and Ned Block’s Chinese Nation thought experiment. These are probably familiar to most readers of this blog, but to quickly summarise: both Searle and Block use their thought experiments as reductios of the functionalist position. Take Block as an example. He imagines the entire Chinese nation perfectly replicating all the functional relationships in the human brain. He then asks: would that entire nation of people thereby become a self-aware Lockean person? Surely not; surely it is absurd to think that, just by performing a series of computations, an entire nation of people could become a distinct conscious self.

To his credit, Hauskeller doesn’t think the Block/Searle line of attack is all that persuasive. This is for the simple reason that whether the mind can be detached from the brain is, at this moment in time, entirely a matter of speculation. To butcher a quote:

“[A]s long as we have not had the chance to put the theory to test by actually producing an accurate whole brain emulation and then seeing what happens [we won’t know whether it is absurd]. In other words, it is an empirical question, which we cannot decide on purely philosophical grounds.”
(Hauskeller, 2012, p. 191)

I’m inclined to agree. Thought experiments involving Chinese Rooms or Chinese Nations probably seem absurd because we don’t have the capacities to imagine what those things would really look like if they were emulating the whole brain.

There is, however, another major problem with the argument from functionalism. If we go back to the formalised version above, and pay close attention to the inference from (3) to (4), we begin to see the problem. To say that Lockean personhood would be preserved in a digital or artificial analogue is very different from saying that it is possible to recreate a mind in an artificial medium. I could create a mind in a digital environment, and that mind might have the capacity for self-awareness and sentience that we value so dearly, but that in itself wouldn’t involve the preservation or prolongation of a particular Lockean person. The kind of overlapping sentient continuity that we are looking for isn’t entailed by the creation of a mind in an artificial medium. Indeed, on the face of it, creating a functional copy of the mind would never be enough for this. If I take a picture and print out two copies, the two copies are not one and the same thing. They are distinct ontological entities. Surely the same would be true of a biological brain and any functional copy of it?
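Programmers will recognise the point as the distinction between qualitative identity (same content) and numerical identity (one and the same thing). A trivial sketch of my own, purely illustrative:

```python
import copy

picture = {"pixels": [0, 1, 1, 0], "caption": "holiday"}
duplicate = copy.deepcopy(picture)

print(duplicate == picture)  # True: qualitatively identical (same content)
print(duplicate is picture)  # False: numerically distinct (two objects)
```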

To put all this more succinctly, there is a significant logical gap between (3) and (4). You cannot derive the latter from the former without some additional assumptions and argumentation. This is true even if the functionalist view of the mind is correct. So what additional assumptions and arguments might we need? That’s a question I’ll take up in the next post, when examining two of Hauskeller’s anti-MUT arguments.
