Friday, July 28, 2017

New Paper: In Defence of the Epistemological Objection to Divine Command Theory




I have a new paper coming out in the journal Sophia. It's about the so-called 'epistemological objection' to divine command theory. This builds on some of my previous posts on the topic, albeit at much greater length and in more detail. The paper argues that DCT's inability to account for the moral obligations of reasonable non-believers is a problem that undermines its credibility as a metaethical theory. Full details below, along with links to the pre-publication version of the paper.


Title: In Defence of the Epistemological Objection to Divine Command Theory
Journal: SOPHIA: An International Journal of Philosophy and Traditions
Links: Philpapers; Academia; Research Gate
Abstract: Divine Command Theories (DCTs) come in several different forms, but at their core all of these theories claim that certain moral statuses (most typically the status of being obligatory) exist in virtue of the fact that God has commanded them into existence. Several authors argue that this core version of the DCT is vulnerable to an epistemological objection. According to this objection, DCT is deficient because certain groups of moral agents lack epistemic access to God’s commands. But there is confusion as to the precise nature and significance of this objection, and there are several critiques of its key premises. In this article I try to clear up this confusion and address these critiques. I do so in three ways. First, I offer a simplified general version of the objection. Second, I address the leading criticisms of the premises of this objection, focusing in particular on the role of moral risk/uncertainty in our understanding of God’s commands. And third, I outline four possible interpretations of the argument, each with a differing degree of significance for the proponent of the DCT.






Wednesday, July 26, 2017

Episode #27 - Gilbert on the Ethics of Predictive Brain Implants


In this episode I am joined by Frédéric Gilbert. Frédéric is a philosopher and bioethicist who is affiliated with a number of universities and research institutes around the world. He is currently a Scientist Fellow at the University of Washington (UW) in Seattle, US, but also holds a concurrent appointment with the Department of Medicine at the University of British Columbia, Vancouver, Canada. On top of that, he is an ARC DECRA Research Fellow at the University of Tasmania, Australia. We talk about the ethics of predictive brain implants.

You can download the episode here or listen below. You can also subscribe on Stitcher or iTunes (the RSS feed is here).


Show Notes

  • 0:00 - Introduction
  • 1:50 - What is a predictive brain implant?
  • 5:20 - What are we currently using predictive brain implants for?
  • 7:40 - The three types of predictive brain implant
  • 16:30 - Medical issues around brain implants
  • 18:45 - Predictive brain implants and autonomy
  • 22:40 - The effect of advisory implants on autonomy
  • 35:20 - The effect of automated implants on autonomy
  • 38:17 - Empirical findings on the experiences of patients
  • 47:00 - Possible future uses of PBIs
  • 51:25 - Dangers of speculative neuroethics
 

Relevant Links

   

Sunday, July 23, 2017

The Everlasting Check: Understanding Hume's Argument Against Miracles



"I have discovered an argument [...] which, if just, will, with the wise and learned, be an everlasting check to all kinds of superstitious delusion"
(Hume, Of Miracles)

Miraculous events lie at the origins of most religions. Jesus’s resurrection from the dead. Moses’s parting of the Red Sea. Mohammed’s journeys on the back of a winged horse. Joseph Smith’s shenanigans with the Angel Moroni. All these events are, in common parlance, miraculous. If you wish to be a religious believer, you must accept the historical occurrence of at least some of these originating miraculous events. The problem is that you don’t get to observe them — to verify them with your own eyes. You must rely on the testimony of others, often handed down to you through religious texts or a lineage of oral histories. Is this testimonial evidence ever sufficient to warrant belief in the miraculous?

David Hume famously argued that it wasn’t. In his essay Of Miracles, which appears as section 10 of his larger work An Enquiry Concerning Human Understanding, Hume argues that testimonial evidence is unlikely to ever be sufficient to warrant belief in a miraculous event. Hume’s argument has been the subject of much interpretation and debate over the past 250 or so years. Much of that debate obscures or misrepresents what Hume actually argued. Fortunately, the philosopher Alexander George has recently published a beautiful exposition and analysis of Hume’s argument, titled The Everlasting Check: Hume on Miracles, which corrects the record on a number of key points.

George’s analysis is somewhat similar to that of Robert Fogelin, which I have written about previously. In essence, both authors argue that Hume’s argument is routinely misinterpreted as providing an a priori (or ‘in principle’) case against the possibility of testimonial proof of miracles. But this is emphatically not what Hume argues: Hume merely argues that testimonial proof of the miraculous is extremely unlikely. The mistake stems from the fact that Hume’s essay is broken into two parts, and many people assume that both parts present two separate arguments: an a priori argument and an a posteriori argument. They do not. Both parts must be read together as presenting one single argument.

Although Fogelin and George reach similar conclusions, they do so by subtly different means. George’s exposition has the advantage of being more thorough, more up to date, and ultimately more straightforward. On top of that, George makes some important points about how Hume defines miracles and how he relates the evidence for the occurrence of natural laws to the evidence for the reliability of testimony. When you understand these points, much of Hume’s argument falls neatly into place.

So what I want to do over the remainder of this post is to share George’s analysis of Hume’s argument. By the end, I’m hoping that the reader will appreciate the strengths (and limitations) of Hume’s analysis.


1. The Basic Structure of Hume’s Argument
Hume’s argument is about proof of miracles. But what is a ‘miracle’? Hume is pretty clear about this. He defines a miracle as:

Miracle = A violation of the laws of nature.

The problem is that this raises a further question: what is a law of nature? Some people argue that a law of nature is an exceptionless pattern in the natural world. The law of energy conservation or the second law of thermodynamics would be common examples. But to say that a particular law, L, is an exceptionless pattern is also somewhat ambiguous. Is the pattern truly exceptionless? In other words, is it some ontological necessity of the universe? Or is it simply that we have never observed an exception? In other words, is it a strong epistemic inference from our observations?

George argues that Hume favoured a strictly epistemic understanding of the laws of nature. He viewed laws of nature as well-confirmed regularities. We say that there is a law of conservation of energy because we have never observed energy coming into being from nothing; we have only ever observed it being changed from one form to another. But who knows, maybe some day we will observe an exception and this exception will itself be well-confirmed and hence we will have to revise our original conception of the law. That said, laws of nature are, for Hume, very very well-confirmed. We usually have the strongest possible evidence in their favour.

This epistemic understanding of miracles is consistent with Hume’s general empiricism, and it allows for miracles to come in degrees: an event can be more or less miraculous depending on how well-confirmed the regularity with which it is inconsistent really is. Indeed, Hume himself distinguished so-called ‘marvels’ from ‘miracles’ on the grounds that the former were inconsistent with less well-confirmed regularities than the latter.

This is all by way of saying that Hume’s target is best understood in the following manner (note: this is my gloss on Hume, not George’s):

Miracle* = A violation of a very well-confirmed regularity that is observed in the natural world.

What then of Hume’s argument? That argument has a very simple structure, consisting of two premises and a conclusion. George uses formal mathematical terminology to describe the two premises of the argument, referring to them as the ‘first lemma’ and ‘second lemma’, respectively. He also refers to the conclusion as a ‘theorem’. I will follow suit though I don’t think it is strictly necessary:


  • (1) If the falsehood of testimony on behalf of an alleged miraculous religious event is not “more miraculous” than the event itself, then it is not rational to believe in the occurrence of that event on the basis of that testimony. [First Lemma]
  • (2) The falsehood of the testimony we have on behalf of alleged religious miraculous events is not more miraculous than those events themselves. [Second Lemma]
  • (3) Therefore, it is not rational to believe that those miraculous religious events have occurred. [Hume’s Theorem]




I have modified the wording of the lemmas and the theorem slightly from how they appear in George’s original text. I did this in order to make the argument more logically coherent and more consistent with Hume’s aims. As we will see, Hume doesn’t try to argue against all testimonial proofs of the miraculous; merely against the historical testimonial proof provided on behalf of the major religions. Indeed, Hume specifies under what conditions it might be acceptable to believe in a miracle.

For those who care about how this argument maps onto the structure of Hume’s essay, the first lemma is defended in the first part and the second lemma is defended in the second part. Again, to reiterate what was said above, it is important that we don’t disconnect the two parts. You cannot derive Hume’s Theorem from the first part alone.


2. Establishing the First Lemma
The first lemma of Hume’s argument is what has generated most of the debate and controversy. It is significant because it states a general principle or test that should apply to the evaluation of testimonial evidence. As such, it has significance beyond the debate about the occurrence of religious miracles. Here’s how Hume introduces it:

[N]o testimony is sufficient to establish a miracle, unless the testimony be of such a kind, that its falsehood would be more miraculous, than the fact, which it endeavours to establish. And even in that case, there is a mutual destruction of arguments, and the superior only gives us an assurance suitable to that degree of force, which remains, after deducting the inferior. 
(Hume, Of Miracles para 13)

Here’s the basic idea: the evidence we have in favour of the laws of nature is strong. We have observed the same regularities over and over again. So our epistemic confidence in their status is very very high. A miraculous event, by definition, contradicts these regularities. To believe in the miracle we would need to have even stronger evidence in favour of its occurrence than we do in favour of the laws of nature. When the evidence supplied comes in the shape of testimony, then we will only reach this standard when the probability of the testimony being false is lower (i.e. more miraculous) than the probability of the miracle itself.
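
To state Hume’s test compactly (the notation here is my own gloss, not Hume’s or George’s): let M be the claim that the miraculous event occurred, and let F be the claim that the testimony on its behalf is false. Then:

Believe M on the basis of the testimony only if P(F) < P(M)

Even when this condition is met, the ‘mutual destruction of arguments’ means that our residual assurance in M is proportional only to the difference between the two: the force of the superior evidence, after deducting the inferior, comes out as P(M) − P(F).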

The major philosophical problem with this analysis is that it relies on the commensurability of the testimonial evidence in favour of the miraculous and the evidence in favour of the laws of nature. In order to establish the respective improbabilities of the two propositions (testimony is false; miracle is true) we need to establish some common metric along which those probabilities can be measured. This is where Hume made his most important philosophical contribution to the debate. He argued that all evidence ultimately stems from our observations of the world around us. Any particular proposition that we hold to be true about the world (e.g. that dead men stay dead; that energy is conserved; that there is a universal tendency toward increased entropy) is ultimately warranted by observations we have made about the world. This is clearly true for the laws of nature, but it is also true for testimonial proofs. The reason why we are usually confident of testimony is that there is a well-confirmed regularity to the effect that when someone says that an event occurred it usually did occur. To put it another way, Hume is saying that there is a law of testimony. This law of testimony is grounded in our experiences of the relationship between testimony and events in the real world.

Drawing out this equivalence is critical to Hume’s project, and it is a point that interpreters of Hume often miss, according to George. It is only through establishing this equivalence that Hume is entitled to say that we can meaningfully compare the relative probabilities of testimonial proof and a putative law of nature. As he puts it:

Given Hume’s central commensurating analysis in Part 1 of his essay, it is indeed meaningful to evaluate whether the relation ‘is greater than’ holds between an event’s occurrence and a testimony’s falsehood (because we can now appreciate that there is a lawlike claim about nature with which the falsehood of that testimony conflicts). 
(George 2016, 15)

This equivalence is what justifies the general principle stated in Hume’s first lemma. We are only warranted in believing a miracle on the basis of testimony if the warrant for the law of testimony in that instance outweighs the warrant for the law of nature. This only happens when the falsehood of the testimony is more improbable than the falsehood of the law of nature.




3. Establishing the Second Lemma
The first lemma is interesting in and of itself, but it doesn’t get us anywhere close to establishing that it is irrational to believe in historically testified miracles. Obviously, the suspicion underlying the first lemma is that the warrant we have for the law of testimony is much weaker than the warrant we have for any candidate law of nature. After all, even though testimony is usually accurate, there are plenty of times when it isn’t. People misunderstand what they have seen. They fabulate and overinterpret. They suffer from a confirmation bias, whereby they assume that what they saw is consistent with their prior beliefs/desires. They also, sometimes, lie outright for their own gain.

The purpose of the second lemma is to establish that the law of testimony breaks down in the case of historically testified miracles: that the falsehood of the testimony in favour of those miraculous events is not more miraculous than the events themselves. Hume does this by setting out four main lines of argument in favour of the second lemma. These can be briefly summarised as:

(A) The Reliability Argument: Testimonial evidence is strongest when it meets certain conditions of reliability. Specifically, when it comes from (i) many witnesses; (ii) of good sense, education and integrity; (iii) who have reputations that could be tarnished by the evidence they are presenting; and (iv) who testify to an event that occurred in public and so would have enabled the detection of fraud. Hume’s contention is that the evidence for religious miracles doesn’t meet these conditions.

(B) The Propensity to the Marvellous Argument: People in general have a propensity to the marvellous. As Hume puts it, ‘the passion of surprize and wonder, arising from miracles, being an agreeable emotion, gives a sensible tendency towards the belief of those events, from which it is derived…[we] love to partake of satisfaction at second-hand or by rebound, and place a pride and delight in exciting the admiration of others.’ In other words, we have an emotional tendency to accept and repeat claims to the miraculous. What’s more, this tendency is even higher in the case of religious miracles because of the authority and power that is often granted to accepted religious prophets and missionaries. This means there is a tendency to trade accuracy for emotional satisfaction. This may not be done deliberately; it may be entirely subconscious; but it still undermines credibility.

(C) The ‘Ignorant and Barbarous’ Peoples Argument: The people from whom testimony of historical religious miracles emanates are ‘ignorant and barbarous’. They were likely to be predisposed to credulity and misunderstanding; they were not sceptical and disinterested observers. Obviously, Hume’s language in expressing this argument is antiquated and un-PC.

(D) The ‘End to Commonsense’ Argument: When it comes to religious affairs in general, Hume argues that there is an ‘end of commonsense’. In other words, people don’t follow commonsense rules of reasoning and inference when it comes to religious matters. Systems of religious thought tend to blind people to the truth and we should consequently weigh religious claims accordingly. Hume uses a thought experiment to make his point here. He asks us to imagine a group of people who testified to the resurrection of Elizabeth I in order to found a new system of religion. How much weight would we accord their testimony? Very little. Hume suggests that all ‘men of sense’ would ‘reject the fact…without farther examination’.



The first and third of these arguments have to do with the credibility and reliability of the witnesses we have for religious miracles. The strength of these arguments depends on how accurately they represent the historical record. The second and fourth arguments are more general, and focus on people’s tendency to be misled when it comes to the marvellous and the miraculous, particularly when it is religious in nature.

It is important to emphasise that these arguments do not undermine all testimony in favour of miracles. Hume is very clear about this. He thinks it would be possible to have proof of a miracle that overcomes the test established by the first lemma. Indeed, he uses a thought experiment — the ‘eight days of darkness’ thought experiment — to illustrate the conditions under which testimony would provide proof of the miraculous. The thought experiment asks us to imagine that “all authors, in all languages, agree, that, from the first of January 1600, there was a total darkness over the whole earth for eight days: Suppose that the tradition of this extraordinary event is still strong and lively among the people: That all travellers, who return from foreign countries, bring us accounts of the same tradition, without the least variation or contradiction”. This admittedly sets a high bar, but that is what we would expect when testimony is going up against a law of nature, and it at least shows that testimony might be sufficient on some occasions.


4. Limitations of Hume’s Argument
Hume’s argument has many limitations. My belief is that the general principle established by the first lemma is fairly robust: it is difficult to see how else testimony could override a law of nature. The arguments adduced in support of the second lemma are much less robust. If you read any religious apologetics, you will know that arguments (A) and (C) are highly contested. Apologists introduce all sorts of arguments for thinking that the testimony we have is more reliable than Hume claims. For instance, in relation to the resurrection of Jesus, they will argue that we do have multiple witnesses (perhaps as many as 500), that some of them were well-educated and disinterested, and that some of them did suffer greatly for what they believed. Now, I tend to think that Hume is still, broadly speaking, correct and that the apologetical arguments ultimately fail, but clearly a lot more work would be needed to address each and every one of these contentions and thereby shore up the support for the second lemma.

I also think that arguments (B) and (D) are pretty contentious. Hume is certainly on to something. There are lots of putative miracle claims out there, and systems of religious thought sometimes do come with benefits for believers which may cause them to be more credulous than they ought to be. Furthermore, psychological evidence suggests that we do have a propensity to over-ascribe agency to events in the natural world, and to misinterpret what our senses have shown us. These errors probably lie at the foundation of many miracle claims. But to dismiss all religion as anathema to commonsense and rationality seems to go too far to me. I think, along with Oscar Wilde, that ‘commonsense’ is not that common and that it is a mistake to assume that secular/non-religious people have some innate epistemic superiority over their religious brethren. It’s more complicated than that.

Despite this, I would argue that Hume provides a pretty good framework for evaluating religious miracle claims and that while he may be wrong (or too superficial and glib) on some of the critical details, anyone who cares about this issue in the modern day plays on the terrain that he defined nearly three centuries ago.




Friday, July 21, 2017

The Argument from Irreducible Complexity


Bacterial flagella


When I was a student, well over a decade ago now, intelligent design was all the rage. It was the latest religiously-inspired threat to Darwinism (though it tried to hide its religious origins). It argued that Darwinism could never account for certain forms of adaptation that we see in the natural world.

What made intelligent design different from its forebears was its seeming scientific sophistication. Proponents of intelligent design were often well-qualified scientists and mathematicians, and they dressed up their arguments with the latest findings from microbiology and abstruse applications of probability theory. My sense is that the fad for intelligent design has faded in the intervening years, though I have no doubt that it still has its proponents.

That’s all really by way of an apology for the following post, which is going to revisit some of the arguments touted by intelligent design proponents, arguments that have long been challenged and dismissed by scientists and philosophers alike. My excuse for this is that I have recently been reading Benjamin Jantzen’s excellent book An Introduction to Design Arguments which goes through pretty much every single design argument in the history of Western thought, and subjects them all to fair and serious criticism. He has two chapters on arguments from the intelligent design movement: one based on Michael Behe’s argument from irreducible complexity and one based on William Dembski’s argument from specified complexity. Both arguments get at the same basic point, but arrive there by different means. I want to look at the argument from irreducible complexity in the remainder of this post, summarising some of Jantzen’s thoughts on it.

I’m hoping that this is of interest to people who are familiar with the idea of irreducible complexity as well as those who are not. If nothing else, the following analysis helps to clarify the structure of the argument from irreducible complexity, and to highlight some important conceptual issues when it comes to interpretation of natural phenomena.


1. The Argument Itself
Let’s start by clarifying the structure of the argument. The basic idea is that certain natural phenomena, specifically features of biological organisms, display a property that cannot be accounted for by mainstream evolutionary theory. In Behe’s case the relevant property is that of irreducible complexity. But what is this property? To answer that, we’ll need to look at Behe’s definition of an ‘irreducibly complex system’:

Irreducibly complex system (ICS) = Any single system which is composed of several well-matched, interacting parts that contribute to the basic function, and where the removal of any one of the parts causes the system to cease functioning. 
(Behe 1996, 39)

There is a problem with this definition, but we’ll get to that in the next section. For now, it would probably help us to wrap our heads around the notion if we had an example of an ICS. Behe’s favourite example is the bacterial flagellum. This is a thin, filament-like appendage that protrudes from the cell membrane of many species of bacteria. It is used to help propel the bacteria through liquid. When observed with the aid of a microscope, one of the remarkable features of a bacterial flagellum is that it functions like a rotary motor, where the flagellum is like a freely-rotating axle, supported by a complex assemblage of protein parts. Behe’s contention, and we can accept this, is that if you removed one component from the complex assemblage it would cease to function as a rotary motor.

A slightly more familiar example, and one also used by Behe, is a mousetrap (an old-fashioned, spring-loaded one). This is made up of fewer functional parts, but every one of them is essential if the mousetrap is going to perform its desired function of trapping — and unfortunately killing — mice and other small vermin. Thus, it is an ICS because if you remove one of the parts it ceases to function as intended.

Hopefully this suffices for understanding the property of irreducible complexity. What about the argument from irreducible complexity? That argument begins by identifying an ICS and then works like this:


  • (1) X is an irreducibly complex system. (For illustrative purposes say ‘X= bacterial flagellum’)

  • (2) If X is an irreducibly complex system, then X must have been brought about by intelligent design.

  • (3) Therefore, X (the bacterial flagellum) must have been intelligently designed.


Two important interpretive points about this argument. First, note that the use of the variable-term X is significant. While the bacterial flagellum is the most widely-discussed example, the idea behind the argument is that there are many such ICSs in nature and hence many things in need of an explanation in terms of intelligent agency. Second, note the conclusion. The claim is not that God must have created the bacterial flagellum but, rather, that an intelligent designer did. For tactical reasons, proponents of intelligent design liked to hide their religious motivations, trying to claim that their theory was scientific, not religious in nature. This was largely done in order to get around certain legal prohibitions on the teaching of religion under US constitutional law. I’m not too interested in that here though. I view the intelligent design movement as a religious one, and hence the arguments they proffer as on a par with pretty much all design arguments.

Now that we are clear on the structure of the argument, we can proceed to critically evaluate it. There are two major criticisms I want to discuss, both drawn from Jantzen’s book.


2. The First Criticism: Problems with the concept of irreducible complexity
The first criticism takes issue with the first premise. Appropriately enough. That premise claims that there are readily identifiable ICSs in the natural world. But is this true? Go back for a moment to Behe’s definition (given above). It defines an ICS in relation to a so-called ‘basic function’. The idea is that the basic function of the bacterial flagellum is to propel a bacterium through liquid. All the protein parts of the rotary-motor are directed towards the performance of that basic function, and this is what makes it right and proper to say that removal of one of those parts would cause the system to cease functioning. The same goes for the mousetrap. The basic function of the mousetrap is to capture and kill mice. All the parts of the system are geared toward that end.

That probably sounds fine, but there’s a subtle interpretive problem lurking in the background. It’s easy enough to say that the basic function of the mousetrap is to trap and kill mice. After all, we know the purpose for which it was designed. We know why all the parts are arranged in the format that they are. When it comes to natural objects, it’s a very different thing. Every object, organism, or event in the physical world causes many effects. A mouth is a useful food-grinding device, but it is also a breeding ground for bacteria, a signalling tool (e.g. smiles and smirks), a pleasure organ, and more. To say that one of these effects constitutes its ‘basic function’ is contentious. As Jantzen puts it:

Physical systems that were not crafted by human hands do not come with inscriptions telling us what they are for. 
(Jantzen 2014, 191)

We cannot read the basic function of an alleged ICS off the book of nature. We need interpretive principles. One such principle would be to appeal to the intentions of an intelligent designer. But proponents of intelligent design don’t like to do this because they try to remain agnostic about who the designer is. Furthermore, even if they admitted to being orthodox theists, there would be problems. The mind of God is a mysterious thing. Many a theologian has squandered a career trying to guess at His intentions. Some say we should not even try: God has beyond-our-ken reasons for action.

Another possibility is to try to naturalise the concept of a basic function. But this too poses a dilemma for the proponent of intelligent design. One popular way of naturalising basic functions is to appeal to the theory of evolution by natural selection — i.e. to argue that the basic function of a system is the one that was favoured by natural selection — but since the goal of intelligent design theorists is to undermine natural selection this solution is not available to them. The other way to do it is to define the basic function of a system in terms of the causal contribution that the system makes to some larger system. Thus, for instance, you can say that the basic function of the lens of the eye is to focus light rays because this contributes to the larger system that enables us to see.

The main problem with this second approach is that it simply pushes the problem back a further step. It defines the basic functionality of a sub-system by reference to the functionality of the larger system of which it is a part. But then the question becomes: what is the function of that larger system? It’s only once we have settled the answer to that question that we can figure out whether the sub-system is indeed an ICS, which lands us back with the original problem: that basic functions cannot simply be read off the book of nature.

To summarise:


  • (4) In order to successfully identify an ICS, you must be able to identify the basic function of the system in question.

  • (5) In order to determine the basic function of a system you must either: (a) appeal to the intentions of the designer of the system; (b) appeal to the purpose for which the system has been naturally selected; or (c) identify the causal contribution that the system makes to some super-system with function y.

  • (6) A proponent of intelligent design cannot appeal to the intentions of the designer, since they wish to remain agnostic about the intentions of the designer.

  • (7) A proponent of intelligent design cannot appeal to natural selection, since their goal is to deny its truth.

  • (8) Appealing to the causal contribution that the system makes to some super-system simply pushes the problem back a step.




This leads to the negation of premise (1), i.e. the claim that we have successfully identified an ICS.

This is a somewhat technical objection and it’s unlikely to have much intuitive appeal. It just seems too obvious to most people that the basic function of something like the bacterial flagellum is to propel a bacterium; that the basic function of the eye is to see; that the basic function of the teeth is to grind food; and so on. It’s only if you really interrogate our reasons for thinking that this is obvious that you begin to see the problem.

Fortunately, there are other ways to object to the argument.


3. Second criticism: The problem of evolutionary co-optation
The main criticism of the argument from irreducible complexity focuses on premise (2) of the argument. That premise claims that the only possible explanation for the existence of an ICS is that it was brought into being by an intelligent designer. But why think that? Aren’t there other plausible explanations for the existence of an ICS? Couldn’t natural selection do the trick?


  • (9) Natural selection can explain the existence of an ICS.


The proponent of intelligent design says ‘no’. They argue that natural selection — if they accept the idea at all — can only work in a gradual, step-wise fashion. This might enable the development of some systems that display interesting adaptations and functionality, but it can only work if every step in the chain has a function (i.e. contributes positively to the organism’s survival and reproduction). The problem is that an ICS cannot evolve in a gradual, step-wise fashion. Suppose you have forty different protein parts that need to be arranged in a very precise way in order for the bacterial flagellum to function as it does. It is beyond the bounds of credibility to believe that this could happen through random mutations in an organism’s genetic code. Too many things have to line up in a precise order for that to happen. It would be like having a forty-wheeled combination lock, randomly spinning each wheel, and then hoping to end up with the right sequence. You might get two or three in the right place, but not all forty. You need intelligent designers to bring about improbable (and functional) arrangements.
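
To put rough numbers on the analogy (the figures here are my own illustration, not Behe’s): if each of the forty wheels has ten positions, then a single random spin of all forty wheels has a probability of (1/10)^40 = 10^-40 of producing the target sequence. Even checking a billion combinations every second for the entire history of the universe (roughly 4 × 10^17 seconds) would sample only about 4 × 10^26 combinations, a vanishingly small fraction of the 10^40 possibilities. That, at any rate, is the intuition the proponent of intelligent design is trading on.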


  • (10) Natural selection cannot explain the existence of an ICS because natural selection only accounts for gradual, step-wise changes. An ICS cannot emerge from gradual, step-wise changes.


The evolutionist’s response to this is pretty straightforward: you’re thinking about it in the wrong way. It may well be true that the bacterial flagellum is, currently, irreducibly complex, such that if you altered or removed one part it would no longer function as a rotary motor. But that doesn’t mean that the parts that currently make up the flagellum couldn’t have had other functions over the course of evolutionary history, or couldn’t have contributed to other systems that are not irreducibly complex over that period of time. The flagellum is the system that has emerged at the end of an evolutionary sequence, but evolution did not have that system in mind when it started out. Evolution isn’t an intelligently directed process. Anything that works (that contributes to survival or reproduction) gets preserved and multiplied, and the bits and pieces that work can get merged into other systems that work. So one particular protein may have contributed to a system that helped an organism at one point in time, but then get co-opted into another, newer, system at a later point in time.

That’s the critical point. The history of evolution is a history of co-optation. Just like the mechanic who might take a part from an old car engine in order to make a new improved one, so too does evolution repurpose and reorganise parts into new, improved systems. This is effectively what other microbiologists have pointed out in response to Behe. They’ve noted that the proteins in the bacterial flagellum have other uses in other biological systems. Furthermore, many evolutionary texts are filled with examples of the co-optation process. Jantzen has a very nice example in his book about the evolution of flying insects. He highlights research showing how they evolved from sea-dwelling crustacean ancestors. In the process, the thoracic gill plates of the ancestors (whose original purpose was to facilitate oxygen respiration under water) were repurposed in order to enable the insects to push themselves along the surface of the water. They then evolved to enable the insects to ‘sail’ along the surface of the water, before finally (and I’m skipping several steps here) evolving into full-blown wings.


  • (11) Natural selection can explain the evolution of an ICS through the process of co-optation, i.e. through the fact that the component parts of biological systems often get repurposed and reorganised into new systems over the course of evolutionary history.




This might still leave a puzzle as to why natural selection has favoured the creation of ICSs. After all, ICSs are highly vulnerable to change: mess around with one component and the system ceases to function. Why wouldn’t there be some inbuilt redundancy of parts? There are many responses to this. It is quite possible that an organism (or, rather, species) could survive the loss of one ICS. There are, after all, many ways of making a living, as the diversity of life on earth proves. But also, vulnerable and fragile systems can emerge from less vulnerable ones. A.G. Cairns-Smith famously used the example of a stone arch to illustrate the point. An arch is irreducibly complex. Remove one stone and the whole thing collapses. But arches are built by having scaffolding in place during the construction process. It’s only once the keystone is in place that the scaffolding is removed and the system becomes more vulnerable to change. Many alleged ICSs could have emerged through an analogous process.

Okay so that’s it for this post. Hopefully this has effectively explained the concept of irreducible complexity and the two main criticisms of the argument. If you have read this far, I trust it has been of interest to you, even if it does retread old ground.




Monday, July 17, 2017

Episode #26 - Behan on Technopolitics and the Automation of the State


In this episode I talk to Anthony Behan. Anthony is a technologist with an interest in the political and legal aspects of technology. We have a wide-ranging discussion about the automation of the law and the politics of technology.  The conversation is based on Anthony's thesis ‘The Politics of Technology: An Assessment of the Barriers to Law Enforcement Automation in Ireland’, (a link to which is available in the links section below).

You can download the episode here or listen below. You can also subscribe on Stitcher or iTunes (the RSS feed is here).



Show Notes

  • 0:00 - Introduction
  • 2:35 - The relationship between technology and humanity
  • 5:25 - Technology and the legitimacy of the state
  • 8:15 - Is the state a kind of technology?
  • 13:20 - Does technology have a political orientation?
  • 20:20 - Automated traffic monitoring as a case study
  • 24:40 - Studying automated traffic monitoring in Ireland
  • 30:30 - The mismatch between technology and legal procedure
  • 33:58 - Does technology create new forms of governance or does it just make old forms more efficient?
  • 39:40 - The problem of discretion
  • 43:45 - The feminist gap in the debate about the automation of the state
  • 49:15 - A mindful approach to automation
  • 53:00 - Postcolonialism and resistance to automation
 

Relevant Links

 

Saturday, July 15, 2017

Slaves to the Machine: Understanding the Paradox of Transhumanism




TL;DR: This is the text of a keynote lecture I delivered to the 'Transcending Humanity' conference at Tubingen University on the 13th July 2017. It discusses the alleged tension between the transhumanist ideal of biological freedom and the glorification of technological means to that freedom. In the talk, I argue that the tension is superficial because the concept of freedom is multidimensional.


1. The Paradox of Transhumanism
In September of 1960, in the official journal of the American Rocket Society (now known as the American Institute of Aeronautics and Astronautics), Manfred E Clynes and Nathan S Kline published a ground-breaking article. Manfred Clynes was an Austrian-born, Australian-raised polymath. He was educated in engineering and music, and he remains an original and creative inventor, with over 40 patents to his name, as well as a competent concert pianist. Nathan Kline was a Manhattan-based psychopharmacologist, one of the pioneers of the field, responsible for developing drugs to treat schizophrenia and depression. Their joint article was something of a diversion from their main lines of research, but it has arguably had more cultural impact than the rest of their work put together.

To understand it, we need to understand the cultural context in which it was written. September 1960 was the height of the Cold War. The Soviet Union had kick-started the space race three years earlier with the successful launch of its two Sputnik satellites into Earth’s orbit. The United States was scrambling to make up lost ground. The best and brightest scientific talent was being marshalled to the cause. Clynes and Kline’s article was a contribution to the space race effort. But instead of offering practical proposals for getting man into space, they offered a more abstract, conceptual perspective. They looked at the biological challenge of spaceflight. The problem, as they described it, was that humans were not biologically adapted to spaceflight. They could not breathe outside the earth’s atmosphere, and once beyond the earth’s magnetosphere they would be bombarded by nasty solar radiation. In short, humans were not ‘free’ to explore space.

What could be done to solve the problem? This is where Clynes and Kline made their bold proposal. The standard approach was to create mini-environments in space that were relatively congenial to human beings. Hence, the oxygen-filled spaceship and the hyperprotective spacesuit. This would suffice for short-term compatibility between fragile human biological tissue and the harsh environment of space, but it would be a precarious solution at best:

Artificial atmospheres encapsulated in some sort of enclosure constitute only temporizing, and dangerous temporizing at that, since we place ourselves in the same position as a fish taking a small quantity of water along with him to live on land. The bubble all too easily bursts.

If we ever wanted to do more in space — if we wanted to travel to the farthest reaches of our solar system (and beyond) — a different approach would be needed. We would have to alter our physiology through the creation of technological substitutes for, and extensions of, our innate biology:

If man attempts partial adaptation to space conditions, instead of insisting on carrying his whole environment along with him, a number of new possibilities appear. One is then led to think about the incorporation of integral exogenous devices to bring about the biological changes which might be necessary in man’s homeostatic mechanisms to allow him to live in space qua natura.

This is where Clynes and Kline made their most famous contribution to our culture. What should we call a human being that was technologically enhanced so as to adapt to the environment of space? Their suggested neologism was the “cyborg” - the cybernetic organism. This was the first recorded use of the term — a term that now generates over 40 million results on Google.

Modern transhumanists share something with Clynes and Kline. They are not interested in winning the Cold War nor, necessarily, exploring the outer reaches of space (though some are), but they are acutely aware of the limitations of human biology. They agree with Clynes and Kline in thinking that, given our current biological predicament, we are ‘unfree’. They wish to use technology to escape from this predicament, to release us from the shackles of evolution. Consequently, transhumanism is frequently understood as a liberation movement — complete with its own liberation theology, according to some critics — that sees technology as an instrument of freedom. Attend any transhumanist conference, or read any transhumanist article, and you will become palpably aware of this. You can’t escape the breathless enthusiasm with which transhumanists approach the latest scientific research in biotechnology, genetics, robotics and artificial intelligence. They eagerly await the critical technologies that will enable us to escape from our biological prison.

But this enthusiasm seems to entail a strange paradox. The journalist Mark O’Connell captures it well in his recent book To Be a Machine. Having lived with, observed, and interviewed some of the leading figures in the transhumanist movement over the past couple of years, O’Connell could not help but be disturbed by the faith they placed in technology:

[T]ranshumanism is a liberation movement advocating nothing less than a total emancipation from biology itself. There is another way of seeing this, an equal and opposite interpretation, which is that this apparent liberation would in reality be nothing less than a final and total enslavement to technology. 
(O’Connell 2017, 6)

This then is the ‘paradox of transhumanism’: if we want to free ourselves in the manner envisaged by contemporary transhumanists, we must swap our biological prison for a technological one.

I have to say I sympathise with this understanding of the paradox. In the past five or six years, I have developed an increasingly ambivalent relationship with technology. Where once I saw technology as a tool that opened up new vistas of potentiality, I now see more sinister forces gathering on the horizon. In my own work I have written about the ’threat of algocracy’, i.e. the threat to democratic processes if humans end up being governed entirely by computer-programmed algorithms. I see this as part and parcel of the paradox identified by O’Connell. After all, the machines to which we might be enslaved speak the language of the algorithm. If we are to be their slaves, it will be an algorithmic form of enslavement.

So what I want to do in the remainder of this talk is to probe the paradox of transhumanism from several different angles. Specifically, I want to ask and answer the following three questions:

(1) How should we understand the kind of freedom desired by transhumanists?
(2) How might this lead to our technological enslavement?
(3) Can the paradox be resolved?

In the process of answering these questions, I will make one basic argument: human freedom is a complex, multidimensional phenomenon. Perfect freedom is a practical (possibly a metaphysical) impossibility. So to say that transhumanism entails a paradox is misleading. Transhumanism entails a tradeoff between different sources and forms of unfreedom. The question is whether this tradeoff is better or worse than our current predicament.


2. What is Transhumanist Freedom Anyway?
How should we understand the transhumanist desire for freedom? Let’s start by considering the nature of freedom itself. Broadly speaking, there are two concepts of freedom that are used in philosophical discourse:

Metaphysical Freedom: This is freedom in its purest sense. This is the actual ability to make choices about our lives without external determination or interference. When people discuss this form of freedom they often use the term ‘freedom of will’ or ‘free will’ and they will debate different theories such as libertarianism, compatibilism and incompatibilism. In order to have this type of freedom, two things are important: (i) the ability to do otherwise than we might have done (the alternative possibilities condition) and (ii) the ability to be the source of our own decisions (the sourcehood condition). There are many different interpretations of both conditions, and many different views on which is more important.

Political Freedom: This is freedom in a more restricted sense. This is the ability to make choices about our lives that are authentic representations of our own preferences, without interference or determination from other human beings, whether they be acting individually or collectively (through institutions or governments). This is the kind of freedom that animates most political debates about ‘liberty’, ‘freedom of speech’, ‘freedom of conscience’ and so on.

Obviously, metaphysical freedom is the more basic category. Political freedom is a sub-category of metaphysical freedom. This means it is possible for us to have political freedom without having metaphysical freedom. My general feeling is that you either believe in metaphysical freedom or you don’t. That is to say, either you believe that we have free will in its purest sense, or you don’t; attempts to redefine and reconceptualise the concept of free will tend to water it down to such an extent that it becomes indistinguishable from other ‘lesser forms’ of freedom. This is because metaphysical freedom seems to require an almost total absence of dependency on external causal forces, and it is really only if you believe in the idea of non-natural souls or agents that you can get your head around the total absence of such dependency. (Put a bookmark in that idea for now, we will return to it later).

Political freedom is different. Even people who are deeply sceptical about metaphysical freedom tend to be more optimistic about the possibility of limiting interference or determination by other external agents. Thus, it is possible to be politically free even if it is not possible to be metaphysically free. It is worth dwelling on the different types of political freedom for a moment; doing so will pay dividends later on when we look at transhumanist freedom and enslavement to technology. Following Isaiah Berlin’s classic work, we can distinguish between positive and negative senses of political freedom. In the positive sense, political freedom requires that individuals be provided with the means to act in a way that is truly consistent with their own preferences (and so forth). In the negative sense, political freedom requires the absence of interference or limitation by other agents.

I’m going to set the positive sense of freedom to one side for the remainder of this talk, though you may be able to detect its ghostly presence in some aspects of the discussion. For now, I want to further clarify the negative sense. There are two leading theories of political freedom in the negative sense. The distinction between the two can be explained by reference to two famous historical thought experiments. The first is:

The Highwayman: You are living in 17th century Great Britain. You are travelling by stagecoach when you are waylaid by a masked ‘highwayman’. The highwayman points his pistol at you and offers you a deal: ‘your money or your life?’ You give him your money and he lets you on your way.

Here is the question: did you give him your money freely? According to proponents of a theory known as ‘freedom as non-interference’, you did not. The highwayman interfered with your choice by coercing you into giving him the money: he exerted some active influence over your will. Freedom as non-interference is a very popular and influential theory in contemporary liberal political theory, but some people argue that it doesn’t cover everything that should be covered by a political concept of freedom. This is drawn out by the second thought experiment.

The Happy Slave: You are a slave, legally owned by another human being. But you are a happy slave. Your master treats you well and, as luck would have it, what he wants you to do lines up with what you prefer to do. Consequently, he never interferes with your choices. You live in harmony with one another.

Here’s the question: are you free? The obvious answer is ‘no’. Indeed, life as a slave is the paradigm of unfreedom. But, interestingly, this is a type of unfreedom that is not captured by freedom as non-interference. After all, in the example just given there is never any interference with your actions. This is where the second theory of negative freedom comes into play. According to proponents of something called ‘freedom as non-domination’, we lack political freedom if we live under the dominion of another agent. In other words, if we have to ingratiate ourselves with them and rely on their good will to get by. The problem with the happy slave is that, no matter how happy he may be, he lives in a state of domination.

Okay, we covered a lot of conceptual ground just there. Let’s get our bearings by drawing a map of the territory. We start with the general concept of metaphysical freedom — the lack of causally determining influences on the human will — and then move down to the narrower political concept of freedom. Political freedom is necessary but not sufficient for metaphysical freedom. Political freedom comes in positive and negative forms, with there being two major specifications of negative freedom: freedom as non-interference (FNI) and freedom as non-domination (FND), as sketched below.
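
In outline (a rough text rendering of the conceptual map, since the slide itself is not reproduced here):

Metaphysical freedom (no causally determining influences on the will)
  └─ Political freedom (no interference or determination by other human agents)
      ├─ Positive freedom (provision of the means to act on one’s own preferences)
      └─ Negative freedom (absence of interference or limitation by other agents)
          ├─ FNI: freedom as non-interference
          └─ FND: freedom as non-domination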




The question I now want to turn to is how we should understand the transhumanist liberation project. How does it fit into this conceptual map? The position I will defend is that transhumanist freedom is a distinct sub-category of freedom. It is not full-blown metaphysical freedom (this is important, for reasons we shall get back to later on) and it is not just another form of political freedom. It is, rather, adjacent to and distinct from political freedom.

Transhumanists are concerned with limitations on human freedom that are grounded in our biology (this links back, once more, to Clynes and Kline’s project). Thus, transhumanist freedom is ‘biological freedom’:

Biological Freedom: The ability to make choices about our lives without being constrained by the limitations that are inherent in our biological constitution.

What kinds of biological limitations concern transhumanists? David Pearce, one of the co-founders of the World Transhumanist Association (now Humanity+), argues that transhumanists are motivated by the three ‘supers’: (i) superlongevity, i.e. the desire to have extra-long lives; (ii) superintelligence, i.e. the desire to be smarter than we currently are; and (iii) superwellbeing, i.e. the desire to live in a state of heightened bliss. The desire for each of these three ‘supers’ stems from a different biological limitation. Superlongevity is motivated by the biological limitation of death: one of the unfortunate facts about our current biological predicament is that we have been equipped with biological machinery that tends to decay and cease functioning after about 80 years. Superintelligence is motivated by the information-processing limitations of the human brain: our brains are marvels of evolution, but they function in odd ways, limiting our knowledge and understanding of the world around us. And superwellbeing is motivated by the biological constraints on happiness. This is Pearce’s unique contribution to the transhumanist debate. He notes that some people are equipped with lower biological baselines of wellbeing (e.g. people who suffer from depression). This puts a limit on how happy they can be. We should try to overcome this limit.

There are other forms of biological freedom in the transhumanist movement. A prominent sub-section of the transhumanist community is interested in something called ‘morphological freedom’, which is essentially freedom from biological form. Fans of morphological freedom want to change their physical constitution so that they can experience different forms of physical embodiment. The slide shows some examples of this.

For what it’s worth, I think characterising transhumanism as a liberation movement with the concept of biological freedom at its core is better than alternative characterisations, such as viewing it as a religion or a social movement concerned with technological change per se.

There are two advantages to characterising transhumanism in this way. The first is that it is reasonably pluralistic: it covers most of the dominant strands within the transhumanist community, without necessarily committing to a singular view of what the good transhumanist life consists of. If you ask a transhumanist what they want, beyond the freedom from biological constraint, you’ll get a lot of different views. The second is that it places transhumanism within an interesting historical arc. It has long been argued — by James Hughes in particular — that transhumanism is a continuation of the Enlightenment project. Indeed, some of the leading figures in the Enlightenment project were proto-transhumanists: the Marquis de Condorcet being the famous example. Where the Enlightenment project concerned itself with developing freedom through the celebration of reason and the desire for political change — i.e. to address the sources of unfreedom that arose from the behaviour of other human beings — the transhumanist project concerns itself with the next logical step in the march towards freedom. Transhumanists are, in essence, saying ‘Look we have got the basic hang of political freedom — we know how other humans limit us and we have plausible political models for overcoming those limits — now let’s focus on another major source of unfreedom: the biological one.’

Let’s take a breath here. The image below places the biological concept of freedom into the conceptual map of freedom from earlier on. The argument to this point is that transhumanism is concerned with a distinct type of freedom, namely: biological freedom. This type of freedom insists that we overcome biological limitations, particularly those associated with death, intelligence and well-being. The next question is whether, in their zeal to overcome those limitations, transhumanists make a Faustian pact with technology.





3. Are we becoming slaves to the machine?
The transhumanist hope of achieving biological freedom certainly places an inordinate amount of faith in technology. On the face of it, this makes a lot of sense. Humans have been using technology to overcome our biological limitations for quite some time. One of the ancient cousins of modern-day Homo sapiens is Homo habilis. Homo habilis used primitive stone tools to butcher and skin animals, thereby overcoming the biological limitations of hands, feet and teeth. We have been elaborating on this same theme ever since. From the birth of agriculture to the dawn of the computer age, we have been using technology to accentuate and extend our biological capacities.

What is interesting about the technological developments thus far is that they have generally left our basic biological form unchanged. Technology is largely something that is external to our bodies, something that we use to facilitate and mediate our interactions with the world. This is as true of the Acheulean handaxe as it is of the smartphone. Of course, this isn’t the full picture. Some of our technological developments have involved tinkering with our biological form. Consider vaccination: this involves reprogramming the body’s immune system. Likewise there are some prosthetic technologies — artificial limbs, cochlear implants, pacemakers, deep brain stimulators — that involve replacing or augmenting biological systems. These technological developments are the first step towards the creation of literal cyborgs (ones that Clynes and Kline would have embraced). Still, the developments on this front have been relatively modest, with most of the effort focused on restoring functionality to those who have lost it, and not on transcending limitations in the manner desired by transhumanists.

So this is where we currently are. We have made impressive gains in the use of externalising technologies to augment and transcend human biology; we have made modest gains in the use of internal technologies. Transhumanists would like to see more of this happening, and at a faster pace. Where then is the paradox of transhumanism? In what sense are we trading a biological prison for a technological one? We can answer that question in two stages: first, by considering in more detail the different possible relations between humans and technology, and then by considering the various ways in which those relations can compromise freedom.

There have been many attempts to categorise human-technology relationships over the years. I don’t claim that the following categorisation is the final and definitive one, merely that it captures something important for present purposes. My suggestion is that we can categorise human-technology relations along two major dimensions: (i) the internal-external dimension and (ii) the complementary-competitive dimension. The internal-external dimension should be straightforward enough, as it captures the distinctions mentioned above. It is a true dimension, continuous rather than discrete in form. In other words, you cannot always neatly categorise a technology as being internal or external to our biology. Proponents of distributed and extended cognition, for example, will insist that we sometimes form fully integrated systems with our ‘external’ technologies, thus on occasion collapsing the internal-external distinction.

The complementary-competitive dimension is a little more opaque, and possibly more discontinuous. It comes from the work of the complexity theorist David Krakauer, who has developed it specifically in relation to modern computer technology and how it differs from historical forms of technological enhancement. As he sees it, most of our historical technologies, be they handaxes, spades, abaci or whatever, have a tendency to complement human biology. In other words, they enable humans to form beneficial partnerships with technology, oftentimes extending their innate biological capacities in the process. Thus, using a handaxe will strengthen your arm muscles and using an abacus will strengthen your cognitive ones. Things started to change with the Industrial Revolution, when humans created machines that fully replaced human physical labour. They have started to change even more with the advent of computer technology that can fully replace human cognitive labour. Thus it seems that technology no longer simply complements humanity; it competes with us.

I think what Krakauer says about external technologies also applies equally well to internal technologies. Some internal technologies try to work with our innate biological capacities, extending our powers and enabling greater insight and understanding. A perceptual implant like an artificial retina or cochlear implant is a good example of this. Contrariwise, there are some internal technologies that effectively bypass our innate biological capacities, carrying out tasks on our behalf, without any direct or meaningful input from us. Some brain implants seem to work like this, radically altering our behaviour without our direct control or input. They are like mini autonomous robots implanted into our skulls, taking over from our biology, not complementing it.

I could go on, but this should suffice for understanding the two dimensions along which we can categorise our relationships with technology. Now, even though I said that these could be viewed as true dimensions (i.e. as continuous rather than discrete in nature), for the purposes of simplification, I want to use the two dimensions to construct a two-by-two matrix for categorising our relationships with technology.



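To make the matrix concrete, here is a minimal sketch in Python. It is purely illustrative: the quadrant labels simply follow the two dimensions described above, the example technologies are the ones used in this essay, and the discrete categories deliberately simplify what I have said are really continuous dimensions.

```python
from dataclasses import dataclass

@dataclass
class Technology:
    name: str
    locus: str     # 'external' or 'internal' (where it sits relative to the body)
    relation: str  # 'complementary' or 'competitive' (how it relates to our biology)

    @property
    def quadrant(self) -> str:
        # Each technology falls into one of the four quadrants of the matrix.
        return f"{self.locus}-{self.relation}"

EXAMPLES = [
    Technology("handaxe", "external", "complementary"),
    Technology("abacus", "external", "complementary"),
    Technology("industrial machinery", "external", "competitive"),
    Technology("AI/robotics", "external", "competitive"),
    Technology("cochlear implant", "internal", "complementary"),
    Technology("autonomous brain implant", "internal", "competitive"),
]

for tech in EXAMPLES:
    print(f"{tech.name:>25} -> {tech.quadrant}")
```

The ‘down and to the right’ movement discussed below corresponds to technologies migrating from the external-complementary quadrant towards the internal-competitive one.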
This categorisation system muddies the waters somewhat from our initial, optimistic view of technology-as-tool. It still seems to be the case that technology can help us to transcend or overcome our biological limitations. We can use computers, the internet and artificial intelligence to greatly enhance and extend our knowledge and understanding of the world. We can use technologies to produce more valuable things and to get more of what we want, thereby enhancing our well-being. We could also, potentially, use technology to extend our lives, either by generating biotechnological breakthroughs that enable cell-repair and preservation (nanorobots in the bloodstream, anyone?), or, more fancifully, by fusing ourselves with machines to become complete cyborgs. This could be achieved, in part, through external technologies but, more likely in the long term, through the use of internal technologies that directly fuse with our biology. At this point we will reach an apotheosis in our relationship with technology, becoming one with the machine. In this sense, technology really does seem to hold out the possibility of achieving biological freedom.

The mud in the water comes from the fact that this reliance on machines leads to new forms of limitation and dependency, and hence new forms of unfreedom. This is where the paradox of transhumanism arises. If we want to take advantage of the new powers and abilities afforded to us by machines, it seems like we must accept technological interference, manipulation, and domination.

There are many ways in which technology might be a source of unfreedom. For illustrative purposes, I’ll just mention three:

Technological coercion: This arises when conditions are attached to the use of technology. In other words, we only get to take advantage of its powers if we explicitly or tacitly agree to forgo something else. We see this happening right now. Think about AI assistants, social media services or fitness-tracking devices. They arguably improve our lives in various ways, but we are often only allowed to use them if we agree to give up something important (e.g. our privacy) or submit to something unpleasant (e.g. relentless advertising). Sometimes the bargain involves genuine coercion, e.g. an insurance company promising you lower premiums if you agree to wear a health-monitoring bracelet at all times; sometimes the coercive effect is more subtle, e.g. Facebook offering you an endless stream of distracting information in return for personal information that it can sell to advertisers. But in both cases there is an interference with your ability to make choices for yourself.

Technological domination: This arises when technology provides genuine benefits to us without actually interfering with our choices, but nevertheless exerts a dominating influence over our lives because it could be used to interfere with us if we step out of line. Some people argue that our current situation of mass surveillance leads to technological domination. As we are now all too aware, our digital devices are constantly tracking and surveilling our every move. The information gathered is used for various purposes: to grant access to credit, to push advertising, to monitor terrorist activities, to check our mental health and emotional well-being. Some people embrace this digital panopticon, arguing that it can be used for great good. Sebastian Thrun, the co-founder of Google X, for example, imagines a future in which we are constantly monitored for medical diagnostic purposes. He thinks this could help us to avoid bad health outcomes. But the pessimists will argue that living in a digital panopticon is akin to living as a happy slave. You have the illusion of freedom, nothing more.

Technological dependency/vulnerability: This arises when we rely too heavily on technology to make choices on our behalf, or when we become helpless without its assistance. This undermines our freedom because it effectively drains our capacity for self-determination and resilience. This might be the most serious form of technological unfreedom, and the one most commonly discussed. We all probably have a vague sense of it happening too. Many of us feel addicted to our devices, and helpless without them. A clear example of this dependency problem is the over-reliance of people on services like Google Maps. There are many stories of people who have got into trouble by trusting the information provided to them by satellite navigation systems, even when it was contradicted by what was right before their eyes. Technology critics like Nicholas Carr argue that this is leading to cognitive degeneration (i.e. technology is actively degrading our biological mental capacities). More alarmingly, cybersecurity experts like Marc Goodman argue that it is leading to a situation of extreme vulnerability. Goodman uses the language of the ‘singularity’, beloved by technology enthusiasts, to make his point. He argues that because most technology is now networked, and because, with the rise of the internet of things, every object in the world is being slowly added to that network, everything is potentially hackable and corruptible. This is leading to a potential singularity of crime, in which the frequency and magnitude of criminal attacks will completely overwhelm us. We will never not be victims of criminal attack. If that doesn’t compromise our freedom, I don’t know what does.

These forms of technological unfreedom can arise from internal and external technologies, as well as from complementary and competitive technologies. But the potential impact is much greater as we move away from external, complementary technologies towards internal, competitive technologies. With external-complementary technologies there is always the possibility of decoupling from the technological systems that compromise our freedom. With internal-competitive technologies this becomes less possible. Since transhumanism is often thought to be synonymous with the drive toward more internalised forms of technology, and since most of the contemporary forms of internal technology are quasi-competitive in nature, you can see how the alleged paradox of transhumanism arises. We are moving down and to the right in our matrix of technological relations, and this engenders the Faustian pact outlined at the start.



Before I move on to consider ways in which this paradox can be resolved, I want to briefly return to the diagram I sketched earlier on, in which I arranged the metaphysical, political, and biological concepts of freedom. To that diagram we can now add another concept of freedom: technological freedom, i.e. the ability to make choices and decisions for oneself without interference from, domination by, or limitation by technological forces. But where exactly should this new concept of freedom be placed? Is it a distinctive type of freedom, or is it a sub-freedom of political freedom?

This may be a question of little importance to most readers, but it matters from the perspective of conceptual purity. Some people have tried to argue that technological freedom is another form of political freedom. They do so because some of the problems that technology poses for freedom are quite similar to the political problems of freedom. This is because technology is still, often, a tool used by other powerful people to manipulate, coerce and dominate. Nevertheless, people who have taken this view have also noted problems that arise when you view technological unfreedom as just another form of political unfreedom. Technological domination, for example, often doesn’t emanate from a single, discrete agent or institution, as political domination does. Technological domination is, according to some writers, ‘functionally agentless’. Something similar is true of technological coercion: it is not directly analogous to the simple interaction between the highwayman and his victim; it is more subtle and insidious. Finally, technological dependency doesn’t seem to involve anything like the traditional forms of political unfreedom. For these reasons, I think it is best to understand technological unfreedom as a distinct category, one that occasionally overlaps with the political form but is dissociable from it.




4. Dissolving the Paradox
Now that we have a much clearer understanding of the paradox (and how it might arise) we turn to the final and most important question: can the paradox be resolved? I want to close by making four arguments that respond to this question.

First, I want to argue that there is no intrinsic paradox of transhumanism. In other words, there is nothing in the transhumanist view that necessarily entails or requires that we substitute technological unfreedom for biological unfreedom. The tension between biology and technology is contingent. Go back to the two-by-two matrix I sketched in the previous section. I used this to explain the alleged paradox by arguing that the transhumanist dilemma arises from the impulse/tendency to move down and to the right in our relationships with technology, i.e. to move towards internal-competitive technologies. But that should have struck you as a pretty odd thing to say. There is no reason why transhumanists should necessarily want to move in that direction. Indeed, if anything, their preferred quadrant is the bottom-left one (i.e. the internal-complementary one). After all, they want to preserve and extend what is best about humanity, using technology to compensate for the limitations in our biology, not to completely replace us with machines (to the extent that they wish to become cyborgs or uploaded minds, they definitely want to preserve their sense of self). So they don’t necessarily embrace extreme technological dependency and vulnerability. The problem arises from the fact that moving down and to the left is less accessible than moving down and to the right. The current historical moment is one in which the most impressive technological gains are coming from artificial intelligence and robotics, the quintessential competitive technologies, and not from, say, more complementary biotechnologies. If our path to biological freedom did not force us to rely on such technologies, transhumanists would, I think, be happier. Admittedly, this is the kind of argument that will only appeal to a philosopher — those of us who love differentiating the necessary from the contingent — but it is important nonetheless.

The second argument I want to make is that there is no such thing as perfect freedom. Pure metaphysical freedom — i.e. freedom from all constraints, limitations, manipulations and interferences — is impossible. Furthermore, even if it were possible, it would not be desirable. If we are to be actors in the world, we must be subject to that world. We must be somehow affected or influenced by the causal forces in the world around us. We can never completely escape them. This is important because our sense of self and our sense of value are bound up with constraint and limitation. It is because I made particular choices at particular times that I am who I am. It is because I am forced to choose that my choices have value. If it didn’t matter what choices I made at a particular moment, if I could always rewind the clock and change what I did, this value would be lost. Nothing would really matter because everything would be revisable.

This then leads to the third argument: whenever we think about advancing the cause of freedom, we must think in terms of trade-offs, not absolutes. Since you cannot avoid all possible constraints, limitations, manipulations or interferences, you must ask yourself: which mix of those things represents the best tradeoff? It is best to view freedom as a multidimensional phenomenon, not something that can be measured or assessed along a single dimension. This is something that philosophers and political scientists have recognised for some time; it is why there are so many different concepts of freedom, each one tending to emphasise a different dimension or aspect of freedom. Consider the philosopher Joseph Raz’s theory of autonomy (which we can here deem to be equivalent to a theory of freedom).*** According to this theory, there are three conditions of freedom: (i) rationality, i.e. the ability to act for reasons in pursuit of goals; (ii) optionality, i.e. the availability of a range of valuable options; and (iii) independence, i.e. freedom from interference or domination. These conditions can be taken to define a three-dimensional space of freedom against which we can assess individual lives. The ideal life is one that has maximum rationality, optionality and independence. But it is often not possible to ensure the maximum degree of each. Being more independent, for example, often reduces the options available to you and makes some choices less rationally tractable (i.e. you are less able to identify the best means to a particular end because you stop relying on the opinions or advice of others). Furthermore, we often willingly sacrifice freedom in one domain of life in order to increase it in another, e.g. we automate our retirement savings, thereby reducing freedom at one point in time in order to increase it at a later point in time.
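The multidimensional point can be made vivid with a toy sketch. The numeric scores and the two example profiles here are entirely my invention for illustration; nothing in Raz's theory assigns cardinal values to these conditions.

```python
from dataclasses import dataclass

@dataclass
class FreedomProfile:
    rationality: float   # ability to act for reasons in pursuit of goals (0-1)
    optionality: float   # availability of a range of valuable options (0-1)
    independence: float  # freedom from interference or domination (0-1)

def dominates(a: FreedomProfile, b: FreedomProfile) -> bool:
    # a dominates b only if it scores at least as well on every dimension.
    return (a.rationality >= b.rationality
            and a.optionality >= b.optionality
            and a.independence >= b.independence)

# Two hypothetical lives: one embedded in social and technological
# networks, one fiercely self-reliant. Greater independence comes at
# the cost of optionality and rational tractability.
networked = FreedomProfile(rationality=0.8, optionality=0.9, independence=0.4)
hermit    = FreedomProfile(rationality=0.5, optionality=0.3, independence=0.9)

print(dominates(networked, hermit))  # False
print(dominates(hermit, networked))  # False
```

Since neither life dominates the other on every axis, any single ranking of the two builds in a contested weighting of the three conditions, which is just the tradeoff point restated.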

This is a long way of saying that transhumanism should be interpreted as one view of how we should trade off across the different dimensions of freedom. Transhumanists think that the biological limitations on freedom are severe: having shorter lives, less intelligence and less well-being than we otherwise might leads to diminished human flourishing. Consequently, they might argue that we ought to trade these biological limitations for technological ones: what’s a loss of privacy compared to the gain in longevity, intelligence or wellbeing? Their critics — the technological pessimists — have a different understanding of the tradeoffs. They think that biological limitations are better than technological ones: that living under a technological panopticon is a much worse fate than living under the scythe of biological decay and death.

This brings me to my final argument, which is slightly more personal in nature. For what it’s worth, I tend to sympathise with both transhumanists and technological pessimists. I think most of the transhumanist goals are commendable and desirable. I think we should probably strive to remove the various forms of biological limitation identified by transhumanists (I am being cagey here since I disagree with certain interpretations and understandings of those goals). Furthermore, I think that technology — particularly internal-complementary technologies — represents the best hope for transhumanists in this regard. At the same time, I think it is dangerous to pursue the transhumanist goal by simply plunging headlong into the latest technological innovations. We need to be selective in how we embrace technology and cognisant of the ways in which it can limit and compromise freedom. In essence, I disagree with framing the debate about technology and its impact on freedom in a simple, binary way. We shouldn’t be transhumanist cheerleaders or resolute technological pessimists. We should be something in between, perhaps: cautiously optimistic technological sceptics.

To conclude and briefly sum up: the paradox of transhumanism is intriguing. Thinking about the tension between biological freedom and technological freedom helps to clarify our ambiguous modern relationship with technology. Nevertheless, the paradox is more illusion than reality; it dissolves upon closer inspection. This is because there is no pure form of freedom: we are (and should always be) forced to live with some constraints, limitations, manipulations and interferences. What we need to do is figure out the best tradeoff or compromise.



* I have never quite understood the logic of this deal. Although this is the popular way of phrasing it, presumably the highwayman’s actual offer is ‘your money, or your life and your money’, since his ultimate goal is to take your money.


** If I were being more technically sophisticated in this discussion, I would point out that the concept of the ‘biological’ is controversial. Some people argue that certain biological categories/properties are socially constructed. The classic example might be the property of sex/gender. If you take that view of at least some biological properties, then the distinction between biological freedom and political freedom would be more blurry. If I were being even more technically sophisticated I would point out that social construction comes in different forms and not all of these are threatening to the distinction I try to draw in the text. Specifically, I would argue that most of the biological limitations that preoccupy transhumanism are causally socially constructed rather than constitutively socially constructed. 


*** There is, arguably, a technical distinction between freedom and autonomy. Following the work of Gerald Dworkin we can argue that freedom is a local property that applies to particular decisions, whereas autonomy is a global property that applies to an extended set of decisions. The two concepts are ultimately related.