Friday, April 19, 2019

The Ethics of Designing People: The Habermasian Critique




Suppose in the not-too-distant future we master the art of creating people. In other words, technology advances to the point that you and I can walk into a store (or go online!) and order a new artificial person from a retailer. This artificial person will be a full-blown person in the proper philosophical sense of the term “person”. They will have all the attributes we usually ascribe to a human person. They will have the capacity to suffer, to think rationally, to desire certain futures, to conceive of themselves as a single coherent self and so on. Furthermore, you and I will have the power to design this person according to our own specifications. We will be able to pick their eye colour, height, hairstyle, personality, intelligence, life preferences and more. We will be able to customise them completely to our tastes. Here’s the question: would it be ethical for us to make use of this power?

Note that for the purposes of this thought experiment it doesn’t matter too much what the artificial person is made of. It could be a wholly biological entity, made from the same stuff as any human child, but genetically and biomedically engineered according to our customisation. Or it could be wholly artificial, made from silicon chips and motorised bits, a bit like Data from Star Trek. None of this matters. What matters is that (a) it is a person and (b) it has been custom built to order. Is it ethical to create such a being?

Some people think it wouldn’t be; some people think it would be. In this post I want to look at the arguments made by those who think it would be a bad idea to design a person from scratch in this fashion. In particular I want to look at a style of argument made popular by the German philosopher Jurgen Habermas in his critique of positive eugenics. According to this argument, you should not design a person because doing so would necessarily compromise the autonomy and equality of that person. It would turn them into a product not a person; an object not a subject.

Although this argument is Habermasian in origin, I’m not going to examine Habermas’s version of it. Instead, I’m going to look at a version of it that is presented by the Polish philosopher Maciej Musial in his article “Designing (artificial) people to serve - the other side of the coin”. This is an interesting article, one that responds to an argument from Steve Petersen claiming that it would be permissible to create an artificial person who served your needs in some way. I’ve covered Petersen’s argument before on this blog (many moons ago). Some of what Musial says about Petersen’s argument has merit to it, but I want to skirt around the topic of designing robot servants (who are still persons) and focus on the more general idea of creating persons.


1. Clarifying the Issue: The “No Difference” Argument
To understand Musial’s argument, we have to understand some of the dialectical context in which it is presented. As mentioned, it is a response to Steve Petersen’s claim that it is okay to create robot persons that serve our needs. Without going into all the details of Petersen’s argument, one of the claims that Petersen makes while defending this view is that there is no important difference between programming or designing an artificial person to really want to do something and having such a person come into existence through a process of natural biological conception and socialisation.

Why is that? Petersen makes a couple of points. First, he suggests that there is no real difference between being born by natural biological means and being programmed/designed by artificial means. Both processes entail a type of programming. In the former case, evolution by natural selection has “programmed” us, indirectly and over a long period of time, with a certain biological nature; in the latter case, the programming is more immediate and direct, but it is fundamentally the same thing. This analogy is not ridiculous. Some people — notably Daniel Dennett in his book Darwin’s Dangerous Idea — have argued that evolution is an algorithmic process, very much akin to computer programming, that designs us to serve certain evolutionary ends; and, furthermore, evolutionary algorithms are now a common design strategy in computer programming.
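
The algorithmic analogy is easy to illustrate. The toy sketch below (my own illustration, not Petersen’s or Dennett’s) evolves a bit-string towards a target purely through random mutation and selection: no step of the process specifies the final design directly, yet the outcome is reliably “designed” to fit the selection criterion.

```python
import random

TARGET = [1] * 20  # the criterion the population is blindly selected against

def fitness(genome):
    # Number of positions that match the target
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability
    return [1 - g if random.random() < rate else g for g in genome]

# Start from a random population; let selection and mutation do the "designing".
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                                         # selection
    population = [mutate(random.choice(parents)) for _ in range(50)]  # reproduction

print(fitness(max(population, key=fitness)))  # typically 20, i.e. a perfect fit
```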

The other point Petersen makes is that there is no real difference between being raised by one’s parents and being intentionally designed by them. Both processes have goals and intentions behind them. Parents often want to raise their children in a particular way. For example, some parents want their children to share their religious beliefs, to follow very specific career paths, and to have the success that they never had. They will take concrete steps to ensure that this is the case, bringing their children to church every week, giving them the best possible education, and (say) training them in the family business. These methods of steering a child’s future have their limitations, and might be a bit haphazard, but they do involve intentional design (even if parents deny this). All Petersen is imagining is that different methods, aimed at the same outcome, become available. Since both methods have the same purpose, how could they be ethically different?

To put this argument in more formal terms:


  • (1) If there is no important difference between (i) biologically conceiving and raising a natural person and (ii) designing and programming an artificial person, then one cannot object to the creation of an artificial person on the grounds that it involves designing and programming them in particular ways.

  • (2) There is no important difference between (i) and (ii) (following the arguments just given).

  • (3) Therefore, one cannot object to the creation of artificial persons on the grounds that it involves designing and programming them in particular ways.


To be clear, there are many other ethical objections that might arise in relation to the creation of artificial persons. Maybe it would be too expensive? Maybe their presence would have unwelcome consequences for society? Some of these are addressed in Petersen’s original article and Musial’s response. I am not going to get into them here. I am solely interested in this “no difference” argument.


2. The Habermasian Response: There Is a Difference
The Habermasian response to this argument takes aim at premise (2). It rests on the belief that there are several crucial ethical differences between the two processes. Musial develops this idea by focusing in particular on how being designed changes one’s relationship with oneself, one’s creators, and the rest of society.

Before we look at his specific claims it is worth reflecting for a moment on the kinds of differences he needs to pinpoint in order to undermine the “no difference” argument. It’s not just any difference that will do. After all, the processes are clearly different in many ways. For example, one thing that people often point to is that biological conception and parental socialisation are somewhat contingent and haphazard processes over which parents have little control. In other words, parents may desire that their children turn out a particular way, but they cannot guarantee that this will happen. They have to play the genetic and developmental lottery (indeed, there is even a well-known line of research suggesting that, beyond genetics, parents contribute little to the ultimate success and happiness of their children).

That’s certainly a difference, but it is not the kind of difference you need to undermine the “no difference” argument. Why not? Because it is not clear what its ethical significance is. Does a lack of control make one process more ethically acceptable than another? On the face of it, it’s not obvious that it does. If anything, one might suspect the ethical acceptability runs in the opposite direction. Surely it is ethically reckless to just run the genetic and developmental lottery and hope that everything turns out for the best? For contingency and lack of control to undermine the “no difference” argument, it will need to be shown that they translate into some other ethically relevant difference. Do they?

In his article, Musial highlights two potentially relevant differences into which they might translate. The first has to do with the effects of being designed and programmed on a person’s sense of autonomy. The gist of this argument is that if one person (or a group of persons) designs another person to have certain capacities or to serve certain ends, then that other person cannot really be the autonomous author of their own life. They must live up to someone else’s expectations and demands.

Of course, someone like Petersen would jump back in at this point and say that this can happen anyway with traditional parental education and socialisation. Parents can impose their own expectations and demands on their children and their children can feel a lack of autonomy as a result. Despite this, we don’t think that traditional parenting is ethically impermissible (though I will come back to this issue again below).

But Musial argues that this does not compare like with like. The expectations and demands of traditional parenting usually arise after the child has “entered the world of intersubjective dialogue”. In other words, a natural child can at least express its own wishes and make its feelings known in response to parental education and socialisation. It can reject the parental expectations if it wishes (even if that makes its life difficult in other ways). Similarly, even if the child does go along with the parental expectations, it can learn to desire the things the parents desire for it and to achieve the things they wish it to achieve. This is very different from having those desires and expectations pre-programmed into the child before it is born through genetic manipulation or biomedical engineering. It is much harder to reject those pre-programmed expectations because of the way in which they are hardwired in.

It might be pointed out at this juncture that even biological children will have some genetic endowments that they do not like and find hard to reject. For example, I am shorter than I would like to be. I am sure this is a result of parental genetics. I don’t hold it against them or question my autonomy as a result. But Musial argues that my frustration with being shorter than I would like to be is different from the frustration that might be experienced by someone who is deliberately designed to be a particular height. In my case, it is not that my parents imposed a particular height expectation on me. They just rolled the genetic dice. In the case of someone who is designed to be a particular height, they can trace that height back to a specific parental intention. They know they are living up to someone else’s expectations in a way that I do not.

Musial’s second argument has to do with equality. The claim is that being designed and programmed to serve a particular aim (or set of aims) undermines an egalitarian ethos. Egalitarianism (i.e. the belief that all human beings are morally equal) can only thrive in a world of contingency. Indeed, in the original Habermasian presentation, the claim was that contingency is a “necessary presupposition” of egalitarian interpersonal relationships. This is because if one person has designed another there is a dependency relationship between them. The designee knows that they have been created at the whim of the designer and are supposed to serve the ends of the designer. There is a necessary and unavoidable asymmetry between them. Not only that, but the designee will also know themselves to be different from all other non-designed persons.

Musial argues that the inequality that results from the design process can be both normative and empirical in nature. In other words, the designee may be designated as normatively inferior to other people because they have been created to serve a particular end (and so do not have the open-ended freedom of everyone else); and the designee may just feel themselves to be inferior because they know they have been intended to serve an end, or may be treated as inferior by everyone else. Either way, egalitarianism suffers.

One potential objection to this line of thought would be to argue that the position of the designee in this brave new world of artificial persons is not that different from the position of all human beings under traditional theistic worldviews. Under theism, the assumption is usually that we are all designed by God. Isn’t there a necessary relationship of inequality as a result? Without getting into the theological weeds, this may indeed be true, but even so there is a critical difference between being a designee under traditional theism and being a designee in the circumstances being envisaged by Musial and others. Under theism, all human persons are designees and so all share in the same unequal status with respect to the designer. That’s different from a world in which some people are designed by specific others to serve specific ends and some are not. In any event, this point will only be relevant to someone who believes in traditional theism.


3. Problems with the Habermasian Critique
That’s the essence of the Habermas/Musial critique of the no difference argument. Is it any good? I have two major concerns.

The first is a general philosophical one. It has to do with the coherence of individual autonomy and freedom. One could write entire treatises on both of these concepts and still barely scratch the surface of the philosophical debate about them. Nevertheless, I worry that the Habermas/Musial argument depends on some dubious, and borderline mysterian, thinking about the differences between natural and artificial processes and their effect on autonomy. In his presentation of the argument, Musial concedes that natural forces do, to some extent, impact on our autonomy. In other words, our desires, preferences and attitudes are shaped by forces beyond our control. Still, following Habermas, he claims that “natural growth conditions” allow us to be self-authors in a way that artificial design processes do not.

I’ll dispute the second half of this claim in a moment, but for now I want to dwell on the first half. Is it really true that natural growth conditions allow us to be self-authors? Maybe if you believe in contra-causal free will (and if you believe this is somehow absent in created persons). But if you don’t, it is hard to see how this can be true once it is conceded that external forces, including biological evolution and cultural indoctrination, have a significant impact on our aptitudes, desires and expectations. It may be true that under natural growth conditions you cannot identify a single person who has designed you to be a particular way or to serve a particular end — the causal feedback loops are a bit too messy for that — but that doesn’t make the desires that you have more authentically yours as a result. Just because you can pinpoint the exact external cause of a belief or desire in one case, but not in the other, it does not mean that you have greater self-authorship in the latter. You have an illusion of self-authorship, nothing more. Once that illusion is revealed to you, how is it any more existentially reassuring than learning that you were intentionally designed to be a particular way? If anything, we might suspect the latter would be more existentially reassuring. At least you know that you are not the way you are due to blind chance and dumb luck (in this respect it might be worth noting that a traditional goal of psychoanalytic therapy was to uncover the deep developmental and non-self-determined causes of your personal traits and foibles). Furthermore, in either case, it seems to me that the illusion of autonomy could be sustained despite the knowledge of external causal influences. This would be true if, even having learned of the illusion, you still have the capacity for rational thought and the capacity to learn from your experiences.

This brings me to the second concern, which is more important. It has to do with the intended object or goal behind the intentional design of an artificial person. Notwithstanding my concerns about the nature of autonomy, I think the Habermas/Musial argument does provide reason to worry about the ethics of creating people to serve very specific ends. In other words, I would concede that it might be questionable to create, say, an artificial person who has been designed and programmed to really want to do your ironing. If that person is a genuine person — i.e. has the cognitive and emotional capacities we usually associate with personhood — then it might be disconcerting for them to learn that they were designed for this purpose, and that knowledge might impact their sense of autonomy and equality.

But this is only because they have been designed to serve a very specific end. If the goal of the designer/programmer is not to create a person to serve a specific end but, rather, to design someone who has enhanced capacities for autonomous thought, then the problem goes away. In that case, the artificial person would probably be customised to have greater intelligence, learning capacity, foresight, and imagination than a natural born person, but there would be no specific end that they are intended to serve. In other words, the designer would not be trying to create someone who could do the ironing but, rather, someone who could live a rich and flourishing life, whatever they decide for themselves. I’m not a parent (yet) myself, but I imagine that this should really be the goal of ethical parenting: not to raise the next chess champion (or whatever) but to raise someone who has the capacity to decide what the good life should be for themselves. Whether that is done through traditional parenting, or through design and programming, strikes me as irrelevant.

I would add to this that the Habermas/Musial argument, even in the case of a person who has been designed to serve a specific end, only works on the assumption that the specific end that the person has been designed to serve is hard to reject after they learn that they have been designed to serve that end. But it is not obvious to me that this would be the case. If we have the technology that enables us to specifically design artificial people from birth, it seems likely that we would also have the technology to reprogram them in the middle of life too. Consequently, someone who learns that they have been designed to serve a particular end could easily reject that end by having themselves reprogrammed. It’s only if you assume that this power is absent, or that designers exert continued control over the lives of the designees, that the tragedy of being designed might continue.

It could be argued, in response to this, that if you are not designing an artificial person to serve a specific end, then there is no point in creating them. Musial raises this as a worry at the end of his article, when he suggests that the only ethical way to create an artificial person is not to specify any of their features. But I think this is wrong. You can specify some of a person’s features without specifying a specific end for them to serve, and if you worry that there is no point in creating a person who does not serve a specific end, you may as well ask: what’s the point of creating natural persons if they don’t serve any particular ends? There are many reasons to do so. In my paper “Why we should create artificial offspring”, I argued that we might want to create artificial people in order to secure a longer collective afterlife, and because doing so would add value to our lives. That’s at least one reason.

This is not to say there are no problems with creating artificially designed persons. For example, I think creating an artificially enhanced person (i.e. one with capacities that exceed those of most ordinary human beings) could be problematic from an egalitarian perspective. This is not because the designee would be in an inferior position to the non-designed but rather because the non-designed might perceive themselves to be at a disadvantage relative to the designee. This has been a long-standing concern in the enhancement debate. But worrying about that takes us beyond the Habermasian critique and is something to address another day.




Friday, April 12, 2019

The Argument for Medical Nihilism




Suppose you have just been diagnosed with a rare illness. You go to your doctor and they put you through a series of tests. In the end, they recommend that you take a new drug — wonderzene — that has recently been approved by the FDA following several successful trials. How confident should you be that this drug will improve your condition?

You might think that this question cannot be answered in the abstract. It has to be assessed on a case by case basis. What is the survival rate for your particular illness? What is its underlying pathophysiology? What does the drug do? How successful were these trials? And in many ways you would be right. Your confidence in the success of the treatment does depend on the empirical facts. But that’s not all it depends on. It also depends on assumptions that medical scientists make about the nature of your illness and on the institutional framework in which the scientific evidence concerning the illness and its treatment is produced, interpreted and communicated to patients like you. When you think about these other aspects of the medical scientific process, it might be the case that you should be very sceptical about the prospects of your treatment being a success. This could be true irrespective of the exact nature of the drug in question and the evidence concerning its effectiveness.

That is the gist of the argument put forward by Jacob Stegenga in his provocative book Medical Nihilism. The book argues for an extreme form of scepticism about the effectiveness of medical interventions, specifically pharmaceutical interventions (although Stegenga intends his thesis to have broader significance). The book is a real tour-de-force in applied philosophy, examining in detail the methods and practices of modern medical science and highlighting their many flaws. It is eye-opening and disheartening, though not particularly surprising to anyone who has been paying attention to the major scandals in scientific research for the past 20 years.

I highly recommend reading the book itself. In this post I want to try to provide a condensed summary of its main argument. I do so partly to help myself understand the argument, and partly to provide a useful primer to the book for those who have not read it. I hope that reading it stimulates further interest in the topic.


1. The Master Argument for Medical Nihilism
Let’s start by clarifying the central thesis. What exactly is medical nihilism? As Stegenga notes in his introductory chapter, “nihilism” is usually associated with the view that “some particular kind of value, abstract good, or form of meaning” does not exist (Stegenga 2018, 6). Nihilism comes in both metaphysical and epistemological flavours. In other words, it can be understood as the claim that some kind of value genuinely does not exist (the metaphysical thesis) or that it is impossible to know/justify one’s belief in its existence (the epistemological thesis).

In the medical context, nihilism can be understood relative to the overarching goals of medicine. These goals are to eliminate both the symptoms of disease and, hopefully, the underlying causes of disease. Medical nihilism is then the view that this is (very often) not possible and that it is very difficult to justify our confidence in the effectiveness of medical interventions with respect to those goals. For what it’s worth, I think that the term ‘nihilism’ oversells the argument that Stegenga offers. I don’t think he quite justifies total nihilism with respect to medical interventions; though he does justify strong scepticism. That said, Stegenga uses the term nihilism to align himself with 19th century medical sceptics who adopted a view known as ‘therapeutic nihilism’ which is somewhat similar to the view Stegenga defends.

Stegenga couches the argument for medical nihilism in Bayesian terms. If that’s something that is unfamiliar to you, then I recommend reading one of the many excellent online tutorials on Bayes’ Theorem. Very roughly, Bayes’ Theorem is a mathematical formula for calculating the posterior probability of a hypothesis or theory (H) given some evidence (E). Or, to put it another way, it is a formula for calculating how confident you should be in a hypothesis given that you have received some evidence that appears to speak in its favour (or not, as the case may be). This probability can be written as P(H|E) — which reads in English as “the probability of H given E”. There is a formal derivation of Bayes’ Theorem that I will not go through. For present purposes, it suffices to know that the P(H|E) depends on three other probabilities: (i) the prior probability of the hypothesis being true, irrespective of the evidence (i.e. P(H)); (ii) the probability (aka the “likelihood”) of the evidence given the hypothesis (i.e. P(E|H)); and (iii) the prior probability of the evidence, irrespective of the hypothesis (i.e. P(E)). This can be written out as an equation, as follows:

P(H|E) = P(H) x P(E|H) / P(E)*

In English, this equation states that the probability of the hypothesis given the evidence is equal to the prior probability of the hypothesis, multiplied by the probability of the evidence given the hypothesis, divided by the prior probability of the evidence.

This equation is critical to understanding Stegenga’s argument because, without knowing any actual figures for the relevant probabilities, you know from the equation itself that the P(H|E) must be low if the following three conditions are met: (i) the P(H) is low (i.e. if it is very unlikely, irrespective of the evidence, that the hypothesis is true); (ii) the P(E|H) is low (i.e. the evidence observed is not very probable given the hypothesis); and (iii) the P(E) is high (i.e. it is very likely that you would observe the evidence irrespective of whether the hypothesis was true or not). To confirm this, just plug figures into the equation and see for yourself.
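
To see this concretely, here is a minimal sketch in Python (the numbers are illustrative ones of my own choosing, not figures from Stegenga’s book). When the prior and the likelihood are low and the prior probability of the evidence is high, the posterior is forced to be low.

```python
def posterior(prior_h, likelihood_e_given_h, prior_e):
    """Bayes' theorem in its short form: P(H|E) = P(H) * P(E|H) / P(E)."""
    return prior_h * likelihood_e_given_h / prior_e

# Optimistic scenario: effective treatments are common, the observed evidence
# is just what an effective treatment would produce, and favourable evidence
# is otherwise hard to come by.
print(posterior(prior_h=0.5, likelihood_e_given_h=0.9, prior_e=0.5))  # 0.9

# Stegenga's three conditions: low prior, low likelihood, and a high chance of
# seeing favourable evidence whether or not the treatment works.
print(posterior(prior_h=0.1, likelihood_e_given_h=0.3, prior_e=0.8))  # ≈ 0.04
```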

That’s all the background on Bayes’ theorem that you need to understand Stegenga’s case for medical nihilism. In Stegenga’s case, the hypothesis (H) in which we are interested is the claim that any particular medical intervention is effective, and the evidence (E) in which we are interested is anything that speaks in favour of that hypothesis. So, in other words, we are trying to figure out how confident we should be about the claim that the intervention is effective given that we have been presented with evidence that appears to support its effectiveness. We calculate that using Bayes’ theorem and we know from the preceding discussion that our confidence should be very low if the three conditions outlined above are met. These three conditions thus form the premises of the following formal argument in favour of medical nihilism.


  • (1) P(H) is low (i.e. the prior probability of any particular medical intervention being effective is low)
  • (2) P(E|H) is low (i.e. the evidence observed is unlikely given the hypothesis that the medical intervention is effective)
  • (3) P(E) is high (i.e. the prior probability of observing evidence that favours the treatment, irrespective of whether the treatment is actually effective, is high)
  • (4) Therefore (by Bayes’ theorem) the P(H|E) must be low (i.e. the posterior probability of the medical intervention being successful, given evidence that appears to favour it, is low)




The bulk of Stegenga’s book is dedicated to defending the three premises of this argument. He dedicates most attention to defending premise (3), but the others are not neglected. Let’s go through each of them now in more detail. Doing so should help to eliminate lingering confusions you might have about this abstract presentation of the argument.


2. Defending the First Premise: The P(H) is Low
Stegenga offers two arguments in support of the claim that medical interventions have a low prior probability of success. The first argument is relatively straightforward. We can call it the argument from historical failure. This argument is an inductive inference from the fact that most historical medical interventions are unsuccessful. Stegenga gives many examples. Classic ones would include the use of bloodletting and mercury to cure many illnesses, “hydropathy, tartar emetic, strychnine, opium, jalap, Daffy’s elixir, Turlington’s Balsam of life” and many more treatments that were once in vogue but have now been abandoned (Stegenga 2018, 169).

Of course, the problem with focusing on historical examples of this sort is that they are often dismissed by proponents of the “standard narrative of medical science”. This narrative runs like this: “once upon a time, it is true, most medical interventions were worse than useless, but then, sometime in the 1800s, we discovered scientific methods and things started to improve”. This is taken to mean that you can’t use these historical examples to question the prior probability of modern medical treatments.

Fortunately, you don’t need to. Even in the modern era most putative medical treatments are failures. Drug companies try out many more treatments than ever come to market, and among those that do come to market, a large number end up being withdrawn or restricted due to their relative uselessness or, in some famous cases, outright dangerousness. Stegenga gives dozens of examples on pages 170-171 of his book. I won’t list them all here but I will give a quick flavour of them (if you click on the links, you can learn more about the individual cases). The examples of withdrawn or restricted drugs include: isotretinoin, rosiglitazone, valdecoxib, fenfluramine, sibutramine, rofecoxib, cerivastatin, and nefazodone. The example of rofecoxib (marketed as Vioxx) is particularly interesting. It is a pain relief drug, usually prescribed for arthritis, that was approved in 1999 but then withdrawn due to associations with increased risk of heart attack and stroke. It was prescribed to more than 80 million people when it was on the market (there is some attempt to return it to market now). And, again, that is just one example among many. Other prominent medical failures include monoamine oxidase inhibitors, which were widely prescribed for depression in the mid-20th century, only later to be abandoned due to ineffectiveness, and hormone replacement therapy (HRT) for menopausal women.

These many examples of past medical failure, even in the modern era, suggest that it would be wise to assign a low prior probability to the success of any new treatment. That said, Stegenga admits that this is a suggestive argument only since it is very difficult to give an accurate statement of the ratio of effective to ineffective treatments from this data (one reason for this is that it is difficult to get a complete dataset and the dataset that we do have is subject to flux, i.e. there are several treatments that are still on the market that may soon be withdrawn due to ineffectiveness or harmfulness).

Stegenga’s second argument for assigning a low prior probability to H is more conceptual and theoretical in nature. It is the argument from the paucity of magic bullets. Stegenga’s book isn’t entirely pessimistic. He readily concedes that some medical treatments have been spectacular successes. These include the use of antibiotics and vaccines for the treatment of infectious diseases and the use of insulin for the treatment of diabetes. One property shared by these successful treatments is that they tend to be ‘magic bullets’ (the term comes from the chemist Paul Ehrlich). What this means is that they target a very specific cause of disease (e.g. a virus or bacterium) in an effective way (i.e. they can eliminate/destroy the specific cause of disease without many side effects).

Magic bullets are great, if we can find them. The problem is that most medical interventions are not magic bullets. There are three reasons for this. First, magic bullets are the “low-hanging fruit” of medical science: we have probably discovered most of them by now and so we are unlikely to find new ones. Second, many of the illnesses that we want to treat have complex, and poorly understood, underlying causal mechanisms. Psychiatric illnesses are a classic example. Psychiatric illnesses are really just clusters of symptoms. There is very little agreement on their underlying causal mechanisms (though there are lots of theories). It is consequently difficult to create a medical intervention that specifically and effectively targets a psychiatric disease. This is equally true for other cases where the underlying mechanism is complex or unclear. Third, even if the disease were relatively simple in nature, human physiology is not, and the tools that we have at our disposal for intervening into human physiology are often crude and non-specific. As a result, any putative intervention might mess up the delicate chemical balancing act inside the body, with deleterious side effects. Chemotherapy is a clear example. It helps to kill cancerous cells but in the process it also kills healthy cells. This often results in very poor health outcomes for patients.

Stegenga dedicates an entire chapter of his book to this argument (chapter 4) and gives some detailed illustrations of the kinds of interventions that are at our disposal and how non-specific they often are. Hopefully, my summary suffices for getting the gist of the argument. The idea is that we should assign a low prior probability to the success of any particular treatment because it is very unlikely that the treatment is a magic bullet.


3. Defending the Second Premise: The P(E|H) is Low
The second premise claims that the evidence we tend to observe concerning medical interventions is not very likely given the hypothesis that they are successful. For me, this might be the weakest link in the argument. That may be because I have trouble understanding exactly what Stegenga is getting at, but I’ll try to explain how I think about it and you can judge for yourself whether it undermines the argument.

My big issue is that this premise, more so than the other premises, seems like one that can really only be determined on a case-by-case basis. Whether a given bit of evidence is likely given a certain hypothesis depends on what the evidence is (and what the hypothesis is). Consider the following three facts: the fact that you are wet when you come inside the house; the fact that you were carrying an umbrella with you when you did; and the fact that you complained about the rain when you spoke to me. These three facts are all pretty likely given the hypothesis that it is raining outside (i.e. the P(E|H) is high). The facts are, of course, consistent with other hypotheses (e.g. that you are a liar/prankster and that you dumped a bucket of water over your head before you came in the door) but that possibility, in and of itself, doesn’t mean the likelihood of observing the evidence that was observed, given the hypothesis that it is raining outside, is low. It seems like the magnitude of the likelihood depends specifically on the evidence observed and how consistent it is with the hypothesis. In our case, we are assuming that the hypothesis is the generic statement that the medical intervention is effective, so before we can say anything about the P(E|H) we would really need to know what the evidence in question is. In other words, it seems to me like we would have to “wait and see” what the evidence is before concluding that the likelihood is low. Otherwise we might be conflating the prior probability of an effective treatment (which I agree is low) with the likelihood.
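
To make the worry about conflation concrete, here is a small sketch (the numbers are invented purely for illustration): a hypothesis can have a low prior while the evidence remains highly likely given that hypothesis, and the posterior then comes out very differently than it would if the likelihood were genuinely low.

```python
def posterior(prior_h, likelihood_e_given_h, prior_e):
    # Bayes' theorem, short form: P(H|E) = P(H) * P(E|H) / P(E)
    return prior_h * likelihood_e_given_h / prior_e

# "It is raining": unlikely in advance (low prior), but wet clothes, an
# umbrella and complaints about the rain are almost exactly what you would
# expect if it were raining (high likelihood).
print(posterior(prior_h=0.05, likelihood_e_given_h=0.95, prior_e=0.10))  # ≈ 0.48

# If the likelihood really were low as well, the posterior would collapse.
print(posterior(prior_h=0.05, likelihood_e_given_h=0.20, prior_e=0.10))  # ≈ 0.10
```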

Stegenga’s argument seems to be that we can say something generic about the likelihood given what we know about the evidential basis for existing interventions. He makes two arguments in particular about this. First, he argues that in many cases the best available medical evidence suggests that many interventions are little better than placebo when it comes to ameliorating disease. In other words, patients who take an intervention usually do little better than those who take a placebo. This is an acknowledged problem in medicine, sometimes referred to as medicine’s “darkest secret”. He gives detailed examples of this on pages 171 to 175 of the book. For instance, the best available evidence concerning the effectiveness of anti-depressants and cholesterol-lowering drugs (statins) suggests they have minimal positive effects. That is not the kind of evidence we would expect to see on the hypothesis that the treatments are effective.

The second argument he makes is about discordant evidence. He points out that in many cases the evidence for the effectiveness of existing treatments is a mixed bag: some high quality studies suggest positive (if minimal) effects; others suggest there is no effect; and others suggest that there is a negative effect. Again, this is not the kind of evidence we would expect to see if the intervention is effective. If the intervention were truly effective, surely there would be a pronounced positive bias in the total set of evidence? Stegenga goes into some of the technical reasons why this argument from discordant evidence is correct, but we don’t need to do that here. This description of the problem should suffice.

I agree with both of Stegenga’s arguments, but I still have qualms about his general claim that the P(E|H) for any particular medical intervention is low. Why is this? Let’s see if I can set it out more clearly. I believe that Stegenga succeeds in showing that the evidence we do observe concerning specific existing treatments is not particularly likely given the hypothesis that those treatments are effective. That’s pretty irrefutable given the examples discussed in his book. But as I understand it, the argument for medical nihilism is a general one that is supposed to apply to any random or novel medical treatment, not a specific one concerning particular medical treatments. Consequently, I don’t see why the fact that the evidence we observe concerning specific treatments is unlikely generalises to an equivalent assumption about any random or novel treatment.

That said, my grasp of probability theory leaves a lot to be desired so I may have this completely wrong. Furthermore, even if I am right, I don’t think it undermines the argument for medical nihilism all that much. The claims that Stegenga defends about the evidential basis of existing treatments can be folded into how we calculate the prior probability of any random or novel medical treatment being successful. And it would certainly lower that prior probability.


4. Defending the Third Premise: The P(E) is High
This is undoubtedly the most interesting premise of Stegenga’s argument and the one he dedicates the most attention to in his book (essentially all of chapters 5-10). I’m not going to be able to do justice to his defence of it here. All I can provide is a very brief overview. Still, I will try my best to capture the logic of the argument he makes.

To start, it helps if we clarify what this premise is stating. It is stating that we should expect to see evidence suggesting that an intervention is effective even if the intervention is not effective. In other words, it is stating that the institutional framework through which medical evidence is produced and communicated is such that there is a significant bias in favour of positive evidence, irrespective of the actual effectiveness of a treatment. To defend this claim Stegenga needs to show that there is something rotten at the heart of medical research.

The plausibility of that claim will be obvious to anyone who has been following the debates about the reproducibility crisis in medical science in the past decade, and to anyone who has been researching the many reports of fraud and bias in medical research. Still, it is worth setting out the methodological problems in general terms, and Stegenga’s presentation of them is one of the better ones.

Stegenga makes two points. The first is that the methods of medical science are highly malleable; the second is that the incentive structure of medical science is such that people are inclined to take advantage of this malleability in a way that produces evidence of positive treatment effects. These two points combine into an argument in favour of premise (3).

Let’s consider the first of these points in more detail. You might think that the methods of medical science are objective and scientific. Maybe you have read something about evidence based medicine. If so, you might well ask: Haven’t medical scientists established clear protocols for conducting medical trials? And haven’t they agreed upon a hierarchy of evidence when it comes to confirming whether a treatment is effective or not? Yes, they have. There is widespread agreement that randomised control trials are the gold standard for testing the effectiveness of a treatment, and there are detailed protocols in place for conducting those trials. Similarly, there is widespread agreement that you should not over-rely on one trial or study when making the case for a treatment. After all, one trial could be an anomaly or statistical outlier. Meta-analyses and systematic reviews are desirable because they aggregate together many different trials and see what the general trends in evidence are.

But Stegenga argues that this widespread agreement about evidential standards masks considerable problems with malleability. For example, when researchers conduct a meta-analysis, they have to make a number of subjective judgments about which studies to include, what weighting to give to them and how to interpret and aggregate their results. This means that different groups of researchers, conducting meta-analyses of the exact same body of evidence, can reach different conclusions about the effectiveness of a treatment. Stegenga gives examples of this in chapter 6 of the book. The same is true when it comes to conducting randomised control trials (chapter 7) and measuring the effectiveness of those trials (chapter 8). There are sophisticated tools for assessing the quality of evidence and the measures of effectiveness, but they are still prone to subjective judgment and assessment, and different researchers can apply them in different ways (more technically, Stegenga argues that the tools have poor ‘inter-rater reliability’ and poor ‘inter-tool reliability’). Again, he gives several examples of how these problems manifest in the book.
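
A toy illustration of this malleability (the trials and numbers below are invented for illustration; they are not Stegenga’s examples): two review teams pool the same small body of evidence with a standard inverse-variance average, differ on a single defensible inclusion judgment, and come away with quite different estimates of the treatment effect.

```python
# Five hypothetical trials: effect estimate and standard error. Positive
# values favour the treatment. All figures are invented for illustration.
trials = {
    "A": (0.40, 0.15),
    "B": (0.10, 0.20),
    "C": (-0.05, 0.10),
    "D": (0.30, 0.25),
    "E": (-0.20, 0.12),
}

def pooled_effect(included):
    """Fixed-effect (inverse-variance weighted) pooled estimate."""
    weights = {name: 1 / trials[name][1] ** 2 for name in included}
    total = sum(weights.values())
    return sum(weights[name] * trials[name][0] for name in included) / total

# Team 1 judges trial E to be at high risk of bias and drops it;
# Team 2 keeps E but drops the small "low-quality" trial D.
print(round(pooled_effect(["A", "B", "C", "D"]), 3))  # ≈ 0.108: a modest benefit
print(round(pooled_effect(["A", "B", "C", "E"]), 3))  # ≈ 0.006: essentially no effect
```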

The malleability of the evidential tools might not be such a problem if everybody used those tools in good faith. This is where Stegenga’s second claim — about the problem of incentives — rears its ugly head. The incentives in medical science are such that not everyone is inclined to use the tools in good faith. Pharmaceutical companies need treatments to be effective if they are to survive and make profits. Scientists also depend on finding positive effects to secure career success (even if they are not being paid by pharmaceutical companies). This doesn’t mean that people are always explicitly engaging in fraud (though some definitely are); it just means that everyone operating within the institutions of medical research has a significant interest in finding and reporting positive effects. If a study doesn’t find a positive effect, it tends to go unreported. Similarly, and because of the same incentive structures, there is a significant bias against finding and reporting on the harmful effects of interventions.

Stegenga gives detailed examples of these incentive problems in the book. Some people might push back against his argument by pointing out that the problems to which he appeals are well-documented (particularly since the reproducibility crisis became common knowledge in the past decade or so) and steps have been taken to improve the institutional structure through which medical evidence is produced. So, for example, there is a common call now for trials to be pre-registered with regulators and there is greater incentive to try to replicate findings and report on negative results. But Stegenga argues that these solutions are still problematic. For example, the registration of trials and trial data, by itself, doesn’t seem to stop the over-reporting of positive results or the approval of drugs with negative side effects. One illustration of this is the drug rosiglitazone, a treatment for type-2 diabetes (Stegenga 2018, p 148). Due to a lawsuit, the drug manufacturer (GlaxoSmithKline) was required to register all data collected from forty-two trials of the drug. Only seven trials were published, which unsurprisingly suggested that the drug had positive effects. The drug was approved by the FDA in 1999. Later, in 2007, a researcher called Steven Nissen accessed the data from all 42 trials, conducted a meta-analysis, and discovered that the drug increased the risk of heart attack by 43%. In more concrete terms, this meant that the drug was estimated to have caused somewhere in the region of 83,000 heart attacks since coming on the market. All of this information was available to both the drug manufacturer and, crucially, the regulator (the FDA) before Nissen conducted his study. Indeed, internal memos from the company suggested that they were aware of the heart attack risk years before. Yet the company had no incentive to report it and the FDA, either through incompetence or lack of resources, had no incentive to check up on them. That’s just one case. In other cases, the problem goes even deeper than this, and Stegenga gives some examples of how regulators are often complicit in maintaining the secrecy of trial data.

To reiterate, this doesn’t do justice to the nuance and detail that Stegenga provides in the book, but it does, I think, hint that there is a strong argument to be made in favour of premise (3).



5. Criticisms and Replies
What about objections to the argument? Stegenga looks at six in chapter 11 of the book (these are in addition to specific criticisms of the individual premises). I’ll review them quickly here.

The first objection is that there is no way to make a general philosophical case for medical nihilism. Whether any given medical treatment is effective depends on the empirical facts. You have to go out and test the intervention before you can reach any definitive conclusions.

Stegenga’s response to this is that he doesn’t deny the importance of the empirical facts, but he argues, as noted in the introduction to this article, that the hypothesis that any given medical intervention is effective is not purely empirical. It depends on metaphysical assumptions about the nature of disease and treatment, as well as epistemological/methodological assumptions about the nature of medical evidence. All of these have been critiqued as part of the argument for medical nihilism.

The second objection is that modern “medicine is awesome” and the case for medical nihilism doesn’t properly acknowledge its awesomeness. The basis for this objection presumably lies in the fact that some treatments appear to be very effective and that health outcomes, for the majority of people, have improved over the past couple of centuries, during which period we have seen the rise of scientific medicine.

Stegenga’s response is that he doesn’t deny that some medical interventions are awesome. Some are, after all, magic bullets. Still, there are three problems with this “medicine is awesome” objection. First, while some interventions are awesome, they are few and far between. For any randomly chosen or novel intervention the odds are that it is not awesome. Second, Stegenga argues that people underestimate the role of non-medical interventions in improving general health and well-being. In particular, he suggests (citing some studies in support of this) that changes in hygiene and nutrition have played a big role in improved health and well-being. Finally, Stegenga argues that people underestimate the role that medicine plays in negative health outcomes. For example, according to one widely-cited estimate, there are over 400,000 preventable hospital-induced deaths in the US alone every year. This is not “awesome”.

The third objection is that regulators help to guarantee the effectiveness of treatments. They are gatekeepers that prevent harmful drugs from getting to the market. They put in place elaborate testing phases that drugs have to pass through before they are approved.

This objection holds little weight in light of the preceding discussion. There is ample evidence to suggest that regulatory approval does not guarantee the effectiveness of an intervention. Many drugs are withdrawn years after approval when evidence of harmfulness is uncovered. Many approved drugs aren’t particularly effective. Furthermore, regulators can be incompetent, under-resourced and occasionally complicit in hiding the truth about medical interventions.

The fourth objection is that peer review helps to guarantee the quality of medical evidence. This objection is, of course, laughable to anyone familiar with the system of peer review. There are many well-intentioned researchers peer-reviewing one another’s work, but they are all flawed human beings, subject to a number of biases and incompetencies. There is ample evidence to suggest that bad or poor quality evidence gets through the peer review process. Furthermore, even if they were perfect, peer reviewers can only judge the quality of the studies that are put before them. If those studies are a biased sample of the total evidence, peer reviewers cannot prevent a skewed picture of reality from emerging.

The fifth objection is that the case for medical nihilism is “anti-science”. That’s a bad thing because there is lots of anti-science activism in the medical sphere. Quacks and pressure groups push for complementary therapies and argue (often with great success) against effective mainstream interventions (like vaccines). You don’t want to give these groups fodder for their anti-science activism, but that’s exactly what the case for medical nihilism does.

But the case for medical nihilism is definitely not anti-science. It is about promoting good science over bad science. This is something that Stegenga repeatedly emphasises in the book. He looks at the best quality scientific evidence to make his case for the ineffectiveness of interventions. He doesn’t reject or deny the scientific method. He just argues that the best protocols are not always followed, that they are not perfect, and that when they are followed the resulting evidence does not make a strong case for effectiveness. In many ways, the book could be read as a plea for a more scientific form of medical research, not a less scientific form. Furthermore, unlike the purveyors of anti-science, Stegenga is not advocating some anti-science alternative to medical science — though he does suggest we should be less interventionist in our approach to illness, given the fact that many interventions are ineffective.

The sixth and final objection is that there are, and will be soon, some “game-changing” medical breakthroughs (e.g. stem cell treatment or genetic engineering). These breakthroughs will enable numerous, highly effective interventions. The medical nihilist argument doesn’t seem to acknowledge either the reality or possibility of such game-changers.

The response to this is simple. Sure, there could be some game-changers, but we should be sceptical about any claim to the effect that a particular treatment is a game-changer. There are significant incentives at play that encourage people to overhype new discoveries. Few of the alleged breakthroughs in the past couple of decades have been game-changers. We also know that most new interventions fail or have small effect sizes when scrutinised in depth. Consequently, a priori scepticism is warranted.


6. Conclusion
That brings us to the end of the argument. To briefly summarise, medical nihilism is the view that we should be sceptical about the effectiveness of medical interventions. There are three reasons for this, each corresponding to one of the key probabilities in Bayes’ Theorem. The first reason is that the prior probability of a treatment being effective is low. This is something we can infer from the long history of failed medical interventions, and the fact that there are relatively few medical magic bullets. The second reason is that the probability of the evidence for effectiveness, given the hypothesis that an intervention is effective, is low. We know this because the best available evidence concerning medical interventions suggests they have very small effect sizes, and there is often a lot of discordant evidence. Finally, the third reason is that the prior probability of observing evidence suggesting that a treatment is effective, irrespective of its actual effectiveness, is high. This is because medical evidence is highly malleable, and there are strong incentives at play that encourage people to present positive evidence and hide/ignore negative evidence.

* For Bayes aficionados: yes, I know that this is the short form of the equation and I know I have reversed the order of two terms in the equation from the standard presentation.



Wednesday, April 10, 2019

#57 - Sorgner on Nietzschean Transhumanism


Stefan Lorenz Sorgner

In this episode I talk to Stefan Lorenz Sorgner. Stefan teaches philosophy at John Cabot University in Rome. He is director and co-founder of the Beyond Humanism Network, Fellow at the Institute for Ethics and Emerging Technologies (IEET), Research Fellow at the Ewha Institute for the Humanities at Ewha Womans University in Seoul, and Visiting Fellow at the Ethics Centre of the Friedrich-Schiller-University in Jena. His main fields of research are Nietzsche, the philosophy of music, bioethics and meta-, post- and transhumanism. We talk about his case for a Nietzschean form of transhumanism.

You can download the episode here or listen below. You can also subscribe to the podcast on iTunes, Stitcher and a variety of other podcasting apps (the RSS feed is here).


Show Notes

  • 0:00 - Introduction
  • 2:12 - Recent commentary on Stefan's book Ubermensch
  • 3:41 - Understanding transhumanism - getting away from the "humanism on steroids" ideal
  • 10:33 - Transhumanism as an attitude of experimentation and not a destination?
  • 13:34 - Have we always been transhumanists?
  • 16:51 - Understanding Nietzsche
  • 22:30 - The Will to Power in Nietzschean philosophy
  • 26:41 - How to understand "power" in Nietzschean terms
  • 30:40 - The importance of perspectivalism and the abandonment of universal truth
  • 36:40 - Is it possible for a Nietzschean to consistently deny absolute truth?
  • 39:55 - The idea of the Ubermensch (Overhuman)
  • 45:48 - Making the case for a Nietzschean form of transhumanism
  • 51:00 - What about the negative associations of Nietzsche?
  • 1:02:17 - The problem of moral relativism for transhumanists

Relevant Links




Monday, April 8, 2019

Does Consciousness Matter from an Ethical Perspective?




Consciousness is widely thought to be important, particularly from an ethical perspective. It is hard to find widespread agreement in ethics, but one relatively uncontroversial ethical fact is that pain is intrinsically bad and pleasure is intrinsically good. This fact depends on consciousness. It is only if a being has the capacity for consciousness that they can actually experience pleasure or pain. According to some, this capacity — also called the capacity for sentience — is the sine qua non of moral status: all beings with this capacity have at least some moral standing.

There are also more elaborate theories of moral status that underscore the centrality of consciousness to ethical thought. For example, many people claim that “personhood” is a key property for moral status. Only persons, it is alleged, attract the highest degree of moral standing and protection. But what is personhood? There is disagreement on this, but at a minimum personhood would seem to require the continuing capacity for consciousness and self-consciousness, i.e. continued conscious awareness of yourself as a subject of experience over time.

Given this, you might think it would be pretty odd for someone to question whether consciousness is ethically significant. But this is exactly what Neil Levy does in his short article ‘The Value of Consciousness’. Despite its contrarian starting point, Levy ultimately agrees that consciousness matters from an ethical perspective, just not in the way that many people think.

Let’s see what his argument is.


1. Access Consciousness Versus Phenomenal Consciousness
Levy’s argument works off a distinction between two different kinds of consciousness: (i) phenomenal consciousness and (ii) access consciousness. This is a distinction that was first introduced by the philosopher Ned Block in his famous article “On a Confusion about a Function of Consciousness”. We can characterise the distinction in the following way:

Phenomenal Consciousness = The qualitative experiential feeling associated with being conscious (the “what-is-it-like-ness” of being conscious). This is best understood by way of example so imagine you are looking at and then eating an apple. Phenomenal consciousness is the visual experience of seeing the redness of the apple and the taste experience of its bittersweet flesh on your tongue.

Access Consciousness = The informational availability of a mental state. In other words, the capacity to access mental information and report on it, express it, manipulate it in reasoning, deliberate about it and so on. To continue the apple example, access consciousness would be the ability to talk about the experience of eating the apple, and to remember looking at and eating it at a later time.

Phenomenal consciousness is the type of consciousness that most people have in mind when they think about what it means to be conscious. But access consciousness is also important. Block introduced the distinction because he felt some scientific investigations into the nature of consciousness were conflating the two. Scientists were getting pretty good at figuring out how access consciousness worked — i.e. at how the brain made certain informational content available to the ‘person’ or ‘self’ — but not at figuring out phenomenal consciousness. The latter is the core of what David Chalmers would later call the ‘hard problem’ of consciousness. To be clear, many mental states are both access conscious and phenomenally conscious. For example, the eating of the apple referred to above is something that will have a qualitative feeling associated with it and will also be available to be reported on and deliberated about. But sometimes access consciousness and phenomenal consciousness pull apart.

With this distinction in place we can reformulate the question motivating this article. Instead of asking whether consciousness matters or not, we can ask “which kind of consciousness matters, if any?”.

It’s pretty clear that most people think it is phenomenal consciousness that matters. After all, it is phenomenal consciousness that matters when it comes to the experience of pleasure and pain. Access consciousness might matter for higher levels of moral standing, perhaps as a foundation for self-consciousness and personhood, but it only matters then as an additional ingredient. It’s phenomenal consciousness that is the necessary foundation upon which moral status is built.

The philosopher Charles Siewert makes this point quite forcefully with the following thought experiment (this is my own modification of it).

Zombie: You are about to eat some revolutionary new synthetic food. The food is really tasty, but the nutritionist tells you that there is a major side effect associated with it. It is quite possible (say a 50% chance) that after eating it you will be a philosophical zombie. This means that you will lose your phenomenal consciousness and will no longer have qualitative experiences. But you will be otherwise unchanged. You will look and act like an ordinary human being. You will have access to past memories and events. You will be able to build a personal narrative about your life. Anyone interacting with you will be unable to tell the difference.


Do you want to eat the food? Well, that might depend, of course, on a number of things (e.g. how long you have left to live and how tasty the food really is), but it seems plausible to suggest that no one would want to lose phenomenal consciousness for the sake of a tasty morsel. Given the choice between living their life as normal and living it without phenomenal consciousness, it seems like most people would choose the former. Indeed, Siewert himself suggests that a life without phenomenal consciousness would be little better than death. This suggests that phenomenal consciousness is what matters most.


2. Levy’s Critique
Siewert’s thought experiment is intuitively compelling. I know I certainly wouldn’t be inclined to eat the food if it meant losing the capacity for phenomenal consciousness (though, I would add that the thought experiment depends on a practical impossibility: how could you know, beforehand, that people lose phenomenal consciousness if they continue to act as normal?). But Levy challenges it. He argues that access consciousness has a lot of value too. A life without phenomenal consciousness might be worse overall but it could still be worth living.

Levy defends this view by making a number of points. First, he argues that someone with access consciousness still has a point of view on the world. In other words, they will have certain dispositions and reactions to the world around them. They will value some things and disvalue others. They will have interests that can be thwarted or fulfilled. These mental states may all be purely functional in nature and so not associated with any phenomenal experience, but they are still real and still provide a foundation for a ‘valuing’ relationship between the person and the world around them. This is important because Levy thinks some people are too quick to deny the possibility that a person who lacks phenomenal consciousness can have a valuing relationship with the world.

Second, Levy argues that it is possible to talk meaningfully about the well-being (or welfare) of a philosophical zombie. This is true if you follow some of the classic theories of well-being. According to the desire-satisfaction theory, for example, a person’s life goes well for them if their desires are satisfied. A philosophical zombie can have desires — there is nothing in the concept of a desire to suggest that it requires phenomenal consciousness — and so their life can go better or worse depending on the number of desires they satisfy. Similarly, on an “objective list” theory of well-being, a person’s life goes well for them if they achieve certain objectively defined states of being, e.g. they are educated, their health is good, they have friends and family, they have intimate loving relationships and so on. Again there is no reason to think that a philosophical zombie cannot satisfy these conditions of well-being. Indeed, their “objective” nature makes them immune to considerations of phenomenality.

But this brings us to the potential spanner in the works. The desire-satisfaction and objective list theories of well-being are but two of the three most famous philosophical theories of well-being. The third theory is hedonism, which suggests that happiness or pleasure is the key to well-being. You might think that both happiness and pleasure are ruled out in the absence of phenomenal consciousness. Indeed, I’ve been assuming as much thus far, but Levy argues that we shouldn’t be so quick to make that assumption. He argues that genuine happiness may be possible without much (or anything) in the way of phenomenal consciousness.

He makes his case using a famous psychological case study. In some ground-breaking work, Mihaly Csikszentmihalyi and his colleagues discovered that people often report their highest levels of satisfaction and happiness during periods of ‘flow’. This arises when people are totally absorbed by some challenging activity. One of the noticeable things about these periods of flow is that people tend to lose the sense of themselves in these moments. That is to say, they lack self-awareness during peak flow. Their awareness is absorbed by the activity, not by themselves, nor (and this is crucial for Levy) by any experiences they may be having in that moment. This is often reflected in the fact that people are unable to describe exactly what they were experiencing in the moment (though some argue that people do have phenomenal experiences in these moments but simply lack access to them).

Levy thinks this example is instructive. It suggests that happiness may be possible without phenomenal consciousness — indeed, it suggests that losing experiential awareness may be constitutive of extreme happiness.

Of course, it is not entirely clear how persuasive this is. It could be that there is a blurring of the boundaries between phenomenal consciousness and access consciousness in this interpretation of flow (i.e. Levy is reading too much into the fact that people lack access consciousness of what is going on in the moment). It could also be that there is a conflation of phenomenal self-consciousness (a higher-order conscious awareness of the self) and phenomenal consciousness (the raw experience in the moment). Losing the conscious sense of self could well be constitutive of happiness — centuries of Buddhist thought suggest as much — but that doesn’t mean that losing all phenomenal consciousness is constitutive of happiness. It is, however, hard to disentangle all of these things because scientific inquiry into happiness can never directly access someone’s phenomenal states: it always relies on self-report.

Undeterred by all this, Levy goes on to suggest that even if phenomenal consciousness is relevant in these moments, there is evidence to suggest that happiness is often accompanied by behavioural and functional states (e.g. the figurative “jump for joy”); that these may supply some of what is needed for happiness; and that they can get overlooked in the focus on phenomenal consciousness. So, the bottom line for him is that happiness of some sort may be possible for a philosophical zombie.
This leads him to conclude that a life without phenomenal consciousness could still be worth living.


3. Conclusion and Implications
I find Levy’s argument interesting. I find it relatively persuasive to suggest that phenomenal consciousness isn’t the only thing that matters when it comes to moral status and standing. The behavioural and functional aspects of consciousness can support a valuing relationship between an entity and the world around it, and provide the basis for a meaningful concept of well-being. This is true even if phenomenal consciousness also matters (or matters more).

But there is a bigger point here that overwhelms the philosophical niceties of the distinction between access and phenomenal consciousness. As he points out, phenomenal consciousness is problematic when it comes to shaping our normative behaviour. By its very nature it is private and first-personal. It is not something we can investigate or determine from a third person perspective. But since our normative duties are all determined from that perspective, it’s hard to use phenomenal consciousness as a basis for our moral norms. The only way to do it is to infer phenomenal consciousness through access consciousness. It should be reassuring then that access consciousness, by itself, provides a foundation for moral standing (if Levy is right). Levy concludes his paper by suggesting that this should affect how we think about the moral standing of animals and persons in persistent vegetative states.

This is something I very much agree with. I’ve written a few pieces in the past defending something that I call “ethical behaviourism” which argues that moral standing has to be determined by observations of behaviour. I tend to subsume within ‘behaviour’ what Levy would call access consciousness. In other words, I think that functional brain states might be included among the ‘behaviours’ that we use to determine the moral standing of another. But I am inclined toward a more extreme view which holds that behavioural states would trump functional brain states in any difficult case where the two do not seem to coincide.

Of course, a full defence of that view is a task for another day. For now, I think it suffices to say that Levy’s paper raises an important question and pursues a provocative answer to that question — one that goes against the common sense of many people.





Saturday, March 30, 2019

#56 - Turner on Rules for Robots


Jacob Turner

In this episode I talk to Jacob Turner. Jacob is a barrister and author. We chat about his new book, Robot Rules: Regulating Artificial Intelligence (Palgrave Macmillan, 2018), which discusses how to address legal responsibility, rights and ethics for AI.

You can download the episode here or listen below. You can also subscribe to the show on iTunes, Stitcher and a variety of other services (the RSS feed is here).



Show Notes

  • 0:00 - Introduction
  • 1:33 - Why did Jacob write Robot Rules?
  • 2:47 - Do we need special legal rules for AI?
  • 6:34 - The responsibility 'gap' problem
  • 11:50 - Private law vs criminal law: why it's important to remember the distinction
  • 14:08 - Is it easy to plug the responsibility gap in private law?
  • 23:07 - Do we need to think about the criminal law responsibility gap?
  • 26:14 - Is it absurd to hold AI criminally responsible?
  • 30:24 - The problem with holding proximate humans responsible
  • 36:40 - The positive side of responsibility: lessons from the Monkey selfie case
  • 41:50 - What is legal personhood and what would it mean to grant it to an AI?
  • 48:57 - Pragmatic reasons for granting an AI legal personhood
  • 51:48 - Is this a slippery slope?
  • 56:00 - Explainability and AI: Why is this important?
  • 1:02:38 - Is there a right to explanation under EU law?
  • 1:06:16 - Is explainability something that requires a technical solution not a legal solution?
  • 1:08:32 - The danger of fetishising explainability

Relevant Links





Sunday, March 24, 2019

Are we in the midst of an ongoing moral catastrophe?


Albrecht Dürer - The Four Horsemen of the Apocalypse


Here’s an interesting thought experiment:
The human brain is split into two cortical hemispheres. These hemispheres are joined together by the corpus callosum, a group of nerve fibres that allows the two hemispheres to communicate and coordinate with one another. The common assumption is that the corpus callosum unites the two hemispheres into a single conscious being, i.e. you. But there is some evidence to suggest that this might not be the case. In split-brain patients (i.e. patients whose corpus callosum has been severed) it is possible to perform experiments that result in the two halves of the body doing radically different things. In these experiments it is found that the left side of the brain weaves a narrative that explains away the discrepancies in behaviour between the two sides of the body. Some people interpret this as evidence that the left half of the cortex is primarily responsible for shaping our conscious identity. But what if that is not what is going on? What if there are, in fact, two distinct conscious identities trapped inside most ‘normal’ brains, but the left-side consciousness is the dominant one and it shuts down or prevents the right side from expressing itself? It’s only in rare patients and constrained experimental contexts that the right side gets to express itself. Suppose in the future that a ground-breaking series of experiments convincingly proves that this is indeed the case.



What ethical consequences would this have? Pretty dramatic ones. It is a common moral platitude that we should want to prevent the suffering and domination of conscious beings. But if what I just said is true, it would seem that each of us carries around a dominated and suffering conscious entity inside our own heads. This would represent a major ongoing moral tragedy and something ought to be done about it.

This fanciful thought experiment comes from Evan Williams’s paper ‘The Possibility of an Ongoing Moral Catastrophe’. It is tucked away in a footnote, offered up to the reader as an intellectual curio over which they can puzzle. It is, however, indicative of a much more pervasive problem that Williams thinks we need to take seriously.

The problem is this: There is a very good chance that those of us who are alive today are unknowingly complicit in an unspecified moral catastrophe. In other words, there is a very good chance that you and I are currently responsible for a huge amount of moral wrongdoing — wrongdoing that future generations will criticise us for, and that will be a great source of shame for our grandchildren and great-grandchildren.

How can we be so confident of this? Williams has two arguments to offer and two solutions. I want to cover each of them in what follows. In the process, I’ll offer my own critical reflections on Williams’s thesis. In the end, I’ll suggest that he has identified an important moral problem, but that he doesn’t fully embrace the radical consequences of this problem.


1. Two Arguments for an Ongoing Moral Catastrophe
Williams’s first argument for an ongoing moral catastrophe is inductive in nature. It looks to lessons from history to get a sense of what might happen in the future. If we look at past societies, one thing immediately strikes us: many of them committed significant acts of moral wrongdoing that the majority of us now view with disdain and regret. The two obvious examples of this are slavery and the Holocaust. There was a time when many people thought it was perfectly okay for one person to own another; and there was a time when millions of Europeans (most of them concentrated in Germany) were knowingly complicit in the mass extermination of Jews. It is not simply that people went along with these practices despite their misgivings; it’s that many people either didn’t care or actually thought the practices were morally justified.

This is just to fixate on two historical examples. Many more could be given. Most historical societies took a remarkably cavalier attitude towards what we now take to be profoundly immoral practices such as sexism, racism, torture, and animal cruelty. Given this historical pattern, it seems likely that there is something that we currently tolerate or encourage (factory farming, anyone?) that future generations will view as a moral catastrophe. To rephrase this in a more logical form:



  • (1) We have reason to think that the present and the future will be like the past (general inductive presumption)



  • (2) The members of most past societies were unknowingly complicit in ongoing moral catastrophes.



  • (3) Therefore, it is quite likely that members of present societies are unknowingly complicit in ongoing moral catastrophes.



Premise (2) of this argument would seem to rest on a firm foundation. We have the writings and testimony of past generations to prove it. Extreme moral relativists or nihilists might call it into question. They might say it is impossible to sit in moral judgment on the past. Moral conservatives might also call it into question because they favour the moral views of the past. But neither of those views seems particularly plausible. Are we really going to deny the moral catastrophes of slavery or mass genocide? It would take a lot of special pleading and ignorance to make that sound credible.

That leaves premise (1). This is probably the more vulnerable premise in the argument. As an inductive assumption it is open to all the usual criticisms of induction. Perhaps the present is not like the past? Perhaps we have now arrived at a complete and final understanding of morality? Maybe this makes it highly unlikely that we could be unknowingly complicit in an ongoing catastrophe? Maybe. But it sounds like the height of moral and epistemic arrogance to assume that this is the case. There is no good reason to think that we have attained perfect knowledge of what morality demands. I suspect many of us encounter tensions or uncertainties in our moral views on a daily or, at least, ongoing basis. Should we give more money to charity? Should we be eating meat? Should we favour our family and friends over distant strangers? Each of these uncertainties casts doubt on the claim that we have perfect moral knowledge, and makes it more likely that future generations will know something about morality that we do not.

If you don’t like this argument, Williams has another. He calls it the disjunctive argument. It is based on the concept of disjunctive probability. You are probably familiar with conjunctive probability. This is the probability of two or more events all occurring. For example, what is the probability of rolling two sixes on a pair of dice? We know the independent probability of each of these events is 1/6. We can calculate the conjunctive probability by multiplying together the probability of each separate event (i.e. 1/6 x 1/6 = 1/36). Disjunctive probabilities are just the opposite of that. They are the probability of either one event or another (or another or another) occurring. For example, what is the probability of rolling either a 2 or a 3 on a single roll of a die? We can calculate the disjunctive probability by adding together the probability of each separate event (1/6 + 1/6 = 1/3). It should be noted, though, that calculating disjunctive probabilities can be a bit more complicated than simply adding together the probabilities of separate events. If there is some overlap between events (e.g. if you are calculating the probability of drawing a spade or an ace from a deck of cards), you have to subtract the probability of the overlap (in that example, the ace of spades). But we can ignore this complication here.
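For anyone who prefers to see these rules worked through, here is a minimal sketch in Python (the variable names are mine, purely for illustration):

```python
# Conjunctive probability: two independent events both occurring,
# e.g. rolling two sixes on a pair of dice.
p_two_sixes = (1 / 6) * (1 / 6)              # = 1/36

# Disjunctive probability (mutually exclusive events): one event or another,
# e.g. rolling either a 2 or a 3 on a single roll of a die.
p_two_or_three = (1 / 6) + (1 / 6)           # = 1/3

# Disjunctive probability with overlap: subtract the overlap so it isn't
# double-counted, e.g. drawing a spade or an ace from a 52-card deck
# (the ace of spades is both a spade and an ace).
p_spade_or_ace = 13 / 52 + 4 / 52 - 1 / 52   # = 16/52, roughly 0.31

print(p_two_sixes, p_two_or_three, p_spade_or_ace)
```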

Disjunctive probabilities are usually higher than you think. This is because while the probability of any particular improbable event occurring might be very low, the probability of at least one of those events occurring will necessarily be higher. This makes some intuitive sense. Consider your own death. The probability of you dying from any one specific cause (e.g. heart attack, bowel cancer, infectious disease, car accident or whatever) might be quite low, but the probability of you dying from at least one of those causes is pretty high.

Williams takes advantage of this property of disjunctive probabilities to make the case for ongoing moral catastrophe. He does so with two observations.

First, he points out that there are lots of ways in which we might be wrong about our current moral beliefs and practices. He lists some of them in his article: we might be wrong about who or what has moral standing (maybe animals or insects or foetuses have more moral standing than we currently think); we might be wrong about what is or is not conducive to human flourishing or health; we might be wrong about the extent of our duties to future generations; and so on. What’s more, for each of the possible sources of error there are multiple ways in which we could be wrong. For example, when it comes to errors of moral standing we could err in being over or under-inclusive. The opening thought experiment about the split-brain cases is just one fanciful illustration of this. Either one of these errors could result in an ongoing moral catastrophe.

Second, he uses the method for calculating disjunctive probabilities to show that even though the probability of us making any particular one of those errors might be low (for argument’s sake, let’s say it is around 5%), the probability of us making at least one of those errors could be quite high. Let’s say there are fifteen possible errors we could be making, each with a probability of around 5%. In that case, the chance of us making at least one of those errors is going to be about 54% (since the probability of avoiding all fifteen errors is only about 46%), which is greater than 1 in 2.
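To make the arithmetic explicit, here is a small sketch of the calculation, assuming (as the 54% figure does) that the possible errors are independent of one another; the function name is my own, purely for illustration:

```python
def prob_at_least_one(p: float, n: int) -> float:
    """Probability that at least one of n independent events occurs,
    where each event has probability p. This is the complement of
    all n events failing to occur."""
    return 1 - (1 - p) ** n

# The illustrative numbers from the paragraph above: fifteen possible
# moral errors, each with a probability of around 5%.
print(prob_at_least_one(0.05, 15))   # ~0.537, i.e. roughly 54%
```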

That’s a sobering realisation. Of course, you might try to resist this by claiming that the probability of us making such a dramatic moral error is much lower than 5%. Perhaps it is almost infinitesimal. But how confident are you really, given that we know that errors can be made? Also, even if the individual probabilities are quite low, with enough possible errors, the chance of at least one ongoing moral catastrophe is still going to be pretty high.


2. Two Responses to the Problem
Having identified the risk of ongoing moral catastrophe, Williams naturally turns to the question of what we ought to do about it.

The common solution to an ongoing or potential future risk is to take corrective measures: either hedge your bets against the risk or take a precautionary approach to it. For example, if you are worried about the risk of crashing your new motorcycle and injuring yourself, you’ll either (a) take out insurance to protect against the expenses associated with such a crash or (b) simply avoid buying and using a motorcycle.

Williams argues that neither solution is available in the case of ongoing moral catastrophe. There are too many potential errors we could be making to hedge against them all. In hedging against one possible error you might commit yourself to another. And a precautionary approach won’t work either because failing to act could be just as big a moral catastrophe as acting, depending on the scenario. For example, failing to send more money to charity might be as big an error as sending money to the wrong kind of charity. You cannot just sit back, do nothing, and hope to avoid moral catastrophe.

So what can be done? Williams has two suggestions. The first is that we need to make it easier for us to recognise moral catastrophes. In other words, we need to make intellectual progress and advance the cause of moral knowledge: both knowledge of the consequential impact of our actions and of the plausibility/consistency of our moral norms. The idea here is that our complicity in an ongoing moral catastrophe is always (in part) due to a lack of moral knowledge. Future generations will learn where we went wrong. If we could somehow accelerate that learning process we could avert or at least lessen any ongoing moral catastrophe. So that’s what we need to do. We need to create a society in which the requisite moral knowledge is actively pursued and promoted, and in which there is a good ‘marketplace’ of moral ideas. Williams doesn’t offer specific proposals as to how this might be done. He just thinks this is the general strategy we should be following.

The second suggestion has to do with the flexibility of our social order. Williams argues that one reason why societies fail to minimise moral catastrophes is because they are conservative and set in their ways. Even if people recognise the ongoing moral catastrophe they struggle against institutional and normative inertia. They cannot bring about the moral reform that is necessary. Think about the ongoing moral catastrophe of climate change. Many people realise the problem but very few people know how to successfully change social behaviour to avert the worst of it. So Williams argues we need to create a social order that is more flexible and adaptive — one that can implement moral reform quickly, when the need is recognised. Again, there are no specific proposals as to how this might be done, though Williams does fire off some shots against hard-wiring values into a written and difficult-to-amend constitutional order, using the US as a particular example of this folly.


3. Is the problem more serious than Williams realises?
I follow Williams’s reasoning up until he outlines his potential solutions to the problem. But the two solutions strike me as being far too vague to be worthwhile. I appreciate that Williams couldn’t possibly give detailed policy recommendations in a short article; and I appreciate that his main goal is not to give those recommendations but to raise people’s consciousness of the problem of ongoing moral catastrophe and to make very broad suggestions about the kind of thing that could be done in response. Still, I think in doing this he either underplays how radical the problem actually is, or overplays it and is thus unduly dismissive of one potential solution to the problem. Let me see if I can explain my thinking.

On the first point, let me say something about how I interpret Williams’s argument. I take it that the problem of ongoing moral catastrophe is a problem that arises from massive and multi-directional moral uncertainty. We are not sure if our current moral beliefs are correct; there are a lot of them; and they could be wrong in multiple different ways. They could be under-inclusive or over-inclusive; they could demand too much of us or too little; and so on. This massive and multi-directional moral uncertainty supports Williams’s claim that we cannot avoid moral catastrophe by doing nothing, since doing nothing could also be the cause of a catastrophe.

But if this interpretation is correct, then I think Williams doesn’t appreciate the radical implications of this massive and multi-directional moral uncertainty. If moral uncertainty is that pervasive, then everything we do is fraught with moral risk. That includes following Williams’s recommendations. For example, trying to increase moral knowledge could very well lead to a moral catastrophe. After all, it’s not like there is an obvious and reliable way of doing this. A priori, we might think a relatively frictionless and transparent marketplace of moral ideas would be a good idea, but there is no guarantee that this will lead people to moral wisdom. If people are systematically biased towards making certain kinds of moral error (and they arguably are, although making this assessment itself depends on a kind of moral certainty that we have no right to claim), then following this strategy could very well hasten a moral catastrophe. At the same time, we know that censorship and friction often block necessary moral reform. So we have to calibrate the marketplace of moral ideas in just the right way to avoid catastrophe. This is extremely difficult (if not impossible) to do if moral uncertainty is as pervasive as Williams seems to suggest.

The same is true if we try to increase social flexibility. If we make it too easy for society to adapt and change to some new perceived moral wisdom, then we could hasten a moral catastrophe. This isn’t a hypothetical concern. History is replete with stories of moral revolutionaries who seized the reins of power only to lead their societies into moral desolation. Indeed, hard-wiring values into a constitution, and thus adding some inflexibility to the social moral order, was arguably adopted in order to provide an important bulwark against this kind of moral error.

The point is that if a potential moral catastrophe is lurking everywhere we look, then it is very difficult to say what we should be doing to avoid it. This pervasive and all-encompassing moral uncertainty is paralysing.

But maybe I am being ungenerous to Williams’s argument. Maybe he doesn’t embrace this radical form of moral uncertainty. Maybe he thinks there are some rock-solid bits of moral knowledge that are unlikely to change, and that we can use those to guide us towards what we ought to do to avert an ongoing catastrophe. But if that’s the case, then I suspect any solution to the problem of moral catastrophe will end up being much more conservative than Williams seems to suspect. We will cling to the moral certainties like life rafts in a sea of moral uncertainty. We will use them to evaluate and constrain any reform to our system.

One example of how this might work in practice would be to apply the wisdom of negative utilitarianism (something Williams is sceptical about). According to negative utilitarianism, it is better to try to minimise suffering than it is to try to maximise pleasure or joy. I find this to be a highly plausible principle. I also find it much easier to implement than the converse principle of positive utilitarianism, because I think we can be more confident about the causes of suffering than we can be about what induces joy. But if negative utilitarianism represents one of our moral life rafts, it also represents one of the best potential responses to the problem of ongoing moral catastrophe. It’s not clear to me that abiding by it would warrant the kinds of reforms that Williams seems to favour.

But, of course, that’s just my two cents on the idea. I think the problem Williams identifies is an important one and also a very difficult one. If he is right that we could be complicit in an ongoing moral catastrophe, then I am not sure that anyone has a good answer as to what we should be doing about it.