Tuesday, April 30, 2019

Possible Worlds and Possible Lives: A Meditation on the Art of Living




Here’s a simple thought, but one that I think is quite profound: one’s happiness in life depends, to a large extent, on how one thinks about and navigates the space of possible lives one could have lived. If you have too broad a conception of the space of possibility, you are likely to be anxious and unable to act, always fearing that you are missing out on something better. If you have too narrow a conception of the space of possibility, you are likely to be miserable (particularly if you get trapped in a bad set of branches in the space of possibility) and unable to live life to the full. But it’s not that simple either. Sometimes you have to focus on the negative and sometimes you have to narrow your mindset.

I say this is a profound but simple thought. Why so? Well, it strikes me as profound because it captures something that is fundamentally true about the human condition, something that is integral to a number of philosophical discussions of well-being. It strikes me as simple because I think it’s something that is relatively obvious and presumably must have occurred to many people over the course of human history. And yet, for some reason, I don’t find many people talking about it.

Don’t get me wrong. Plenty of people talk about possible worlds in philosophy and science, and many specific discussions of human life touch upon the idea outlined in the opening paragraph. For example, discussions of human emotions such as regret, or the rationality of decision-making, or the philosophical significance of death, often touch upon the importance of thinking in terms of possible lives. What frustrates me about these discussions is that they don’t do so in an explicit or integrated way.

This article is my attempt to make up for this perceived deficiency. I want to justify my opening claim that one’s happiness in life depends on how one thinks about and navigates the space of possible lives; and I want to further support my assertion that this is a simple and profound idea. I start by clarifying exactly what I am talking about.


1. The Basic Picture: The Space of Possible Lives
The actual world is the world in which we currently live. It can be defined in terms of a list of propositions that exhaustively describes the features of this world. A possible world is a world that could exist. It can be defined as any logically consistent set of propositions describing a world. The space of logically possible worlds is vast. Logical consistency is only a minor constraint on what is possible. Virtually anything goes if this is your only limitation on what is possible. For example, there is a logically possible world in which the only things that exist are an apple and a cat, inside a large box.

This possible world isn’t very likely, of course, and this raises an important point. Possible worlds can be ordered in terms of their accessibility to us. It is easiest to define this in terms of the “distance” between a possible world and the actual world in which we live. Worlds that are ‘close’ to our own world (in the sense that they differ minimally) can be presumed to be relatively accessible to us (though see the discussion below of determinism and free will); contrariwise, worlds that are ‘far away’ (in the sense that they have many differences from our own world) are relatively inaccessible. Some possible worlds will require a technological breakthrough to make them accessible to us (e.g. a world in which interstellar travel is possible for creatures like us); others may never be accessible to us because they breach the fundamental physical laws of our reality (e.g. a world in which universal entropy is reversed). Philosophers often distinguish between these different shades of possibility by using phrases like “physical possibility”, “technical possibility” and so on. Probability is also an important part of the discussion as it gives us a way of quantitatively ranking the accessibility of a possible world.

The idea of a “possible life” can be defined in terms of a possible world. Your actual life is the life you are currently living in the actual world. A “possible life” is simply a different life that you could be living in another possible world. One way of thinking about this is to simply imagine a different possible world where the only differences between it and the actual world relate specifically to your life. Possible lives exist in the past and in the future. There are possible lives that I could have lived and possible lives that I might yet live. For example, there is a possible life where I studied medicine at university rather than law. If I had followed that path, my present life could be very different. Likewise, there is a possible life where I run for political office in the future. If I follow that path, my life will end up being very different from what I currently envisage.

Possible lives can be arranged and ranked in a number of different ways. Obviously, they can be ranked in terms of their accessibility to us (as per the previous discussion of possible worlds), or they can be ranked in terms of their normative value to us. Some possible lives are better than others. A possible life in which I murder someone and get sent to jail for life is presumably going to be worse (for me and for others) than a life in which I work hard and discover a cure for some serious disease.

Pictures are worth a thousand words, so consider the image below. It illustrates what I would take to be the fundamental predicament of life. In the centre of the image is a person. Let’s suppose this person is you. The thick bold line represents your actual life (i.e. the life, out of all the possible lives you could have lived, that you are actually living). To the left of your present location is your past and arranged along each side of the thick bold line are the possible lives you could have lived before the present moment. To the right of your present location is your future and arranged along each side of the centre line are the possible lives you might yet live. The possible lives that lie above the line represent lives that are better than your current, actual life; the possible lives that lie below the line represent lives that are worse than your current life. The accessibility of lives can also be represented in this image. We can assume that the further a life lies from the centre line, the less accessible it is (though in saying this it is important to realise that accessibility does not correlate with betterness or worseness, which is an impression you might get from the way in which I have illustrated it).


The Human Predicament: The Space of Possible Lives


The essence of my position is that how we think about our predicament — nested in a latticework of possible lives — will to a large extent determine how happy and successful we are in our actual life. In particular, broadening and narrowing our conception of the set of possible lives we could have lived, and might yet live, is key to happiness.


2. The Elephant in the Room: Determinism
Before I go any further, I need to address the elephant in the room: determinism. Determinism is a philosophical thesis that holds that every event that occurs in this actual world has a sufficient cause of its existence in the prior events in this world. The life you are living today is the product of all the events that occurred prior to the present moment. Given those events, there is no other way the present moment could have turned out. It simply had to be this way.

There is another way of putting this. According to one well-known philosophical definition of determinism — first coined, I believe, by Peter van Inwagen — determinism is the view that there is only one possible future. Given the full set of past events (E1…En) there is only one possible next event (En+1), because those prior events fully determine the nature and character of En+1.

If determinism is true, it would seem to put paid to the argument I’m trying to put forward in this article. After all, if determinism is true, it would seem to follow that all talk about the possible lives we could have lived, and might yet live, is fantastical poppycock. There is only one life we could ever live and we may as well get used to it.

But I don’t quite see it that way. In this regard, it’s worth remembering that determinism is a metaphysical thesis, not a scientific one. No amount of scientific evidence of deterministic causation can fully confirm the truth of determinism. And, what’s more, there are some prominent scientific theories that seem to be open to some degree of indeterminism (e.g. quantum theory) or, if not that, are at least open to “possible worlds”-thinking. It is worth noting, for example, that some highly deterministic theories in cosmology and quantum mechanics only preserve their determinism if they allow for the possibility of multiple universes and many worlds. The most famous example of this might be the “many worlds” interpretation of quantum mechanics, first set out by Hugh Everett. This interpretation retains the determinism of the quantum mechanical Schrödinger equation but only does so by holding that there are many different worlds in existence. These worlds may or may not be accessible to us, but it is not illegitimate to talk about them.

Admittedly, these esoteric aspects of cosmology and quantum theory don’t offer much succour to the kind of position I’m defending here. But that brings me to a more important point. Even if determinism is true (and there is, literally, only one possible future) it does not follow that thinking about one’s life in terms of the possible lives one could have lived and might yet live is illegitimate. If the world is deterministic, it is still likely to be causally complex. This means that, even if determinism is true, there will often be no easy way for us to say what caused what and what follows from this.

An analogy might help to underscore this. When I was a student, one of the favoured topics in history class was “The Causes of World War I”. I learned from these classes that there are many putative causes of World War I. It’s hard to say which “cause” was critical, if any. Perhaps World War I was caused by German aggression, or perhaps, as Christopher Clark argues in his book The Sleepwalkers, it was a complex concatenation of events, no one of which was sufficient in its own right. It’s really hard to say. For all we know, in the absence of German aggression, things might have gone very differently. Or maybe they wouldn’t. Maybe we would have stumbled into a great war anyway. Historians and fiction writers love to speculate, and it’s often useful to do so: we gain insight into the past by imagining the counterfactuals, and gain wisdom for the future by thinking through the different possible worlds.

What is true for historians and fiction writers is also true for ourselves when we look at our own lives. Our own lives are causally complex. For any one event that occurred in our past (or that may yet occur in our future) there is probably a whole panoply of events that may or may not be critical to its occurrence. As a result, for all we know, there may have been other lives we could have lived and may yet live. To put this more philosophically, even if it is true that we live in a metaphysically deterministic world in which there is only one possible future, to all intents and purposes we still live in an epistemically indeterministic world in which multiple possible futures seem to still be accessible to us.

In this respect, it is important to bear in mind the distinction between fatalism and determinism. The fact that the world is deterministic does not imply that we play no part in shaping its future. We still make a difference, and in order to make sense of the difference we might make, we need to entertain “possible worlds”-thinking.

All of this leads me to conclude that determinism does not scupper the argument I am trying to make.


3. Looking Back: Regret, Guilt and Gratitude
If we accept that it is legitimate to think in terms of possible lives, then we open ourselves up to the idea that thinking wisely about the space of possibility is key to happiness and success. To illustrate, we can start by looking back, i.e. by considering the life we are living in the present moment relative to the other lives we might have lived before the present moment.

If, when we do this, we focus predominantly on possible lives that would have been better than the life we are currently living (along whatever metric of “betterness” we prefer), we are likely to be pretty miserable. We will tend to be struck by the sense that our actual life does not measure up. There are better lives we could have been living. Two emotions/attitudes are commonly associated with this style of thinking. The first is regret. This is both a negative feeling about your present life and a judgment that it is inferior to other possibilities. Regret is usually tied to specific past decisions. We regret making those decisions and judge that we could have done better. Sometimes, regret is more general and vague: there is no specific decision that we regret, but we are filled with the general sense that things are not as good as they could be. The second is guilt. When the choices we make end up doing harm to others, regret can turn into guilt. We can become wracked by the sense that not only are our lives worse than they might have been, but we have failed in our moral duties too.

As I noted on a previous occasion, I find my own thoughts about the past to be preoccupied by feelings of regret and guilt. I regret not making certain decisions earlier in life (e.g. getting married, having children) or not seizing certain opportunities (e.g. better jobs and so on). This regret can sometimes be overwhelming, even though I acknowledge that it is often irrational. Given the aforementioned causal complexity of the real world, there is no guarantee that if I had done things differently they would have turned out for the better. Thinking about regret in these philosophical terms sometimes helps me to escape the trap of negative thinking.

If, when we look to the past, we focus predominantly on possible lives that would have been worse than the life we are currently living, we are likely to be pretty happy. I say this with some trepidation. It’s possible that some people have a very low hedonic baseline and so no amount of positive thinking about the past will make them happy, but as a general rule of thumb it seems to follow that happiness flows from focusing on the negative space of possibility in the past. If things could have been much worse than they currently are, then we are likely to think that our present lives are not all that bad. This is, in fact, a classic Stoic tactic for ensuring more contentment in life: always imagine how things might have been worse.

Two emotions/attitudes are commonly associated with this style of thinking. The first is achievement. This is a self-directed emotion and judgment that arises from the belief that you have made your life better than it might otherwise have been. You have charted some stormy waters and navigated a way through the space of possibility that avoided bad outcomes (failure, hardship etc). The second is a feeling of gratitude. This is an other-directed (or outward-directed) emotion and judgment that arises from the belief that although you may not have controlled it, your life has turned out better than it might have done. This could be because other people helped you out, or it could be through sheer luck and accident of birth (though some people might like to distinguish the feeling of luck from that of gratitude).

Given these reflections on looking back, you might think there is an easy way to make yourself happy: focus on how your present life is better than many of the possible lives you could have lived, and don’t focus on how it is worse than others. But that’s easier said than done. Sometimes you can get trapped in spirals of negative thinking where you always think about how things could have been better. Furthermore, focusing entirely on how things might have been worse could well be counterproductive. As I noted in an earlier article, not all regret is bad. You can learn a lot about yourself from your regrets. You can learn about your desires and personal values. This is crucial when we start to look forward.


4. Looking Forward: Optimism, Pessimism and Death
Although looking back is a useful practice, and although it is often an important source of self-knowledge, ultimately looking forward is more important. This is because we live our lives in the forward-looking direction. Life is a one-way journey to the future. Until we invent a technology that enables us to actually go back in time, we have to resign ourselves to the fact that our main opportunity for exploring possible lives lies in the future.

When looking forward, one question predominates: which of the many possible futures that we could access will we actually end up accessing?

If, when we ask this question, we focus primarily on possible futures that are better than our present lives, we are likely to be quite optimistic. Indeed, focusing on better possible futures and the things you can do to make them more accessible, might be one of the keys to happiness. On a previous occasion, I looked at Lisa Bortolotti’s “agency” theory of optimism. In defending this theory, Bortolotti noted that many forms of optimism are irrational: assuming the future is going to be better than the past is often epistemically unwarranted. Nevertheless, assuming that you have some control over the future — even if this is epistemically unwarranted from an objective perspective — does seem to correlate with an increased chance of success. Bortolotti cited some famous studies on cancer patients in support of this view. In those studies, the cancer patients that believed they could influence their prospects of recovery, through, for example, dietary changes or exercise or other personal health regimes, generally did better than those with a more fatalistic attitude.

If, on the other hand, we focus primarily on futures that are worse than our present predicament, we are likely to be quite pessimistic. If we think that we are on the brink of some major personal or societal failure, and that there is nothing we can do to avert this outcome, then we will have little to look forward to. But, we have to be cautious in saying this. Blindly ignoring negative futures is a bad idea. There is an old adage to the effect that you have to “plan for the worst and hope for the best”. There must be some truth to that. You need to be aware of the risks you might be running. You need to develop strategies to avoid them. Indeed, this willingness to think about and anticipate negative futures is key to the agency theory of optimism outlined by Bortolotti. The more successful cancer patients are not the ones that bury their heads in the sand about their condition and blithely think everything will turn out for the best. They are often very aware of the dangers. They just assume that there is something they can do to avoid the negative possibilities.

There is another point here that I think is key when looking forward. How narrowly or broadly we frame the set of possible futures can have a significant impact on our happiness. A narrow framing arises when we think that there are only one or two possible futures accessible to us; a broader framing arises when we think in terms of larger numbers of possibilities. Generally speaking, narrowly framing the future set of possibilities is a bad thing. It encourages you to think in terms of false dichotomies or tradeoffs (either X happens and everything goes badly or Y happens and everything goes well). If you ever find yourself trapped in a narrow framing, it is usually a good idea to take a step back and try to broaden your framing. For example, when thinking about how you might “balance” career ambitions with home and family life, you might have a tendency to narrowly frame the future in terms of an either/or choice: either I have a happy family life or a fulfilling career. But usually choices are more complex than that. There are more possibilities and options to explore. Some of those possible futures might allow for a more harmonious balancing of the two goals.

This is not to say that compromises and tradeoffs are always avoidable. They are not. But it is better to reach that conclusion after a full exploration of the set of possible futures than after a cursory search, particularly when it comes to major life choices. Or so I have found. That said, I also think it is possible to have too broad a framing of the possible futures. You can easily become overwhelmed by the possibilities and paralysed by the number of options. Sometimes a narrow framing concentrates the mind and motivates action. It’s all about finding the right balance: don’t be too narrow-minded, try to focus on the positive, but don’t be too open-minded and ignore the negative either.

Three other points strike me as being apposite when looking forward.

First, I think it is worth reflecting on the role that technology plays in opening up the space of possible futures. I briefly alluded to this earlier on when I pointed out that the development of certain technologies (e.g. interstellar spaceships) might make possible futures accessible to us that we never previously considered. Of course, interstellar spaceships are just a dramatic example of a much more general phenomenon. All manner of technological innovations, from penicillin to international flights to smartphones, do the same thing: they give us access to futures that would otherwise have been impossible. That’s often a good thing, since it gets us out of small, negative spaces of possibility, but remember that technology usually opens up possible futures on both sides of the ledger. There are more possible better futures, and more possible worse futures. Techno-optimists tend to exaggerate the former; techno-pessimists, the latter.

Second, it is worth reflecting on the importance of “thinking in bets” when it comes to how we navigate the set of future possibilities. Since we rarely have perfect control over the future, and since there is much that is uncertain about the unfolding of events, we have to play the odds and hedge our bets, rather than fixate on getting things “right”. Those who are more attuned to this style of thinking will tend to do better, at least in the long run. But, again, this is often easier said than done because it requires a more reflective and detached outlook on what happens as a result of any one decision.
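To make the idea concrete, here is a minimal sketch (my own illustration, not anything from the decision-theory literature this paragraph gestures at; the probabilities and payoffs are invented) of how weighing the odds differs from fixating on the best-case outcome:

```python
# Illustrative only: comparing two choices by expected value rather than by
# their best possible outcome. All numbers are made up for the example.

# Each option is a list of (probability, payoff) pairs over possible futures.
safe_bet = [(0.9, 10), (0.1, -5)]      # modest upside, small downside
long_shot = [(0.1, 100), (0.9, -20)]   # spectacular best case, likely loss

def expected_value(outcomes):
    return sum(p * payoff for p, payoff in outcomes)

print(expected_value(safe_bet))    # 8.5
print(expected_value(long_shot))   # -8.0: the better best-case is the worse bet
```

The point is not that the numbers are ever this clean in real life; it is that ranking options by their full distribution of outcomes, rather than by the single outcome you are hoping for, is what “thinking in bets” amounts to.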

Finally, we have to think about death. Death is, for each individual, the end of all possibilities. It has an interesting effect on the space of possible lives. Once you die, the network of possible lives you could have lived or might yet live vanishes. All the branches are pruned away. All that is left is one solid line through the space of possibility. This line represents the actual life you lived. What trajectory does that line take through the space of possibility? Does it veer upwards or downwards (relative to the dimension of betterness or worseness)? Does it end on a high or low? Although I am somewhat sceptical of our capacity to control the total narrative of our lives, I do think it is worth thinking, occasionally, about the overall shape we would like our lives to have. Maintaining a gently sloping upward trajectory seems like more of a recipe for happiness than riding a roller-coaster of emotional highs and lows.


5. Conclusion
So where does that leave us? I hope I have said enough to convince you that thinking in terms of possible lives is central to the well-lived life. I also hope I have said enough to convince you that there is no simple algorithm you can apply to this task. You might suppose that you can thrive by not dwelling on how things might have been better in the past, and by thinking more about how they might be better in the future (and, in particular, about how you might make them better). And I am sure that this simple heuristic might work in some cases. But things are not that straightforward. You have to learn from past mistakes and embrace some feelings of regret. You have to choose the wisest framing of the future possibility space to make the best choices. There is no one-size-fits-all approach that will guarantee success and happiness.

You might still argue that all of this is trivial and unhelpful. Maybe that is so, but I still maintain my opening position that there is something profound about the idea. Thinking in terms of possible lives integrates and unites many different fields of philosophical inquiry. It integrates concerns about probability and risk, technology and futurism, the philosophy of the emotions, and the tension between optimism and pessimism. It allows us to reconceive and approach all these debates under the same unifying perspective. That seems pretty insightful to me.




Friday, April 26, 2019

Who Should Explore Space: Robots or Humans?




Should humans explore the depths of space? Should we settle on Mars? Should we become a “multi-planetary species”? There is something in the ideal of human space exploration that stirs the soul, that speaks to a primal instinct, that plays upon the desire to explore and test ourselves to the limit. At the same time, there are practical reasons to want to take the giant leap. Space is filled with resources (energy, minerals etc) that we can utilise, and threats we must neutralise (solar flares, asteroids etc).

On previous occasions, I have looked at various arguments defending the view that we ought to explore space. Those arguments fall into three main categories: (i) intellectual arguments, i.e. ones that focus on the intellectual and epistemic benefits of exploring space and learning more about our place within it; (ii) utopian/spiritual arguments, i.e. ones that focus on the need to create a dynamic, open-ended and radically better future for humanity, both for moral and personal reasons; and (iii) existential risk arguments, i.e. ones that focus on the need to explore space to both prevent and avoid existential risks to humanity.

For the purposes of this article, let’s assume that these arguments are valid. In other words, let’s assume that they do indeed provide compelling reasons to explore space. Now, let’s ask the obvious follow-up question: does this mean that humans should be the ones doing the exploring? It is already the case that robots (broadly conceived) do most of the space exploration. There are a handful of humans who have made the trip. But since the end of the Apollo missions in the early 1970s, humans have not gone much further than low earth orbit. For the most part, humans sit back on earth and control the machines that do the hard work. Soon, given improvements in AI and autonomous robots, we may not do much controlling either. We may just sit back and observe.

Should this pattern continue? Is space exploration, like so many other things nowadays, something that is best left to the machines? In this article, I want to try to answer that question. I do so with the help of an article written by Keith Abney entitled “Robots and Space Ethics”. As we will see, Abney thinks that, with one potentially significant exception, we really should leave space exploration to the machines. Indeed, we might be morally obligated to do so. I’m sympathetic to what Abney has to say, but I still hold some hope for human space exploration.


1. Robots do it Better: Against Human Space Exploration
Why should we favour robotic space exploration over human space exploration? As you might imagine, the case is easy to state: robots are better at it. They are less biologically vulnerable. They do not depend on oxygen, or food, or water, or a delicate symbiotic relationship with a group of specially-evolved microorganisms, for their survival. They are less at risk from exposure to harmful solar radiation; they are less at risk from infection by alien microorganisms (a major plot point in H.G. Wells’s famous novel War of the Worlds). In addition to this, and as Abney documents, there are several major health risks and psychological risks suffered by astronauts that can be avoided through the use of robotic explorers (though he notes that the small number of astronauts makes studies of these risks somewhat dubious).

This is not to say that robots have no vulnerabilities and cannot be damaged by space exploration. They obviously can. Several space probes have been damaged beyond repair trying to land on alien worlds. They have also been harmed by space debris and suffered irreparable harm due to general wear and tear. However, the problems encountered by these space probes just serve to highlight the risk to humans. It’s bad enough that probes have been catastrophically damaged trying to land on Mars, but imagine if it were a crew of humans. The space shuttle fatalities were major tragedies. They sparked rounds of recrimination and investigation. We don't want a repeat. All of this makes human space exploration both high risk and high cost. If we grant that humans are morally significant in a way that robots are not, then the costs of human space exploration would seem to significantly outweigh the benefits.

But how does this reasoning stack up against the arguments in favour of space exploration? Let’s start with the intellectual argument. The foremost defender of this argument is probably Ian Crawford. Although Crawford grants that robots are central to space exploration nowadays, he suggests that human explorers have advantages over robotic explorers. In particular, he suggests that there are kinds of in-person observation and experimentation that would be possible if humans were on space missions that just aren’t possible at the moment with robots. He also argues, more interestingly in my opinion, that space exploration would enhance human art and culture by providing new sources of inspiration for human creativity, and would also enhance political and ethical thinking because of the need to deal with new challenges and forms of social relation (for full details, see my summary here).

Although Abney does not respond directly to Crawford’s argument, he makes some interesting points that could be construed as a response. First, he highlights the fact that speculations about the intellectual value of human space exploration risk ignoring the fact that robots are already the de facto means by which we acquire knowledge of space. In other words, they risk ignoring the fact that without them, we would not have been able to learn as much about space as we have. Why would we assume that this trend will not continue? Second, he argues that claims to the effect that humans might be better at certain kinds of scientific investigation are usually dependent on the current limitations of robotic technology. As robotic technology improves, it’s quite likely that robots will be able to perform the kinds of investigations that we currently believe are only possible with human beings. We already see this happening here on Earth with more advanced forms of AI and robotics; it stands to reason that these advanced forms of AI can be used for space exploration too.

The bottom line, then, is that if our reasons for going to space are largely intellectual — i.e. to learn more about the cosmos and our place within it — then robots are the way to go. That said, there is nothing in what Abney says that deals with Crawford’s point about the intellectual gains in artistic, ethical and political thought. To appreciate those gains, it seems like it would have to be humans, not robots, that do the exploration. Perhaps one could respond to this by saying that some of these gains (most obviously the artistic ones) could come from watching and learning from robotic space missions; or that these intellectual gains are too nebulous or vague (what counts as an artistic gain?) to carry much weight; or that they come with significant risks that outweigh any putative benefits. For example, Crawford is probably correct to suggest that space exploration will prompt new ethical thinking, but that may largely be because it is so risky. Should we want to expose ourselves to those risks just so that philosophers can get their teeth into some new ethical dilemmas?

Let’s turn next to the more spiritual/utopian argument for space exploration. That argument focuses on the appeal of space exploration to the human spirit and the role that it could play in opening up the possibility of a dynamic and radically better future. Instead of being consigned to Earth, to tend the museum of human history (to co-opt Francis Fukuyama’s evocative phrase), we can forge a new future in space. We can expand the frontiers of human possibility.

This argument, much more so than the intellectual argument, seems to necessitate human participation in space exploration. Abney almost concedes as much in his analysis, but makes a few interesting points by way of response. First, he suggests that the appeal to the human spirit could be addressed by space 'tourism' and not space 'exploration'. In other words, we could look on human space travel as a kind of luxury good, and not something that we need to invest a lot of public money in. The public money, if it should go anywhere, should go to robotic space exploration only. Second, and relatedly, given the high cost of human space travel, any decision to invest money in it would have to factor in the significant opportunity cost of that investment. In other words, it would have to acknowledge that there are other, better, causes in which to invest. It would, consequently, be difficult to morally justify the investment. Third, he argues that, to the extent that human participation is deemed desirable, we should participate remotely, through immersive VR. This would be a lower cost and lower risk way for vulnerable beings like us to explore the further reaches of space.

I find this last suggestion intriguing. I imagine the idea is that we can satisfy our lust for visiting alien worlds or travelling to distant galaxies by using robotic avatars. We can hook ourselves up to these avatars using VR headsets and haptics, and really immerse ourselves in the space environment at minimal risk to our health and well-being. I agree that this would be a good way to do it, if it were feasible. That said, the technical challenges could be formidable. In particular, I think the time-lag between sending and receiving a signal between yourself and your robotic avatar would make it practically unwieldy. In the end, we might end up with little more than an immersive but largely passive space simulator. That doesn’t seem all that exciting.


2. The Interstellar Doomsday Argument
I mentioned at the outset that despite favouring robotic space exploration, Abney does think that there is one case in which human exploration might be morally compelling, namely: to avoid existential risk.

To be clear, Abney argues that robots can help us to mitigate many existential risks. For example, we could use autonomous robots to monitor and neutralise potential asteroid impacts, or to reengineer the climate in order to mitigate climate change. Nevertheless, he accepts that there is always the chance that these robotic efforts might fail (e.g. a rogue asteroid might slip through our planetary defence system) and Earth might get destroyed. What then? Well, if we had a human colony on another planet (or on an interstellar spaceship) there would be a chance of long-term human survival. Granting that we have a moral duty to prevent the destruction of our species, it consequently seems to follow that we have a duty to invest in at least some human space exploration.

What’s more, Abney argues that we may have to do this sooner rather than later. This is where he makes his most interesting argument, something he calls the “Interstellar Doomsday Argument”. It applies the now-classic probability argument for “Doom Soon” to our thinking about the need for interstellar space exploration. The argument takes a bit of effort to understand, but it is worth it.

The classic Doomsday Argument, defended first by John Leslie and then championed by Nick Bostrom and others, claims that human extinction might be much closer in the future than we think. The argument works from some plausible initial assumptions and then applies to those assumptions some basic principles drawn from probability theory. I’m not going to explain the full thing (there are some excellent online primers about it, if you are interested) but I will give the gist of it. The idea is that, if you have no other background knowledge to tell you otherwise, you should assume that you are a randomly distributed member of the total number of humans that will ever live (this is the Copernican assumption or "self-sampling assumption"). You should also assume, if you have no background knowledge to tell you otherwise, that the distribution of the total number of humans that will ever live will follow a normal pattern. From this, you can conclude that you are highly unlikely to be at the extreme ends of the distribution (i.e. very near the start of the sequence of all humans; or very near the end). You can also conclude that there is a highly probable upper limit on the total number of people who will ever live. If you play around with some of the background knowledge about the total human population to date and its distribution, you can generate reasonably pessimistic conclusions about how soon human extinction is likely to be.
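To give a feel for the self-sampling logic, here is a quick Monte Carlo sketch (my own illustration, using the Gott-style rank reasoning that comes up next; the population scale is invented):

```python
import random

# Sketch of the self-sampling idea: if your birth rank is a uniformly random
# draw from all humans who will ever live, then with 95% confidence you are
# not in the first 5% of the sequence, so the total should not exceed
# 20 times your rank. We check how often that bound actually holds.

random.seed(0)
trials, hits = 100_000, 0
for _ in range(trials):
    total = random.randint(1, 10**6)  # unknown total number of humans (invented scale)
    rank = random.randint(1, total)   # your birth rank, sampled uniformly
    if total <= 20 * rank:            # the inferred upper bound holds
        hits += 1

print(hits / trials)  # ~0.95
```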

That’s the gist of the original Doomsday Argument. Abney uses a variant on it, first set out by John Richard Gott in a paper in the journal Nature. Gott’s argument, using the standard tools of probability theory, applies to the observation of all temporally distributed phenomena, not just one’s distribution within the total population of humans who will ever live. The argument (called the “Delta t” argument) states that:

Gott’s Delta t Argument “[I]f there is nothing special about one’s observation of a phenomenon, one should expect a 95% probability that the phenomenon will continue for between 1/39 times and 39 times its present duration, as there’s only a 5% possibility that your random observation comes in the first 2.5% of its lifetime, or the last 2.5%” 
(Abney 2017, 364).

Gott originally used his argument to make predictions about how long the Berlin Wall was likely to stand (given the point in time at which he visited it), and how long a Broadway show was likely to remain open (given the point in time at which he watched it). Abney uses the argument to make predictions about how long humanity is likely to last as an interstellar species.

Abney starts with the observation that humanity first became an interstellar species sometime in August 2012. That was when the Voyager 1 probe (first launched in the 1970s) exited our solar system and entered interstellar space. Approximately seven years have elapsed since then (I’m writing this in 2019). Assuming that there is nothing special about the point in time at which I am “observing” Voyager 1’s interstellar journey, we can apply the Delta t argument and conclude that humanity’s status as an interstellar species is likely to last between (1/39 x 7 years) and (39 x 7 years). That means that there is a 95% chance that we have only got between 66 days and 273 years left of interstellar existence.
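For transparency, here is that arithmetic spelled out in a short Python sketch (a back-of-the-envelope check; nothing here comes from Abney’s paper beyond the numbers quoted above):

```python
# Gott's Delta t interval: an observation made at a uniformly random fraction f
# of a phenomenon's lifetime satisfies future/past = (1 - f)/f. Restricting f
# to the middle 95% (0.025 <= f <= 0.975) bounds the future duration between
# 1/39 and 39 times the present duration.

present_duration_years = 7  # Voyager 1 entered interstellar space in August 2012; "observed" in 2019

lower_years = present_duration_years / 39
upper_years = present_duration_years * 39

print(f"Lower bound: {lower_years * 365.25:.0f} days")  # ~66 days
print(f"Upper bound: {upper_years:.0f} years")          # 273 years
```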

That should be somewhat alarming. It means that we don’t have as long as we might think to escape our planet and address the existential risks of staying put. In fact, the conclusion becomes more compelling (and more alarming) if we combine the Doomsday argument with thoughts about the Great Silence and the Great Filter.

The Great Silence is the concern, first set out by Enrico Fermi, about the apparent absence of intelligent alien life in our galaxy. Fermi’s point was that if there is intelligent life out there, we would expect to have heard something from it by now. The universe is a big place, but it has existed for a long time, and if an intelligent species has any desire to explore it, it would have had ample time to do so by now. This has since been supported by calculations showing that if an intelligent species used robotic probes to explore the universe (specifically, if it used self-replicating Von Neumann probes) then it would only take a few hundred million years to ensure that every solar system had at least one such probe in it.

The Great Filter is the concern, first set out by Robin Hanson, about what it is that prevents intelligent species from exploring the universe and making contact with us. Working off Fermi’s worries about the Great Silence, Hanson argued that if intelligent life has not made contact with us yet (or left some sign or indication of its existence) then it must be because there is some force that prevents it from doing so. Either species tend not to evolve to the point that their intelligence enables them to explore space, or they destroy themselves when they reach a point of technological sophistication, or they just don’t last very long when they reach the interstellar phase (there are other possibilities too).

Whatever the explanation of the Great Silence and the Great Filter, the fact that there do not appear to be any other interstellar species, and that we do not know why, should give us reason to think that our current interstellar status will be short-lived. That might tip the balance in favour of human space exploration.

Before closing, it is worth noting that Doomsday reasoning of the sort favoured by Abney is not without its critics. Several people have challenged and refined Gott’s argument over the years, and Olle Häggström, in his 2016 book Here Be Dragons, argued that the Doomsday argument is fallacious and an unfortunate blight on futurist thinking.




Thursday, April 25, 2019

#58 - Neely on Augmented Reality, Ethics and Property Rights



In this episode I talk to Erica Neely. Erica is an Associate Professor of Philosophy at Ohio Northern University specializing in philosophy of technology and computer ethics. Her work focuses on the ethical ramifications of emerging technologies. She has written a number of papers on 3D printing, the ethics of video games, robotics and augmented reality. We chat about the ethics of augmented reality, with a particular focus on property rights and the problems that arise when we blend virtual and physical reality together in augmented reality platforms.

You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher and a variety of other services (the RSS feed is here).



Show Notes

  • 0:00 - Introduction
  • 1:00 - What is augmented reality (AR)?
  • 5:55 - Is augmented reality overhyped?
  • 10:36 - What are property rights?
  • 14:22 - Justice and autonomy in the protection of property rights
  • 16:47 - Are we comfortable with property rights over virtual spaces/objects?
  • 22:30 - The blending problem: why augmented reality poses a unique problem for the protection of property rights
  • 27:00 - The different modalities of augmented reality: single-sphere or multi-sphere?
  • 30:45 - Scenario 1: Single-sphere AR with private property
  • 34:28 - Scenario 2: Multi-sphere AR with private property
  • 37:30 - Other ethical problems in scenario 2
  • 43:25 - Augmented reality vs imagination
  • 47:15 - Public property as contested space
  • 49:38 - Scenario 3: Multi-sphere AR with public property
  • 54:30 - Scenario 4: Single-sphere AR with public property
  • 1:00:28 - Must the owner of the single-sphere AR platform be regulated as a public utility/entity?
  • 1:02:25 - Other important ethical issues that arise from the use of AR

Relevant Links

 

Sunday, April 21, 2019

Understanding Hume on Miracles (Audio Essay)




This audio essay is an Easter special. It focuses on David Hume's famous argument about miracles. First published over 250 years ago, Hume's essay 'Of Miracles' purports to provide an "everlasting check" against all kinds of "superstitious delusion". But is this true? Does Hume give us good reason to reject the testimonial proof provided on behalf of historical miracles? Maybe not, but he certainly provides a valuable framework for thinking critically about this issue.

You can download the audio here or listen below. You can also subscribe on Apple, Stitcher and a variety of other podcatching services (the RSS feed is here).



This audio essay is based on an earlier written essay (available here). If you are interested in further reading about the topic, I recommend the following essays:







Friday, April 19, 2019

The Ethics of Designing People: The Habermasian Critique




Suppose in the not-too-distant future we master the art of creating people. In other words, technology advances to the point that you and I can walk into a store (or go online!) and order a new artificial person from a retailer. This artificial person will be a full-blown person in the proper philosophical sense of the term “person”. They will have all the attributes we usually ascribe to a human person. They will have the capacity to suffer, to think rationally, to desire certain futures, to conceive of themselves as a single coherent self and so on. Furthermore, you and I will have the power to design this person according to our own specifications. We will be able to pick their eye colour, height, hairstyle, personality, intelligence, life preferences and more. We will be able to completely customise them to our tastes. Here’s the question: would it be ethical for us to make use of this power?

Note that for the purposes of this thought experiment it doesn’t matter too much what the artificial person is made of. It could be a wholly biological entity, made from the same stuff as any human child, but genetically and biomedically engineered according to our customisation. Or it could also be wholly artificial, made from silicon chips and motorised bits, a bit like Data from Star Trek. None of this matters. What matters is that (a) it is a person and (b) it has been custom built to order. Is it ethical to create such a being?

Some people think it wouldn’t be; some people think it would be. In this post I want to look at the arguments made by those who think it would be a bad idea to design a person from scratch in this fashion. In particular I want to look at a style of argument made popular by the German philosopher Jürgen Habermas in his critique of positive eugenics. According to this argument, you should not design a person because doing so would necessarily compromise the autonomy and equality of that person. It would turn them into a product not a person; an object not a subject.

Although this argument is Habermasian in origin, I’m not going to examine Habermas’s version of it. Instead, I’m going to look at a version of it that is presented by the Polish philosopher Maciej Musial in his article “Designing (artificial) people to serve - the other side of the coin”. This is an interesting article, one that responds to an argument from Steve Petersen claiming that it would be permissible to create an artificial person who served your needs in some way. I’ve covered Petersen’s argument before on this blog (many moons ago). Some of what Musial says about Petersen’s argument has merit to it, but I want to skirt around the topic of designing robot servants (who are still persons) and focus on the more general idea of creating persons.


1. Clarifying the Issue: The “No Difference” Argument
To understand Musial’s argument, we have to understand some of the dialectical context in which it is presented. As mentioned, it is a response to Steve Petersen’s claim that it is okay to create robot persons that serve our needs. Without going into all the details of Petersen’s argument, one of the claims that Petersen makes while defending this view is that there is no important difference between programming or designing an artificial person to really want to do something and having such a person come into existence through a process of natural biological conception and socialisation.

Why is that? Petersen makes a couple of points. First, he suggests that there is no real difference between being born by natural biological means and being programmed/designed by artificial means. Both processes entail a type of programming. In the former case, evolution by natural selection has “programmed” us, indirectly and over a long period of time, with a certain biological nature; in the latter case, the programming is more immediate and direct, but it is fundamentally the same thing. This analogy is not ridiculous. Some people — notably Daniel Dennett in his book Darwin’s Dangerous Idea — have argued that evolution is an algorithmic process, very much akin to computer programming, that designs us to serve certain evolutionary ends; and, furthermore, evolutionary algorithms are now a common design strategy in computer programming.

The other point Petersen makes is that there is no real difference between being raised by one’s parents and being intentionally designed by them. Both processes have goals and intentions behind them. Parents often want to raise their children in a particular way. For example, some parents want their children to share their religious beliefs, to follow very specific career paths, and to have the success that they never had. They will take concrete steps to ensure that this is the case, bringing their children to church every week, giving them the best possible education, and (say) training them in the family business. These methods of steering a child’s future have their limitations, and might be a bit haphazard, but they do involve intentional design (even if parents deny this). All Petersen is imagining is that different methods, aimed at the same outcome, become available. Since both methods have the same purpose, how could they be ethically different?

To put this argument in more formal terms:


  • (1) If there is no important difference between (i) biologically conceiving and raising a natural person and (ii) designing and programming an artificial person, then one cannot object to the creation of an artificial person on the grounds that it involves designing and programming them in particular ways.

  • (2) There is no important difference between (i) and (ii) (following the arguments just given)

  • (3) Therefore, one cannot object to the creation of artificial persons on the grounds that it involves designing and programming them in particular ways.


To be clear, there are many other ethical objections that might arise in relation to the creation of artificial persons. Maybe it would be too expensive? Maybe their presence would have unwelcome consequences for society? Some of these are addressed in Petersen’s original article and Musial’s response. I am not going to get into them here. I am solely interested in this “no difference” argument.


2. The Habermasian Response: There is a difference
The Habermasian response to this argument takes aim at premise (2). It rests on the belief that there are several crucial ethical differences between the two processes. Musial develops this idea by focusing in particular on how being designed changes one’s relationship with oneself, one’s creators, and the rest of society.

Before we look at his specific claims it is worth reflecting for a moment on the kinds of differences he needs to pinpoint in order to undermine the “no difference”-argument. It’s not just any difference that will do. After all, the processes are clearly different in many ways. For example, one thing that people often point to is that biological conception and parental socialisation are somewhat contingent and haphazard processes over which parents have little control. In other words, parents may desire that their children turn out a particular way, but they cannot guarantee that this will happen. They have to play the genetic and developmental lottery (indeed, there is even a well-known line of research suggesting that beyond genetics parents contribute little to the ultimate success and happiness of their children).

That’s certainly a difference, but it is not the kind of difference you need to undermine the “no difference” argument. Why not? Because it is not clear what its ethical significance is. Does a lack of control make one process more ethically acceptable than another? On the face of it, it’s not obvious that it does. If anything, one might suspect the ethical acceptability runs in the opposite direction. Surely it is ethically reckless to just run the genetic and developmental lottery and hope that everything turns out for the best? For contingency and lack of control to undermine the “no difference” argument, it will need to be shown that they translate into some other ethically relevant difference. Do they?

In his article, Musial highlights two potentially relevant differences that they might translate into. The first has to do with the effects of being designed and programmed on a person’s sense of autonomy. The gist of this argument is that if one person (or a group of persons) designs another person to have certain capacities or to serve certain ends, then that other person cannot really be the autonomous author of their own life. They must live up to someone else’s expectations and demands.

Of course, someone like Petersen would jump back in at this point and say that this can happen anyway with traditional parental education and socialisation. Parents can impose their own expectations and demands on their children and their children can feel a lack of autonomy as a result. Despite this, we don’t think that traditional parenting is ethically impermissible (though I will come back to this issue again below).

But Musial argues that this does not compare like with like. The expectations and demands of traditional parenting usually arise after the child has “entered the world of intersubjective dialogue”. In other words, a natural child can at least express its own wishes and make its feelings known in response to parental education and socialisation. It can reject the parental expectations if it wishes (even if that makes its life difficult in other ways). Similarly, even if the child does go along with the parental expectations, it can learn to desire the things the parents desire for it and to achieve the things they wish it to achieve. This is very different from having those desires and expectations pre-programmed into the child before it is born through genetic manipulation or biomedical engineering. It is much harder to reject those pre-programmed expectations because of the way in which they are hardwired in.

It might be objected at this juncture that even biological children will have some genetic endowments that they do not like and find hard to reject. For example, I am shorter than I would like to be. I am sure this is a result of parental genetics. I don’t hold it against them or question my autonomy as a result. But Musial argues that my frustration with being shorter than I would like to be is different from the frustration that might be experienced by someone who is deliberately designed to be a particular height. In my case, it is not that my parents imposed a particular height expectation on me. They just rolled the genetic dice. In the case of someone who is designed to be a particular height, they can trace that height back to a specific parental intention. They know they are living up to someone else’s expectations in a way that I do not.

Musial’s second argument has to do with equality. The claim is that being designed and programmed to serve a particular aim (or set of aims) undermines an egalitarian ethos. Egalitarianism (i.e. the belief that all human beings are morally equal) can only thrive in a world of contingency. Indeed, in the original Habermasian presentation, the claim was that contingency is a “necessary presupposition” of egalitarian interpersonal relationships. This is because if one person has designed another there is a dependency relationship between them. The designee knows that they have been created at the whim of the designer and are supposed to serve the ends of the designer. There is a necessary and unavoidable asymmetry between them. Not only that, but the designee will also know themselves to be different from all other non-designed persons.

Musial argues that the inequality that results from the design process can be both normative and empirical in nature. In other words, the designee may be designated as normatively inferior to other people because they have been created to serve a particular end (and so do not have the open-ended freedom of everyone else); and the designee may just feel themselves to be inferior because they know they have been intended to serve an end, or may be treated as inferior by everyone else. Either way, egalitarianism suffers.

One potential objection to this line of thought would be to argue that the position of the designee in this brave new world of artificial persons is not that different from the position of all human beings under traditional theistic worldviews. Under theism, the assumption is usually that we are all designed by God. Isn't there a necessary relationship of inequality as a result? Without getting into the theological weeds, this may indeed be true, but even still there is a critical difference between being a designee under traditional theism and being a designee in the circumstances being envisaged by Musial and others. Under theism, all human persons are designees and so all share in the same unequal status with respect to the designer. That's different from a world in which some people are designed by specific others to serve specific ends and some are not. In any event, this point will only be relevant to someone who believes in traditional theism.


3. Problems with the Habermasian Critique
That’s the essence of the Habermas/Musial critique of the no difference argument. Is it any good? I have two major concerns.

The first is a general philosophical one. It has to do with the coherence of individual autonomy and freedom. One could write entire treatises on both of these concepts and still barely scratch the surface of the philosophical debate about them. Nevertheless, I worry that the Habermas/Musial argument depends on some dubious, and borderline mysterian, thinking about the differences between natural and artificial processes and their effect on autonomy. In his presentation of the argument, Musial concedes that natural forces do, to some extent, impact on our autonomy. In other words, our desires, preferences and attitudes are shaped by forces beyond our control. Still, following Habermas, he claims that “natural growth conditions” allow us to be self-authors in a way that artificial design processes do not.

I’ll dispute the second half of this claim in a moment, but for now I want to dwell on the first half. Is it really true that natural growth conditions allow us to be self-authors? Maybe if you believe in contra-causal free will (and if you believe this is somehow absent in created persons). But if you don’t, then it is hard to see how this can be true once it is conceded that external forces, including biological evolution and cultural indoctrination, have a significant impact on our aptitudes, desires and expectations. It may be true that under natural growth conditions you cannot identify a single person who has designed you to be a particular way or to serve a particular end — the causal feedback loops are a bit too messy for that — but that doesn’t make the desires that you have more authentically yours as a result. Just because you can pinpoint the exact external cause of a belief or desire in one case, but not in the other, it does not mean that you have greater self-authorship in the latter. You have an illusion of self-authorship, nothing more. Once that illusion is revealed to you, how is it any more existentially reassuring than learning that you were intentionally designed to be a particular way? If anything, we might suspect that the latter would be more existentially reassuring. At least you would know that you are not the way you are due to blind chance and dumb luck (in this respect it might be worth noting that a traditional goal of psychoanalytic therapy was to uncover the deep developmental and non-self-determined causes of your personal traits and foibles). Furthermore, in either case, it seems to me that a sense of autonomy could be sustained despite the knowledge of external causal influences. This would be true if, even having learned of the illusion, you still have the capacity for rational thought and the capacity to learn from your experiences.

This brings me to the second concern, which is more important. It has to do with the intended object or goal behind the intentional design of an artificial person. Notwithstanding my concerns about the nature of autonomy, I think the Habermas/Musial argument does provide reason to worry about the ethics of creating people to serve very specific ends. In other words, I would concede that it might be questionable to create, say, an artificial person who has been designed and programmed to really want to do your ironing. If that person is a genuine person — i.e. has the cognitive and emotional capacities we usually associate with personhood — then it might be disconcerting for them to learn that they were designed for this purpose, and this knowledge might undermine their sense of autonomy and equality.

But this is only because they have been designed to serve a very specific end. If the goal of the designer/programmer is not to create a person to serve a specific end but, rather, to design someone who has enhanced capacities for autonomous thought, then the problem goes away. In that case, the artificial person would probably be customised to have greater intelligence, learning capacity, foresight, and imagination than a natural born person, but there would be no specific end that they are intended to serve. In other words, the designer would not be trying to create someone who could do the ironing but, rather, someone who could live a rich and flourishing life, whatever they decide for themselves. I’m not a parent (yet) myself, but I imagine that this should really be the goal of ethical parenting: not to raise the next chess champion (or whatever) but to raise someone who has the capacity to decide what the good life should be for themselves. Whether that is done through traditional parenting, or through design and programming, strikes me as irrelevant.

I would add that, even in the case of a person who has been designed to serve a specific end, the Habermas/Musial argument only works on the assumption that this end is hard to reject once the person learns of it. But it is not obvious to me that this would be the case. If we have the technology to specifically design artificial people from birth, it seems likely that we would also have the technology to reprogram them in the middle of life. Consequently, someone who learns that they have been designed to serve a particular end could reject that end by having themselves reprogrammed. It’s only if you assume that this power is absent, or that designers exert continued control over the lives of the designees, that the tragedy of being designed persists.

It could be argued, in response to this, that if you are not designing an artificial person to serve a specific end, then there is no point in creating them. Musial raises this as a worry at the end of his article, when he suggests that the only ethical way to create an artificial person is not to specify any of their features. But I think this is wrong. You can specify some of their features without specifying that they serve a specific end, and if you are worried about the point of creating a person who does not serve a specific end, you may as well ask: what’s the point of creating natural persons if they don’t serve any particular ends? There are many reasons to do so. In my paper “Why we should create artificial offspring”, I argued that we might want to create artificial people in order to secure a longer collective afterlife, and because doing so would add value to our lives. Those are at least two such reasons.

This is not to say there are no problems with creating artificially designed persons. For example, I think creating an artificially enhanced person (i.e. one with capacities that exceed those of most ordinary human beings) could be problematic from an egalitarian perspective. This is not because the designee would be in an inferior position to the non-designed but rather because the non-designed might perceive themselves to be at a disadvantage relative to the designee. This has been a long-standing concern in the enhancement debate. But worrying about that takes us beyond the Habermasian critique and is something to address another day.




Friday, April 12, 2019

The Argument for Medical Nihilism




Suppose you have just been diagnosed with a rare illness. You go to your doctor and they put you through a series of tests. In the end, they recommend that you take a new drug — wonderzene — that has recently been approved by the FDA following several successful trials. How confident should you be that this drug will improve your condition?

You might think that this question cannot be answered in the abstract. It has to be assessed on a case by case basis. What is the survival rate for your particular illness? What is its underlying pathophysiology? What does the drug do? How successful were these trials? And in many ways you would be right. Your confidence in the success of the treatment does depend on the empirical facts. But that’s not all it depends on. It also depends on assumptions that medical scientists make about the nature of your illness and on the institutional framework in which the scientific evidence concerning the illness and its treatment is produced, interpreted and communicated to patients like you. When you think about these other aspects of the medical scientific process, it might be the case that you should be very sceptical about the prospects of your treatment being a success. This could be true irrespective of the exact nature of the drug in question and the evidence concerning its effectiveness.

That is the gist of the argument put forward by Jacob Stegenga in his provocative book Medical Nihilism. The book argues for an extreme form of scepticism about the effectiveness of medical interventions, specifically pharmaceutical interventions (although Stegenga intends his thesis to have broader significance). The book is a real tour-de-force in applied philosophy, examining in detail the methods and practices of modern medical science and highlighting their many flaws. It is eye-opening and disheartening, though not particularly surprising to anyone who has been paying attention to the major scandals in scientific research for the past 20 years.

I highly recommend reading the book itself. In this post I want to try to provide a condensed summary of its main argument. I do so partly to help myself understand the argument, and partly to provide a useful primer to the book for those who have not read it. I hope that reading it stimulates further interest in the topic.


1. The Master Argument for Medical Nihilism
Let’s start by clarifying the central thesis. What exactly is medical nihilism? As Stegenga notes in his introductory chapter, “nihilism” is usually associated with the view that “some particular kind of value, abstract good, or form of meaning” does not exist (Stegenga 2018, 6). Nihilism comes in both metaphysical and epistemological flavours. In other words, it can be understood as the claim that some kind of value genuinely does not exist (the metaphysical thesis) or that it is impossible to know/justify one’s belief in its existence (the epistemological thesis).

In the medical context, nihilism can be understood relative to the overarching goals of medicine. These goals are to eliminate both the symptoms of disease and, hopefully, the underlying causes of disease. Medical nihilism is then the view that this is (very often) not possible and that it is very difficult to justify our confidence in the effectiveness of medical interventions with respect to those goals. For what it’s worth, I think that the term ‘nihilism’ oversells the argument that Stegenga offers. I don’t think he quite justifies total nihilism with respect to medical interventions; though he does justify strong scepticism. That said, Stegenga uses the term nihilism to align himself with 19th century medical sceptics who adopted a view known as ‘therapeutic nihilism’ which is somewhat similar to the view Stegenga defends.

Stegenga couches the argument for medical nihilism in Bayesian terms. If that’s something that is unfamiliar to you, then I recommend reading one of the many excellent online tutorials on Bayes’ Theorem. Very roughly, Bayes’ Theorem is a mathematical formula for calculating the posterior probability of a hypothesis or theory (H) given some evidence (E). Or, to put it another way, it is a formula for calculating how confident you should be in a hypothesis given that you have received some evidence that appears to speak in its favour (or not, as the case may be). This probability can be written as P(H|E) — which reads in English as “the probability of H given E”. There is a formal derivation of Bayes’ Theorem that I will not go through. For present purposes, it suffices to know that the P(H|E) depends on three other probabilities: (i) the prior probability of the hypothesis being true, irrespective of the evidence (i.e. P(H)); (ii) the probability (aka the “likelihood”) of the evidence given the hypothesis (i.e. P(E|H)); and (iii) the prior probability of the evidence, irrespective of the hypothesis (i.e. P(E)). This can be written out as an equation, as follows:

P(H|E) = P(H) x P(E|H) / P(E)*

In English, this equation states that the probability of the hypothesis given the evidence is equal to the prior probability of the hypothesis, multiplied by the probability of the evidence given the hypothesis, divided by the prior probability of the evidence.

This equation is critical to understanding Stegenga’s argument because, without knowing any actual figures for the relevant probabilities, you know from the equation itself that the P(H|E) must be low if the following three conditions are met: (i) the P(H) is low (i.e. if it is very unlikely, irrespective of the evidence, that the hypothesis is true); (ii) the P(E|H) is low (i.e. the evidence observed is not very probable given the hypothesis); and (iii) the P(E) is high (i.e. it is very likely that you would observe the evidence irrespective of whether the hypothesis was true or not). To confirm this, just plug figures into the equation and see for yourself.
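
To see how this works in a concrete (if entirely invented) case, here is a minimal Python sketch. The probability values are made up purely for illustration; Stegenga does not commit to specific numbers:

```python
def posterior(p_h, p_e_given_h, p_e):
    """Bayes' theorem: P(H|E) = P(H) * P(E|H) / P(E)."""
    return p_h * p_e_given_h / p_e

# Invented values matching the three conditions: (i) low prior,
# (ii) low likelihood, (iii) high prior probability of the evidence.
p_h = 0.1          # prior probability that the treatment is effective
p_e_given_h = 0.3  # probability of the observed evidence if it is effective
p_e = 0.8          # probability of observing favourable evidence regardless

print(posterior(p_h, p_e_given_h, p_e))  # 0.0375
```

Even though the evidence superficially “supports” the hypothesis, the posterior stays very low, because favourable evidence was likely to turn up whether or not the treatment worked.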

That’s all the background on Bayes’ theorem that you need to understand Stegenga’s case for medical nihilism. In Stegenga’s case, the hypothesis (H) in which we are interested is the claim that any particular medical intervention is effective, and the evidence (E) in which we are interested is anything that speaks in favour of that hypothesis. So, in other words, we are trying to figure out how confident we should be about the claim that the intervention is effective given that we have been presented with evidence that appears to support its effectiveness. We calculate that using Bayes’ theorem and we know from the preceding discussion that our confidence should be very low if the three conditions outlined above are met. These three conditions thus form the premises of the following formal argument in favour of medical nihilism.


  • (1) P(H) is low (i.e. the prior probability of any particular medical intervention being effective is low)
  • (2) P(E|H) is low (i.e. the evidence observed is unlikely given the hypothesis that the medical intervention is effective)
  • (3) P(E) is high (i.e. the prior probability of observing evidence that favours the treatment, irrespective of whether the treatment is actually effective, is high)
  • (4) Therefore (by Bayes’ theorem) the P(H|E) must be low (i.e. the posterior probability of the medical intervention being successful, given evidence that appears to favour it, is low)




The bulk of Stegenga’s book is dedicated to defending the three premises of this argument. He dedicates most attention to defending premise (3), but the others are not neglected. Let’s go through each of them now in more detail. Doing so should help to eliminate lingering confusions you might have about this abstract presentation of the argument.


2. Defending the First Premise: The P(H) is Low
Stegenga offers two arguments in support of the claim that medical interventions have a low prior probability of success. The first argument is relatively straightforward. We can call it the argument from historical failure. This argument is an inductive inference from the fact that most historical medical interventions are unsuccessful. Stegenga gives many examples. Classic ones would include the use of bloodletting and mercury to cure many illnesses, “hydropathy, tartar emetic, strychnine, opium, jalap, Daffy’s elixir, Turlington’s Balsam of life” and many more treatments that were once in vogue but have now been abandoned (Stegenga 2018, 169).

Of course, the problem with focusing on historical examples of this sort is that they are often dismissed by proponents of the “standard narrative of medical science”. This narrative runs like this: “once upon a time, it is true, most medical interventions were worse than useless, but then, sometime in the 1800s, we discovered scientific methods and things started to improve”. This is taken to mean that you can’t use these historical examples to question the prior probability of modern medical treatments.

Fortunately, you don’t need to. Even in the modern era most putative medical treatments are failures. Drug companies try out many more treatments than ever come to market, and among those that do come to market, a large number end up being withdrawn or restricted due to their relative uselessness or, in some famous cases, outright dangerousness. Stegenga gives dozens of examples on pages 170-171 of his book. I won’t list them all here but I will give a quick flavour of them (if you click on the links, you can learn more about the individual cases). The examples of withdrawn or restricted drugs include: isotretinoin, rosiglitazone, valdecoxib, fenfluramine, sibutramine, rofecoxib, cerivastatin, and nefazodone. The example of rofecoxib (marketed as Vioxx) is particularly interesting. It is a pain relief drug, usually prescribed for arthritis, that was approved in 1999 but then withdrawn due to associations with increased risk of heart attack and stroke. It had been prescribed to more than 80 million people while it was on the market (there is some attempt to return it to market now). And, again, that is just one example among many. Other prominent medical failures include monoamine oxidase inhibitors, which were widely prescribed for depression in the mid-20th century, only later to be abandoned due to ineffectiveness, and hormone replacement therapy (HRT) for menopausal women.

These many examples of past medical failure, even in the modern era, suggest that it would be wise to assign a low prior probability to the success of any new treatment. That said, Stegenga admits that this is a suggestive argument only since it is very difficult to give an accurate statement of the ratio of effective to ineffective treatments from this data (one reason for this is that it is difficult to get a complete dataset and the dataset that we do have is subject to flux, i.e. there are several treatments that are still on the market that may soon be withdrawn due to ineffectiveness or harmfulness).

Stegenga’s second argument for assigning a low prior probability to H is more conceptual and theoretical in nature. It is the argument from the paucity of magic bullets. Stegenga’s book isn’t entirely pessimistic. He readily concedes that some medical treatments have been spectacular successes. These include the use of antibiotics and vaccines for the treatment of infectious diseases and the use of insulin for the treatment of diabetes. One property shared by these successful treatments is that they tend to be ‘magic bullets’ (the term comes from the chemist Paul Ehrlich). What this means is that they target a very specific cause of disease (e.g. a virus or bacterium) in an effective way (i.e. they can eliminate/destroy the specific cause of disease without many side effects).

Magic bullets are great, if we can find them. The problem is that most medical interventions are not magic bullets. There are three reasons for this. First, magic bullets are the “low-hanging fruit” of medical science: we have probably discovered most of them by now and so we are unlikely to find new ones. Second, many of the illnesses that we want to treat have complex, and poorly understood, underlying causal mechanisms. Psychiatric illnesses are a classic example. Psychiatric illnesses are really just clusters of symptoms. There is very little agreement on their underlying causal mechanisms (though there are lots of theories). It is consequently difficult to create a medical intervention that specifically and effectively targets a psychiatric disease. This is equally true for other cases where the underlying mechanism is complex or unclear. Third, even if the disease were relatively simple in nature, human physiology is not, and the tools that we have at our disposal for intervening into human physiology are often crude and non-specific. As a result, any putative intervention might mess up the delicate chemical balancing act inside the body, with deleterious side effects. Chemotherapy is a clear example. It helps to kill cancerous cells but in the process it also kills healthy cells. This often results in very poor health outcomes for patients.

Stegenga dedicates an entire chapter of his book to this argument (chapter 4) and gives some detailed illustrations of the kinds of interventions that are at our disposal and how non-specific they often are. Hopefully, my summary suffices for getting the gist of the argument. The idea is that we should assign a low prior probability to the success of any particular treatment because it is very unlikely that the treatment is a magic bullet.


3. Defending the Second Premise: The P(E|H) is Low
The second premise claims that the evidence we tend to observe concerning medical interventions is not very likely given the hypothesis that they are successful. For me, this might be the weakest link in the argument. That may be because I have trouble understanding exactly what Stegenga is getting at, but I’ll try to explain how I think about it and you can judge for yourself whether it undermines the argument.

My big issue is that this premise, more so than the other premises, seems like one that can really only be determined on a case-by-case basis. Whether a given bit of evidence is likely given a certain hypothesis depends on what the evidence is (and what the hypothesis is). Consider the following three facts: the fact that you are wet when you come inside the house; the fact that you were carrying an umbrella with you when you did; and the fact that you complained about the rain when you spoke to me. These three facts are all pretty likely given the hypothesis that it is raining outside (i.e. the P(E|H) is high). The facts are, of course, consistent with other hypotheses (e.g. that you are a liar/prankster and that you dumped a bucket of water over your head before you came in the door) but that possibility, in and of itself, doesn’t mean the likelihood of observing the evidence that was observed, given the hypothesis that it is raining outside, is low. It seems like the magnitude of the likelihood depends specifically on the evidence observed and how consistent it is with the hypothesis. In our case, we are assuming that the hypothesis is the generic statement that the medical intervention is effective, so before we can say anything about the P(E|H) we would really need to know what the evidence in question is. In other words, it seems to me like we would have to “wait and see” what the evidence is before concluding that the likelihood is low. Otherwise we might be conflating the prior probability of an effective treatment (which I agree is low) with the likelihood.
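
To make the point about evidence-specific likelihoods concrete, here is a toy calculation for the rain case (all numbers invented): the posterior for the rain hypothesis is driven up precisely because this particular evidence is far more likely under rain than under its rivals, and no generic claim about likelihoods would tell you that in advance.

```python
# Toy illustration: the likelihood depends on the specific evidence observed.
p_rain = 0.3                 # prior: it is raining outside
p_wet_given_rain = 0.9       # you come in wet, given rain
p_wet_given_no_rain = 0.05   # you come in wet, given no rain (e.g. a prank)

# Total probability of coming in wet, by the law of total probability.
p_wet = p_wet_given_rain * p_rain + p_wet_given_no_rain * (1 - p_rain)

posterior_rain = p_wet_given_rain * p_rain / p_wet
print(round(posterior_rain, 3))  # 0.885: a high likelihood drives the posterior up
```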

Stegenga’s argument seems to be that we can say something generic about the likelihood given what we know about the evidential basis for existing interventions. He makes two arguments in particular about this. First, he argues that in many cases the best available medical evidence suggests that many interventions are little better than placebo when it comes to ameliorating disease. In other words, patients who take an intervention usually do little better than those who take a placebo. This is an acknowledged problem in medicine, sometimes referred to as medicine’s “darkest secret”. He gives detailed examples of this on pages 171 to 175 of the book. For instance, the best available evidence concerning the effectiveness of anti-depressants and cholesterol-lowering drugs (statins) suggests they have minimal positive effects. That is not the kind of evidence we would expect to see on the hypothesis that the treatments are effective.

The second argument he makes is about discordant evidence. He points out that in many cases the evidence for the effectiveness of existing treatments is a mixed bag: some high quality studies suggest positive (if minimal) effects; others suggest there is no effect; and others suggest that there is a negative effect. Again, this is not the kind of evidence we would expect to see if the intervention is effective. If the intervention were truly effective, surely there would be a pronounced positive bias in the total set of evidence? Stegenga goes into some of the technical reasons why this argument from discordant evidence is correct, but we don’t need to do that here. This description of the problem should suffice.
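
To see why a mixed bag of results is bad news for the effectiveness hypothesis, it may help to compute how probable such a spread would be if the treatment really worked. A toy multinomial sketch (the per-study probabilities and study counts are invented, not from the book):

```python
from math import factorial

def multinomial(counts, probs):
    """Probability of observing the given counts of outcomes, where each
    study lands in category i with probability probs[i]."""
    coeff = factorial(sum(counts))
    for c in counts:
        coeff //= factorial(c)
    p = float(coeff)
    for c, pr in zip(counts, probs):
        p *= pr ** c
    return p

# If the treatment truly works, suppose each high-quality study shows a
# positive / null / negative result with these (invented) probabilities:
probs = [0.8, 0.15, 0.05]

# A discordant record: 4 positive, 3 null, 3 negative out of 10 studies.
print(multinomial([4, 3, 3], probs))  # ~0.0007: very unlikely if H is true
```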

I agree with both of Stegenga’s arguments, but I still have qualms about his general claim that the P(E|H) for any particular medical intervention is low. Why is this? Let’s see if I can set it out more clearly. I believe that Stegenga succeeds in showing that the evidence we do observe concerning specific existing treatments is not particularly likely given the hypothesis that those treatments are effective. That’s pretty irrefutable given the examples discussed in his book. But as I understand it, the argument for medical nihilism is a general one that is supposed to apply to any random or novel medical treatment, not a specific one concerning particular medical treatments. Consequently, I don’t see why the fact that the evidence we observe concerning specific treatments is unlikely generalises to an equivalent assumption about any random or novel treatment.

That said, my grasp of probability theory leaves a lot to be desired so I may have this completely wrong. Furthermore, even if I am right, I don’t think it undermines the argument for medical nihilism all that much. The claims that Stegenga defends about the evidential basis of existing treatments can be folded into how we calculate the prior probability of any random or novel medical treatment being successful. And it would certainly lower that prior probability.
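
For what it’s worth, here is a sketch of what “folding into the prior” might look like, using a simple beta-binomial update. The success/failure counts are hypothetical, and this is my gloss rather than anything Stegenga computes:

```python
# Treat the base rate of effective treatments as a Beta-distributed quantity
# and update it with a (hypothetical) track record of past treatments.
alpha, beta = 1, 1            # flat prior over the base rate of effectiveness
successes, failures = 5, 95   # invented record: 5 effective, 95 ineffective

# The posterior mean of the base rate becomes the prior P(H) for the
# next random or novel treatment we encounter.
p_h_new = (alpha + successes) / (alpha + beta + successes + failures)
print(round(p_h_new, 3))      # 0.059: the track record drags the prior down
```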


4. Defending the Third Premise: The P(E) is High
This is undoubtedly the most interesting premise of Stegenga’s argument and the one he dedicates the most attention to in his book (essentially all of chapters 5-10). I’m not going to be able to do justice to his defence of it here. All I can provide is a very brief overview. Still, I will try my best to capture the logic of the argument he makes.

To start, it helps if we clarify what this premise is stating. It is stating that we should expect to see evidence suggesting that an intervention is effective even if the intervention is not effective. In other words, it is stating that the institutional framework through which medical evidence is produced and communicated is such that there is a significant bias in favour of positive evidence, irrespective of the actual effectiveness of a treatment. To defend this claim Stegenga needs to show that there is something rotten at the heart of medical research.

The plausibility of that claim will be obvious to anyone who has been following the debates about the reproducibility crisis in medical science in the past decade, and to anyone who has been researching the many reports of fraud and bias in medical research. Still, it is worth setting out the methodological problems in general terms, and Stegenga’s presentation of them is one of the better ones.

Stegenga makes two points. The first is that the methods of medical science are highly malleable; the second is that the incentive structure of medical science is such that people are inclined to take advantage of this malleability in a way that produces evidence of positive treatment effects. These two points combine into an argument in favour of premise (3).

Let’s consider the first of these points in more detail. You might think that the methods of medical science are objective and scientific. Maybe you have read something about evidence based medicine. If so, you might well ask: Haven’t medical scientists established clear protocols for conducting medical trials? And haven’t they agreed upon a hierarchy of evidence when it comes to confirming whether a treatment is effective or not? Yes, they have. There is widespread agreement that randomised control trials are the gold standard for testing the effectiveness of a treatment, and there are detailed protocols in place for conducting those trials. Similarly, there is widespread agreement that you should not over-rely on one trial or study when making the case for a treatment. After all, one trial could be an anomaly or statistical outlier. Meta-analyses and systematic reviews are desirable because they aggregate together many different trials and see what the general trends in evidence are.

But Stegenga argues that this widespread agreement about evidential standards masks considerable problems with malleability. For example, when researchers conduct a meta-analysis, they have to make a number of subjective judgments about which studies to include, what weighting to give to them and how to interpret and aggregate their results. This means that different groups of researchers, conducting meta-analyses of the exact same body of evidence, can reach different conclusions about the effectiveness of a treatment. Stegenga gives examples of this in chapter 6 of the book. The same is true when it comes to conducting randomised control trials (chapter 7) and measuring the effectiveness of those trials (chapter 8). There are sophisticated tools for assessing the quality of evidence and the measures of effectiveness, but they are still prone to subjective judgment and assessment, and different researchers can apply them in different ways (more technically, Stegenga argues that the tools have poor ‘inter-rater reliability’ and poor ‘inter-tool reliability’). Again, he gives several examples of how these problems manifest in the book.
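
A toy example may help to make the malleability point vivid (the effect sizes and sample sizes below are invented): two analysts pooling exactly the same literature, but applying different inclusion criteria, can reach opposite conclusions about the direction of the effect.

```python
# Invented study-level results: small, low-quality studies with large effects
# alongside large, high-quality studies with effects near zero.
studies = [
    {"effect": 0.50, "n": 40,  "quality": "low"},
    {"effect": 0.45, "n": 50,  "quality": "low"},
    {"effect": -0.05, "n": 300, "quality": "high"},
    {"effect": 0.00, "n": 250, "quality": "high"},
]

def pooled(subset):
    """Sample-size-weighted mean effect across the included studies."""
    total_n = sum(s["n"] for s in subset)
    return sum(s["effect"] * s["n"] for s in subset) / total_n

# Analyst A includes everything; Analyst B includes high-quality trials only.
print(round(pooled(studies), 3))                                        # 0.043
print(round(pooled([s for s in studies if s["quality"] == "high"]), 3)) # -0.027
```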

The malleability of the evidential tools might not be such a problem if everybody used those tools in good faith. This is where Stegenga’s second claim — about the problem of incentives — rears its ugly head. The incentives in medical science are such that not everyone is inclined to use the tools in good faith. Pharmaceutical companies need treatments to be effective if they are to survive and make profits. Scientists also depend on finding positive effects to secure career success (even if they are not being paid by pharmaceutical companies). This doesn’t mean that people are always explicitly engaging in fraud (though some definitely are); it just means that everyone operating within the institutions of medical research has a significant interest in finding and reporting positive effects. If a study doesn’t find a positive effect, it tends to go unreported. Similarly, and because of the same incentive structures, there is a significant bias against finding and reporting on the harmful effects of interventions.
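
To get an intuitive feel for how these incentives drive up the P(E), here is a toy simulation (parameters invented, not drawn from the book): we run many small trials of a drug with zero true effect and “publish” only the flattering results. The published record ends up looking positive even though the drug does nothing.

```python
import random
import statistics

random.seed(42)  # reproducible toy example

def run_trial(n=50):
    """One simulated trial of a drug with zero true effect: returns the
    difference in mean outcome between the drug and placebo groups."""
    drug = [random.gauss(0, 1) for _ in range(n)]
    placebo = [random.gauss(0, 1) for _ in range(n)]
    return statistics.mean(drug) - statistics.mean(placebo)

results = [run_trial() for _ in range(200)]
published = [r for r in results if r > 0.2]  # only 'impressive' results reported

print(len(published), "of", len(results), "trials published")
print("mean published effect:", round(statistics.mean(published), 2))
# Despite a true effect of zero, the published record shows a positive effect.
```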

Stegenga gives detailed examples of these incentive problems in the book. Some people might push back against his argument by pointing out that the problems to which he appeals are well-documented (particularly since the reproducibility crisis became common knowledge in the past decade or so) and steps have been taken to improve the institutional structure through which medical evidence is produced. So, for example, there is a common call now for trials to be pre-registered with regulators and there is greater incentive to try to replicate findings and report on negative results. But Stegenga argues that these solutions are still problematic. For example, the registration of trials and trial data, by itself, doesn’t seem to stop the over-reporting of positive results nor the approval of drugs with negative side effects. One illustration of this is rosiglitazone, a drug for type-2 diabetes (Stegenga 2018, p 148). Due to a lawsuit, the drug manufacturer (GlaxoSmithKline) was required to register all data collected from forty-two trials of the drug. Only seven trials were published, and these, unsurprisingly, suggested that the drug had positive effects. The drug was approved by the FDA in 1999. Later, in 2007, a researcher called Steven Nissen accessed the data from all 42 trials, conducted a meta-analysis, and discovered that the drug increased the risk of heart attack by 43%. In more concrete terms, this meant that the drug was estimated to have caused somewhere in the region of 83,000 heart attacks since coming on the market. All of this information was available to both the drug manufacturer and, crucially, the regulator (the FDA) before Nissen conducted his study. Indeed, internal memos from the company suggested that they were aware of the heart attack risk years before. And yet they had no incentive to report it, and the FDA, either through incompetence or lack of resources, had no incentive to check up on them. That’s just one case. In other cases, the problem goes even deeper than this, and Stegenga gives some examples of how regulators are often complicit in maintaining the secrecy of trial data.

To reiterate, this doesn’t do justice to the nuance and detail that Stegenga provides in the book, but it does, I think, hint that there is a strong argument to be made in favour of premise (3).



5. Criticisms and Replies
What about objections to the argument? Stegenga looks at six in chapter 11 of the book (these are in addition to specific criticisms of the individual premises). I’ll review them quickly here.

The first objection is that there is no way to make a general philosophical case for medical nihilism. Whether any given medical treatment is effective depends on the empirical facts. You have to go out and test the intervention before you can reach any definitive conclusions.

Stegenga’s response to this is that he doesn’t deny the importance of the empirical facts, but he argues, as noted in the introduction to this article, that the hypothesis that any given medical intervention is effective is not purely empirical. It depends on metaphysical assumptions about the nature of disease and treatment, as well as epistemological/methodological assumptions about the nature of medical evidence. All of these have been critiqued as part of the argument for medical nihilism.

The second objection is that modern “medicine is awesome” and that the case for medical nihilism doesn’t properly acknowledge its awesomeness. The basis for this objection presumably lies in the fact that some treatments appear to be very effective and that health outcomes, for the majority of people, have improved over the past couple of centuries, during which period we have seen the rise of scientific medicine.

Stegenga’s response is that he doesn’t deny that some medical interventions are awesome. Some are, after all, magic bullets. Still, there are three problems with this “medicine is awesome” objection. First, while some interventions are awesome, they are few and far between. For any randomly chosen or novel intervention the odds are that it is not awesome. Second, Stegenga argues that people underestimate the role of non-medical interventions in improving general health and well-being. In particular, he suggests (citing some studies in support of this) that changes in hygiene and nutrition have played a big role in improved health and well-being. Finally, Stegenga argues that people underestimate the role that medicine plays in negative health outcomes. For example, according to one widely-cited estimate, there are over 400,000 preventable hospital-induced deaths in the US alone every year. This is not “awesome”.

The third objection is that regulators help to guarantee the effectiveness of treatments. They are gatekeepers that prevent harmful drugs from getting to the market. They put in place elaborate testing phases that drugs have to pass through before they are approved.

This objection holds little weight in light of the preceding discussion. There is ample evidence to suggest that regulatory approval does not guarantee the effectiveness of an intervention. Many drugs are withdrawn years after approval when evidence of harmfulness is uncovered. Many approved drugs aren’t particularly effective. Furthermore, regulators can be incompetent, under-resourced and occasionally complicit in hiding the truth about medical interventions.

The fourth objection is that peer review helps to guarantee the quality of medical evidence. This objection is, of course, laughable to anyone familiar with the system of peer review. There are many well-intentioned researchers peer-reviewing one another’s work, but they are all flawed human beings, subject to a number of biases and incompetencies. There is ample evidence to suggest that bad or poor quality evidence gets through the peer review process. Furthermore, even if they were perfect, peer reviewers can only judge the quality of the studies that are put before them. If those studies are a biased sample of the total evidence, peer reviewers cannot prevent a skewed picture of reality from emerging.

The fifth objection is that the case for medical nihilism is “anti-science”. That’s a bad thing because there is lots of anti-science activism in the medical sphere. Quacks and pressure groups push for complementary therapies and argue (often with great success) against effective mainstream interventions (like vaccines). You don’t want to give these groups fodder for their anti-science activism, but that’s exactly what the case for medical nihilism does.

But the case for medical nihilism is definitely not anti-science. It is about promoting good science over bad science. This is something that Stegenga repeatedly emphasises in the book. He looks at the best quality scientific evidence to make his case for the ineffectiveness of interventions. He doesn’t reject or deny the scientific method. He just argues that the best protocols are not always followed, that they are not perfect, and that when they are followed the resulting evidence does not make a strong case for effectiveness. In many ways, the book could be read as a plea for a more scientific form of medical research, not a less scientific form. Furthermore, unlike the purveyors of anti-science, Stegenga is not advocating some anti-science alternative to medical science — though he does suggest we should be less interventionist in our approach to illness, given the fact that many interventions are ineffective.

The sixth and final objection is that there are, or soon will be, some “game-changing” medical breakthroughs (e.g. stem cell treatment or genetic engineering). These breakthroughs will enable numerous, highly effective interventions. The medical nihilist argument doesn’t seem to acknowledge either the reality or possibility of such game-changers.

The response to this is simple. Sure, there could be some game-changers, but we should be sceptical about any claim to the effect that a particular treatment is a game-changer. There are significant incentives at play that encourage people to overhype new discoveries. Few of the alleged breakthroughs in the past couple of decades have been game-changers. We also know that most new interventions fail or have small effect sizes when scrutinised in depth. Consequently, a priori scepticism is warranted.


6. Conclusion
That brings us to the end of the argument. To briefly summarise, medical nihilism is the view that we should be sceptical about the effectiveness of medical interventions. There are three reasons for this, each corresponding to one of the key probabilities in Bayes’ Theorem. The first reason is that the prior probability of a treatment being effective is low. This is something we can infer from the long history of failed medical interventions, and the fact that there are relatively few medical magic bullets. The second reason is that the probability of the evidence for effectiveness, given the hypothesis that an intervention is effective, is low. We know this because the best available evidence concerning medical interventions suggests they have very small effect sizes, and there is often a lot of discordant evidence. Finally, the third reason is that the prior probability of observing evidence suggesting that a treatment is effective, irrespective of its actual effectiveness, is high. This is because medical evidence is highly malleable, and there are strong incentives at play that encourage people to present positive evidence and hide/ignore negative evidence.

* For Bayes aficionados: yes, I know that this is the short form of the equation, and I know I have reversed the order of two terms from the standard presentation.