Saturday, March 30, 2019

#56 - Turner on Rules for Robots


Jacob Turner

In this episode I talk to Jacob Turner. Jacob is a barrister and author. We chat about his new book, Robot Rules: Regulating Artificial Intelligence (Palgrave Macmillan, 2018), which discusses how to address legal responsibility, rights and ethics for AI.

You can download the episode here or listen below. You can also subscribe to the show on iTunes, Stitcher and a variety of other services (the RSS feed is here).



Show Notes

  • 0:00 - Introduction
  • 1:33 - Why did Jacob write Robot Rules?
  • 2:47 - Do we need special legal rules for AI?
  • 6:34 - The responsibility 'gap' problem
  • 11:50 - Private law vs criminal law: why it's important to remember the distinction
  • 14:08 - Is it easy to plug the responsibility gap in private law?
  • 23:07 - Do we need to think about the criminal law responsibility gap?
  • 26:14 - Is it absurd to hold AI criminally responsible?
  • 30:24 - The problem with holding proximate humans responsible
  • 36:40 - The positive side of responsibility: lessons from the Monkey selfie case
  • 41:50 - What is legal personhood and what would it mean to grant it to an AI?
  • 48:57 - Pragmatic reasons for granting an AI legal personhood
  • 51:48 - Is this a slippery slope?
  • 56:00 - Explainability and AI: Why is this important?
  • 1:02:38 - Is there a right to explanation under EU law?
  • 1:06:16 - Is explainability something that requires a technical solution not a legal solution?
  • 1:08:32 - The danger of fetishising explainability


Sunday, March 24, 2019

Are we in the midst of an ongoing moral catastrophe?


Albrecht Dürer - The Four Horsemen of the Apocalypse


Here’s an interesting thought experiment:
The human brain is split into two cortical hemispheres. These hemispheres are joined together by the corpus callosum, a group of nerve fibres that allows the two hemispheres to communicate and coordinate with one another. The common assumption is that the corpus callosum unites the two hemispheres into a single conscious being, i.e. you. But there is some evidence to suggest that this might not be the case. In split-brain patients (i.e. patients whose corpus callosum has been severed) it is possible to perform experiments that result in the two halves of the body doing radically different things. In these experiments it is found that the left side of the brain weaves a narrative that explains away the discrepancies in behaviour between the two sides of the body. Some people interpret this as evidence that the left half of the cortex is primarily responsible for shaping our conscious identity. But what if that is not what is going on? What if there are, in fact, two distinct conscious identities trapped inside most ‘normal’ brains, but the left-side consciousness is the dominant one and it shuts down or prevents the right side from expressing itself? It’s only in rare patients and constrained experimental contexts that the right side gets to express itself. Suppose in the future that a ground-breaking series of experiments convincingly proves that this is indeed the case.



What ethical consequences would this have? Pretty dramatic ones. It is a common moral platitude that we should want to prevent the suffering and domination of conscious beings. But if what I just said is true, it would seem that each of us carries around a dominated and suffering conscious entity inside our own heads. This would represent a major ongoing moral tragedy and something ought to be done about it.

This fanciful thought experiment comes from Evan Williams’s paper ‘The Possibility of an Ongoing Moral Catastrophe’. It is tucked away in a footnote, offered up to the reader as an intellectual curio over which they can puzzle. It is, however, indicative of a much more pervasive problem that Williams thinks we need to take seriously.

The problem is this: There is a very good chance that those of us who are alive today are unknowingly complicit in an unspecified moral catastrophe. In other words, there is a very good chance that you and I are currently responsible for a huge amount of moral wrongdoing — wrongdoing that future generations will criticise us for, and that will be a great source of shame for our grandchildren and great-grandchildren.

How can we be so confident of this? Williams has two arguments to offer and two solutions. I want to cover each of them in what follows. In the process, I’ll offer my own critical reflections on Williams’s thesis. In the end, I’ll suggest that he has identified an important moral problem, but that he doesn’t fully embrace its radical consequences.


1. Two Arguments for an Ongoing Moral Catastrophe
Williams’s first argument for an ongoing moral catastrophe is inductive in nature. It looks to lessons from history to get a sense of what might happen in the future. If we look at past societies, one thing immediately strikes us: many of them committed significant acts of moral wrongdoing that the majority of us now view with disdain and regret. The two obvious examples of this are slavery and the Holocaust. There was a time when many people thought it was perfectly okay for one person to own another; and there was a time when millions of Europeans (most of them concentrated in Germany) were knowingly complicit in the mass extermination of Jews. It is not simply that people went along with these practices despite their misgivings; it’s that many people either didn’t care or actually thought the practices were morally justified.

This is just to fixate on two historical examples. Many more could be given. Most historical societies took a remarkably cavalier attitude towards what we now take to be profoundly immoral practices such as sexism, racism, torture, and animal cruelty. Given this historical pattern, it seems likely that there is something that we currently tolerate or encourage (factory farming, anyone?) that future generations will view as a moral catastrophe. To rephrase this in a more logical form:



  • (1) We have reason to think that the present and the future will be like the past (general inductive presumption).
  • (2) The members of most past societies were unknowingly complicit in ongoing moral catastrophes.
  • (3) Therefore, it is quite likely that members of present societies are unknowingly complicit in ongoing moral catastrophes.



Premise (2) of this argument would seem to rest on a firm foundation. We have the writings and testimony of past generations to prove it. Extreme moral relativists or nihilists might call it into question. They might say it is impossible to sit in moral judgment on the past. Moral conservatives might also call it into question because they favour the moral views of the past. But neither of those views seems particularly plausible. Are we really going to deny the moral catastrophes of slavery or mass genocide? It would take a lot of special pleading and ignorance to make that sound credible.

That leaves premise (1). This is probably the more vulnerable premise in the argument. As an inductive assumption it is open to all the usual criticisms of induction. Perhaps the present is not like the past? Perhaps we have now arrived at a complete and final understanding of morality? Maybe this makes it highly unlikely that we could be unknowingly complicit in an ongoing catastrophe? Maybe. But it sounds like the height of moral and epistemic arrogance to assume that this is the case. There is no good reason to think that we have attained perfect knowledge of what morality demands. I suspect many of us encounter tensions or uncertainties in our moral views on a daily or, at least, ongoing basis. Should we give more money to charity? Should we be eating meat? Should we favour our family and friends over distant strangers? Each of these uncertainties casts doubt on the claim that we have perfect moral knowledge, and makes it more likely that future generations will know something about morality that we do not.

If you don’t like this argument, Williams has another. He calls it the disjunctive argument. It is based on the concept of disjunctive probability. You are probably familiar with conjunctive probability. This is the probability of two or more events both occurring. For example, what is the probability of rolling two sixes on a pair of dice? We know the independent probability of each of these events is 1/6. We can calculate the conjunctive probability by multiplying together the probability of each separate event (i.e. 1/6 x 1/6 = 1/36). Disjunctive probabilities are just the opposite of that. They are the probability of either one event or another (or another or another) occurring. For example, what is the probability of rolling either a 2 or a 3 on a single roll of a die? We can calculate the disjunctive probability by adding together the probability of each separate event (1/6 + 1/6 = 1/3). It should be noted, though, that calculating disjunctive probabilities can be a bit more complicated than simply adding together the probabilities of the separate events. If there is some overlap between the events (e.g. if you are calculating the probability of drawing a spade or an ace from a deck of cards) you have to subtract the probability of the overlap (in that case, drawing the ace of spades). But we can ignore this complication here.
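To make the card example concrete, the overlap correction (the inclusion-exclusion rule) works out like this; this is just a worked illustration of the arithmetic described above, not a formula taken from Williams’s paper:

```latex
\begin{align*}
P(\text{spade or ace}) &= P(\text{spade}) + P(\text{ace}) - P(\text{spade and ace}) \\
                       &= \tfrac{13}{52} + \tfrac{4}{52} - \tfrac{1}{52}
                        = \tfrac{16}{52} \approx 0.31
\end{align*}
```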

Disjunctive probabilities are usually higher than you think. This is because while the probability of any particular improbable event occurring might be very low, the probability of at least one of those events occurring will necessarily be higher. This makes some intuitive sense. Consider your own death. The probability of you dying from any one specific cause (e.g. heart attack, bowel cancer, infectious disease, car accident or whatever) might be quite low, but the probability of you dying from at least one of those causes is pretty high.

Williams takes advantage of this property of disjunctive probabilities to make the case for ongoing moral catastrophe. He does so with two observations.

First, he points out that there are lots of ways in which we might be wrong about our current moral beliefs and practices. He lists some of them in his article: we might be wrong about who or what has moral standing (maybe animals or insects or foetuses have more moral standing than we currently think); we might be wrong about what is or is not conducive to human flourishing or health; we might be wrong about the extent of our duties to future generations; and so on. What’s more, for each of the possible sources of error there are multiple ways in which we could be wrong. For example, when it comes to errors of moral standing we could err in being over or under-inclusive. The opening thought experiment about the split-brain cases is just one fanciful illustration of this. Either one of these errors could result in an ongoing moral catastrophe.

Second, he uses the method for calculating disjunctive probabilities to show that even though the probability of us making any particular one of those errors might be low (for argument’s sake let’s say it is around 5%), the probability of us making at least one of those errors could be quite high. Let’s say there are fifteen possible errors we could be making, each with a probability of around 5%. In that case, the chances of us making at least one of those errors is going to be about 54%, which is greater than 1 in 2.
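To see where that figure comes from, here is a minimal sketch of the calculation (my own illustration, which assumes the fifteen possible errors are independent; Williams does not present the calculation in code):

```python
# Probability of at least one moral error, assuming each of n candidate
# errors occurs independently with probability p_each. A back-of-the-envelope
# check of Williams's figure, not code from his paper.

def prob_at_least_one_error(p_each: float, n_errors: int) -> float:
    """Return the probability that at least one of n independent errors occurs."""
    return 1 - (1 - p_each) ** n_errors

print(prob_at_least_one_error(0.05, 15))  # ~0.54: better than a coin flip
print(prob_at_least_one_error(0.01, 15))  # ~0.14: lower odds per error, still not negligible
```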

That’s a sobering realisation. Of course, you might try to resist this by claiming that the probability of us making such a dramatic moral error is much lower than 5%. Perhaps it is almost infinitesimal. But how confident are you really, given that we know that errors can be made? Also, even if the individual probabilities are quite low, with enough possible errors, the chance of at least one ongoing moral catastrophe is still going to be pretty high.


2. Two Responses to the Problem
Having identified the risk of ongoing moral catastrophe, Williams naturally turns to the question of what we ought to do about it.

The common response to an ongoing or potential future risk is either to hedge your bets against it or to take a precautionary approach to it. For example, if you are worried about the risk of crashing your new motorcycle and injuring yourself, you’ll either (a) take out insurance to protect against the expenses associated with such a crash or (b) simply avoid buying and using a motorcycle in the first place.

Williams argues that neither solution is available in the case of ongoing moral catastrophe. There are too many potential errors we could be making to hedge against them all. In hedging against one possible error you might commit yourself to another. And a precautionary approach won’t work either because failing to act could be just as big a moral catastrophe as acting, depending on the scenario. For example, failing to send more money to charity might be as big an error as sending money to the wrong kind of charity. You cannot just sit back, do nothing, and hope to avoid moral catastrophe.

So what can be done? Williams has two suggestions. The first is that we need to make it easier for us to recognise moral catastrophes. In other words, we need to make intellectual progress and advance the cause of moral knowledge: both knowledge of the consequential impact of our actions and of the plausibility/consistency of our moral norms. The idea here is that our complicity in an ongoing moral catastrophe is always (in part) due to a lack of moral knowledge. Future generations will learn where we went wrong. If we could somehow accelerate that learning process we could avert or at least lessen any ongoing moral catastrophe. So that’s what we need to do. We need to create a society in which the requisite moral knowledge is actively pursued and promoted, and in which there is a good ‘marketplace’ of moral ideas. Williams doesn’t offer specific proposals as to how this might be done. He just thinks this is the general strategy we should be following.

The second suggestion has to do with the flexibility of our social order. Williams argues that one reason why societies fail to minimise moral catastrophes is because they are conservative and set in their ways. Even if people recognise the ongoing moral catastrophe they struggle against institutional and normative inertia. They cannot bring about the moral reform that is necessary. Think about the ongoing moral catastrophe of climate change. Many people realise the problem but very few people know how to successfully change social behaviour to avert the worst of it. So Williams argues we need to create a social order that is more flexible and adaptive — one that can implement moral reform quickly, when the need is recognised. Again, there are no specific proposals as to how this might be done, though Williams does fire off some shots against hard-wiring values into a written and difficult-to-amend constitutional order, using the US as a particular example of this folly.


3. Is the problem more serious than Williams realises?
I follow Williams’s reasoning up until he outlines his potential solutions to the problem. But the two solutions strike me as being far too vague to be worthwhile. I appreciate that Williams couldn’t possibly give detailed policy recommendations in a short article; and I appreciate that his main goal is not to give those recommendations but to raise people’s consciousness of the problem of ongoing moral catastrophe and to make very broad suggestions about the kind of thing that could be done in response. Still, I think in doing this he either underplays how radical the problem actually is, or overplays it and thus is unduly dismissive of one potential solution to it. Let me see if I can explain my thinking.

On the first point, let me say something about how I interpret Williams’s argument. I take it that the problem of ongoing moral catastrophe is a problem that arises from massive and multi-directional moral uncertainty. We are not sure if our current moral beliefs are correct; there are a lot of them; and they could be wrong in multiple different ways. They could be under-inclusive or over-inclusive; they could demand too much of us or too little; and so on. This massive and multi-directional moral uncertainty supports Williams’s claim that we cannot avoid moral catastrophe by doing nothing, since doing nothing could also be the cause of a catastrophe.

But if this interpretation is correct then I think Williams doesn’t appreciate the radical implications of this massive and multi-directional moral uncertainty. If moral uncertainty is that pervasive, then it means that everything we do is fraught with moral risk. That includes following Williams’s recommendations. For example, trying to increase moral knowledge could very well lead to a moral catastrophe. After all, it’s not like there is an obvious and reliable way of doing this. A priori, we might think a relatively frictionless and transparent marketplace of moral ideas would be a good idea, but there is no guarantee that this will lead people to moral wisdom. If people are systematically biased towards making certain kinds of moral error (and they arguably are, although making this assessment itself depends on a kind of moral certainty that we have no right to claim), then following this strategy could very well hasten a moral catastrophe. At the same time, we know that censorship and friction often block necessary moral reform. So we have to calibrate the marketplace of moral ideas in just the right way to avoid catastrophe. This is extremely difficult (if not impossible) to do if moral uncertainty is as pervasive as Williams seems to suggest.

The same is true if we try to increase social flexibility. If we make it too easy for society to adapt and change to some new perceived moral wisdom, then we could hasten a moral catastrophe. This isn’t a hypothetical concern. History is replete with stories of moral revolutionaries who seized the reins of power only to lead their societies into moral desolation. Indeed, hard-wiring values into a constitution, and thus adding some inflexibility to the social moral order, was arguably adopted in order to provide an important bulwark against this kind of moral error.

The point is that if a potential moral catastrophe is lurking everywhere we look, then it is very difficult to say what we should be doing to avoid it. This pervasive and all-encompassing moral uncertainty is paralysing.

But maybe I am being ungenerous to Williams’s argument. Maybe he doesn’t embrace this radical form of moral uncertainty. Maybe he thinks there are some rock-solid bits of moral knowledge that are unlikely to change, and that we can use those to guide us to what we ought to do to avert an ongoing catastrophe. But if that’s the case, then I suspect any solution to the problem of moral catastrophe will end up being much more conservative than Williams seems to envisage. We will cling to those moral certainties like life rafts in a sea of moral uncertainty, and use them to evaluate and constrain any reform to our system.

One example of how this might work in practice would be to apply the wisdom of negative utilitarianism (something Williams is sceptical about). According to negative utilitarianism, it is better to try to minimise suffering than it is to try to maximise pleasure or joy. I find this to be a highly plausible principle. I also find it to be much easier to implement than the converse principle of positive utilitarianism. This is because I think we can be more confident about what causes suffering than we can be about what induces joy. But if negative utilitarianism represents one of our moral life rafts, it also represents one of the best potential responses to the problem of ongoing moral catastrophe. It’s not clear to me that abiding by it would warrant the kinds of reforms that Williams seems to favour.

But, of course, that’s just my two cents on the idea. I think the problem Williams identifies is an important one and also a very difficult one. If he is right that we could be complicit in an ongoing moral catastrophe, then I am not sure that anyone has a good answer as to what we should be doing about it.




Wednesday, March 20, 2019

The Optimist's Guide to Schopenhauer's Pessimism (Audio Essay)




Schopenhauer was a profoundly pessimistic man. He argued that all life was suffering. Was he right or is there room for optimism? This audio essay tries to answer that question. It is based on an earlier written essay. You can listen below or download here.



These audio essays are released as part of the Philosophical Disquisitions podcast. You can subscribe to the podcast on Apple Podcasts, Player FM, Podbay, Podbean, Castbox, Overcast and more. Full details available here.


Monday, March 18, 2019

Is there such a thing as moral progress?


Picture taken from William Murphy on Flickr


We often speak as if we believe in moral progress. We talk about recent moral changes, such as the legalisation of gay marriage, as ‘progressive’ moral changes. We express dismay at the ‘regressive’ moral views of racists and bigots. Some people (I’m looking at you, Steven Pinker) have written long books defending the idea that, although there have been setbacks, there has been a general upward trend in our moral attitudes over the course of human history. Martin Luther King once said that the arc of the moral universe is long but bends towards justice.

But does moral progress really exist? And how would we know if it did? Philosophers have puzzled over this question for some time. The problem is this. There is no doubt that there has been moral change over time, and there is no doubt that we often think of our moral views as being more advanced than those of our ancestors, but it is hard to see exactly what justifies this belief. It seems like you would need some absolute moral standard or goal against which you can measure moral change to justify that belief. Do we have such a thing?

In this post, I want to offer some of my own, preliminary and underdeveloped, thoughts on the idea of moral progress. I do so by first clarifying the concept of moral progress, and then considering whether and when we can say that it exists. I will suggest that moral progress is real, and we are at least sometimes justified in saying that it has taken place. Nevertheless, there are some serious puzzles and conceptual difficulties with identifying some forms of moral progress.


1. Morality and Change: Clarifying the Idea of Progress
Before we talk about the idea of moral progress, it will help if we clarify what morality is and how it changes. This makes sense since moral progress is just a specific kind of moral change. I’ll talk about this in relatively abstract terms, but I think that is appropriate because moral progress is a relatively abstract phenomenon.

Morality is concerned with good and bad and right and wrong. A complete moral theory consists of an axiology — which identifies what is good and what is bad — and a deontology — which identifies what is right and what is wrong (and some other qualities of moral action too). Moral concepts and principles are essential to building a moral theory. The concepts will identify core values (like freedom, pleasure, equality, welfare etc.). The principles will tell us how we should act in order to protect and promote those core values (“you ought to give 10% of your income to charity” etc.). Moral theories will also usually identify groups of moral subjects and moral agents. Moral subjects are the beings or entities to whom moral value can accrue (and who may themselves possess intrinsic value) and so have to be factored into our moral calculus. Moral agents are the beings or entities to whom principles of right and wrong apply. They are the ones that have to uphold the moral standards.

When morality changes, this means that there is some change in one or more of the constituent elements of our moral theories. We recognise a new value or discard an old one; we expand the scope of an old moral principle, or drop it completely; we identify new moral subjects or exclude those we previously recognised as having moral status. And so on. All manner of changes have taken place over the course of human history. The challenge is to figure out whether any of those changes has been progressive or not.

There have been a few interesting articles written about this over the years. Michelle Moody Adams’s article “The Idea of Moral Progress” is widely cited. In it, Adams suggests that there is such a thing as moral progress, but that it is always local in form. Progress can only be assessed relative to a particular moral standard or concept (or set of moral standards and concepts). So, for example, we can talk about the world becoming more free or more equal, relative to some particular conception of freedom or equality, but we can’t talk about the world becoming better or worse simpliciter. Adams claims that this localised form of progress is a process of ‘semantic deepening’, in which we develop an enriched understanding of what a moral concept means and to whom it might apply over time.

An example might help. Consider the changes in our understanding of morally salient harm over the past couple of hundred years. Initially, we recognised a very narrow subset of harms as being morally salient, usually only physical harms experienced by a conscious being. Over time, we realised that harm was a broader phenomenon and started to accept psychological harms as being morally salient. This led philosophers to formulate general and abstract theories of harm, claiming that harm was a ‘serious setback to life interests’, and allowing for some open-endedness in what might count as a life interest. Some push for even further broadening, arguing that environmental or property-related damage should be seen as a kind of harm. Some resist this. Nevertheless, following Adams, there is a clear sense in which the broadening of the concept represents a localised form of moral progress, i.e. progress in how we understand and apply the concept of harm. And what is true for harm is true for other concepts too, such as freedom, equality, and well-being.

Adams’s localised understanding of progress has been endorsed by others. Nigel Pleasants, for instance, in his article on ‘The Structure of Moral Revolutions’, rejects the claim that there is a single universal understanding of moral progress, but accepts that there can be progress relative to particular moral traditions. I think this is correct and that Adams’s localised understanding of moral progress should be relatively uncontroversial. I like to think about it in visual terms. I like to think about moral concepts and principles having a scope of application (i.e. there are groups of people, actions, events, and states of affairs to whom they apply); and I like to think that progress takes place when that scope of application expands. For example, we might recognise a right like the right to vote. Initially, this right is granted to a narrow group of people. Over time, the number of people included within the scope of the right expands. This represents progress. I have illustrated this approach to moral progress below.



The problem is that this definition of moral progress seems pretty thin. Sure, there is progress relative to a particular concept, but does this allow us to say that the world is getting better or worse in general? Do we have to be relativists and sceptics about moral progress if we accept this localised definition?


2. The Challenge of Moral Progress
Patrick Stokes discusses this problem rather well in his article “Towards an Epistemology of Moral Progress”. I mentioned earlier that moral change is an indisputable historical fact. But not all moral change takes the form of progressive scope expansions. Indeed, sometimes moral change takes the form of dropping or rejecting certain bloated moral concepts. Take sexual purity as an example. This was once highly valued, morally speaking. Society condemned or outlawed sexually impure activities. Though this ‘purity’ mentality lingers to some extent, it is rejected by most people of my generation living in advanced economies. We favour sexual liberty over purity. In fact, we think that this preference for liberty over purity represents progress.

But, as Stokes points out, the fact that principles and concepts change in this way — that some get dropped or added to the mix over time — should cause some pessimism when it comes to our belief in moral progress. To be more precise, he argues that moral change of this sort presents an epistemological challenge to the belief in moral progress. How can we know that the moral concepts we are currently using to measure progress are not themselves going to be cast away in the next moral revolution? And if they might be, doesn’t this have certain radical consequences for morality more generally? Doesn’t it mean that we should feel no strong sense of moral obligation to our currently favoured moral concepts and principles?

Stokes has his own specific solution to this puzzle, which I will get back to later, but in essence he suggests that relativism and scepticism can be avoided if we accept that there are some basic, unchangeable moral concepts and principles. Though there are those who reject this idea, it does not seem like a huge stretch to me. Protecting and promoting basic values such as well-being, freedom and equality probably won’t go out of fashion any time soon, and while specific conceptions of these values might deepen, expand and contract over time, the commitment to them probably won’t. If so, then it may be possible to argue for a consistent, historically-stable theory of moral progress.

Michelle Moody Adams seems to endorse this view in her article. She suggests that the ideal of equality, for example, always contained within it the notion that women and slaves deserved to be treated as moral equals. This insight was available to Aristotle and others living in Ancient Greece. If he and those others had just thought a bit more deeply about what their moral concepts demanded, we might have arrived at a more equal society much sooner. There are, no doubt, interesting psychological, cultural and economic explanations for why this did not happen, but it was a latent possibility nonetheless, hidden right there in the basic moral concepts.

I agree with this to some extent. I think there are, indeed, basic moral values that are relatively fixed and stable (though I think this stability is dependent on features of human biology and sociality that may ultimately be malleable). But I don’t think this stability, in and of itself, gets us past the problem identified by Stokes. While it may be possible to measure progress in terms of expansions in how we understand stable moral concepts such as freedom, well-being and equality, the really hard cases arise when those expansions conflict.

Go back to the earlier example of sexual purity versus sexual liberty. The expansion in our understanding of sexual liberty (which resulted in more sexual acts being deemed permissible) seems to have come at the expense of sexual purity. In other words, we couldn’t expand sexual liberty without at the same time contracting (and eventually abandoning) sexual purity. The same is true in other cases. Consider the conflicts between freedom and equality, or welfare and equality. Economists like to remind us of these conflicts all the time. They suggest that equalising the distribution of economic gains sometimes comes at the expense of growing the overall size of those gains. There are cases where we can expand one but not the other. In these cases, the obvious question arises: in which direction does moral progress lie? Can we say that favouring expanded equality over expanded welfare represents progress?

The most plausible answer to that question is to establish some hierarchy of basic values. This hierarchy would allow us to clearly identify one form of expansion as being more progressive than the other (because it serves a higher good). But this is not always going to be an acceptable strategy. It is often hard to pick and choose between basic values like freedom, equality and well-being. Some people would argue that they are all equally important, or that they are interdependent in sometimes counterintuitive ways. And it is not like the conflicts between these values are marginal cases either. It is often the preferred resolution to these conflicts that gets weaponised in debates about moral progress. It may be that there is no overarching definition of progress in these cases; there is just arbitrary preference.


3. The Expanding Moral Circle: The Uncontroversial Case?
To sum up, I tend to agree with Adams and Pleasants that moral progress is possible, but that it can only be assessed relative to certain moral concepts and principles. This does not, however, mean we have to be radical moral sceptics or relativists about progress. There may be some historically stable moral concepts which allow us to talk meaningfully about consistent forms of moral progress. There is no guarantee that history will bend in the direction of moral progress — there will often be cases of moral regression — but it does mean we can talk about progress without shame. That said, there will be tough cases where basic moral values conflict, and where we cannot progress along one dimension without contracting along another. In these cases, it may not be meaningful to talk about moral progress at all.

Let me conclude on a more optimistic note. There does seem to be one form of moral progress that philosophers have been willing to endorse: the expanding circle of moral concern. Accepting that basic human rights apply to all human beings, irrespective of gender, colour and creed, and that animals have at least some degree of moral considerability, even if it is not equivalent to that of human beings, is generally taken to be a mark of progress (at least among philosophers; clearly many people are fearful of the expanding circle of moral concern). This is why the retrenchment towards cultural chauvinism, racism and sexism is widely viewed as regressive, and why many people regret historical moments when we had a narrower circle of moral concern.

In his discussion of moral progress, Patrick Stokes suggests that there may be a good reason for the widespread acceptance of this as a form of moral progress. Using the work of the Danish philosopher K.E. Løgstrup as his guide, he argues that the core of morality is our response to the ‘Other’. We have to encounter Others in our daily lives (other people, other beings) and we have to decide whether to respond to them ethically or selfishly. Ethics demands that we project ourselves out of our own predicaments and consider the potential needs of these Others. Do they matter? Do they count? Stokes has a complicated story to tell about this core ethical demand, but in the end he argues that all moral progress is assessed relative to it. Does a change in moral attitudes respect the core ethical demand or not? If it does, then it may count as progressive; if it does not, then it is more likely to be regressive.

So, on this theory, being other-regarding is the core of morality and is the metre stick against which all moral progress is measured. Consequently, it kind of makes sense that expanding the circle of moral concern is generally viewed as progressive. After all, what could be more respectful of the core ethical demand than to recognise Others as moral beings with moral status? And what could be more progressive than continually expanding that circle of moral concern outward?




Wednesday, March 13, 2019

#55 - Baum on the Long-Term Future of Human Civilisation


Seth Baum

In this episode I talk to Seth Baum. Seth is an interdisciplinary researcher working across a wide range of fields in natural and social science, engineering, philosophy, and policy. His primary research focus is global catastrophic risk. He also works in astrobiology. He is the Co-Founder (with Tony Barrett) and Executive Director of the Global Catastrophic Risk Institute. He is also a Research Affiliate of the University of Cambridge Centre for the Study of Existential Risk. We talk about the importance of studying the long-term future of human civilisation, and map out four possible trajectories for the long-term future.

You can download the episode here or listen below. You can also subscribe on a variety of different platforms, including iTunes, Stitcher, Overcast, Podbay, Player FM and more. The RSS feed is available here.



Show Notes

  • 0:00 - Introduction
  • 1:39 - Why did Seth write about the long-term future of human civilisation?
  • 5:15 - Why should we care about the long-term future? What is the long-term future?
  • 13:12 - How can we scientifically and ethically study the long-term future?
  • 16:04 - Is it all too speculative?
  • 20:48 - Four possible futures, briefly sketched: (i) status quo; (ii) catastrophe; (iii) technological transformation; and (iv) astronomical
  • 23:08 - The Status Quo Trajectory - Keeping things as they are
  • 28:45 - Should we want to maintain the status quo?
  • 33:50 - The Catastrophe Trajectory - Awaiting the likely collapse of civilisation
  • 38:58 - How could we restore civilisation post-collapse? Should we be working on this now?
  • 44:00 - Are we under-investing in research into post-collapse restoration?
  • 49:00 - The Technological Transformation Trajectory - Radical change through technology
  • 52:35 - How desirable is radical technological change?
  • 56:00 - The Astronomical Trajectory - Colonising the solar system and beyond
  • 58:40 - Is the colonisation of space the best hope for humankind?
  • 1:07:22 - How should the study of the long-term future proceed from here?
 


Thursday, March 7, 2019

The Moral Problem of Accelerating Change (Audio Essay)




(Subscribe here)

This is an experiment. For a number of years, people have been asking me to provide audio versions of the essays that I post on the blog. I've been reluctant to do this up until now, but I have recently become a fan of the audio format and I appreciate its conveniences. Also, I watched an interview with Michael Lewis (the best-selling non-fiction author in the world) just this week where he suggested that audio essays might be the future of the essay format. So, in an effort to jump ahead of the curve (or at least jump onto the curve before it pulls away from me), I'm going to post a few audio essays over the coming months.

They will all be based on stuff I've previously published on the blog, with a few minor edits and updates. I'll send them out on the regular podcast feed (which you can subscribe to in various formats here). I'm learning as I go. The quality and style will probably evolve over time, and I'm quite keen on getting feedback from listeners too. Do you like this kind of thing or would you prefer I didn't do it?

This first audio essay is based on something I previously wrote on the moral problem of accelerating change. You can find the original essay here. You can listen below or download at this link.






Tuesday, March 5, 2019

LOVE: ROBOTS Video (Medicine Unboxed 2018)

Medicine Unboxed 2018 LOVE - ROBOTS - John Danaher from Medicine Unboxed on Vimeo.

Here's a video of an interview I did with Dr Sam Gulgani at the Medicine Unboxed Festival in November 2018. We talk about the ethics of sex technology, specifically (though not exclusively) sex robots. The Medicine Unboxed Festival takes place every year in Cheltenham. It's kind of a literary/science/arts/philosophy festival, with a specific focus on medicine. It was a great privilege to take part. Dr Gulgani and I cover a lot of territory in our conversation. As I explain at the outset, I think my role was to lower the tone of an otherwise highbrow festival.



Monday, March 4, 2019

The Ambitious Academic: A Moral Evaluation



"Ambition makes you look pretty ugly”
(Radiohead, Paranoid Android)

In Act 1, Scene VII of Macbeth, Shakespeare acknowledges the dark side of ambition. Having earlier received a prophecy from a trio of witches promising that he would ‘be king hereafter’, Macbeth, with some prompting from his wife, has resolved to kill the current king (Duncan) and take the throne for himself. But then he gets cold feet. In a poignant soliloquy he notes that he has no real reason to kill Duncan. Duncan has been a wise and generally good king. The only thing spurring Macbeth to do the deed is his own insatiable ambition:

I have no spur
To prick the sides of my intent but only
Vaulting ambition, which o'erleaps itself 
And falls on the other.
(Macbeth, Act 1, Scene VII, lines 27-29)

Despite this, Macbeth ultimately succumbs to his ambition, kills Duncan, and rules Scotland with increasing despotism and cruelty. His downfall is a warning to us all. It suggests that ambition is often the root of moral collapse.

I have a confession to make. I am deeply suspicious of ambition. When I think of ambitious people, my mind is instantly drawn to Shakespearean examples like Macbeth and Richard III: to people who let their own drive for success cloud their moral judgment. But I appreciate that there is an irony to this. I am often accused (though ‘accusation’ might be too strong) of being ambitious. People perceive my frequent writing and publication, and other scholarly activities, as evidence of some deep-seated ambition. I often tell these people that I don’t think of myself as especially ambitious. In support of this, I point out that I have frequently turned down opportunities for raising my profile, including higher status jobs, and more money. Surely that’s the opposite of ambition?

Whatever about my own case, I find that ambition is viewed with ambivalence among my academic colleagues. When they speak of ambition they speak with forked tongues. They comment about the ambition of their peers with a mixture of suspicion and envy. They begrudgingly admire the activity and industriousness of the ambitious academic, but then suspect their motives. Perhaps the ambitious academic doesn’t really care about their research? Perhaps their research isn’t that good but this is masked beneath a veil of hyper-productivity? Maybe they are in it for the (admittedly limited) fame and glory? And yet, despite the ambivalence about ambition, they all seem to agree that idleness would be worse. The idle academic is seen as a pariah, living off the backs of others and taking up space that could be occupied by any one of the large number of ambitious, unemployed and freshly-minted PhDs.

All of which sets me thinking: am I right to be suspicious of ambition? Does ambition make us all look pretty ugly? Or is there some virtue in it? I’ll try to answer these questions in what follows.


1. What is ambition?
It would help if we had a clearer definition of what ambition is. As I see it, there are two ways to define ambition. The first is relatively neutral and sees ambition as a combination of desire and action; the second is more value-laden and sees ambition as a combination of specific desires and character traits. I’ll use the common philosophical terminology and refer to these two different senses of ambition as being ‘thin’ and ‘thick’. Here’s a more precise characterisation of both:

Thin Ambition = A strong desire to succeed in some particular endeavour(s) or enterprise(s), that is backed up by some committed action.

Thick Ambition = A strong desire for certain conventionally recognised forms of personal success (e.g. money, fame, power), that is backed up by a certain style of committed action (particularly ruthless and uncompromising action).

A couple of words about these definitions. Thin ambition has two elements to it: the desire to succeed, and the translation of that desire into some committed action plan. The second element is included in order to distinguish ambition from wishful thinking (Pettigrove 2006). The first element is, as noted, content neutral. It is a desire for success of some kind without any specification of what the object of that desire must be. In other words, following this definition, it would be possible to be ambitious about anything. I might, for example, be a really ambitious stamp collector. I might want to amass the world’s largest and most impressive collection of stamps. This could be the sole focus of my every waking hour. I would still deserve to be called ‘ambitious’, even though the object of my desire (stamp collecting) is not something we usually talk about in terms of ambition. Thin ambition is a pure, pared-down form of ambition.

Given this, you may think that ‘thin’ ambition constitutes the essence of ambition and that we don’t need the thicker, value-laden form. But I disagree. I think we need the thicker form because when people generally talk about ambition — ‘X is really ambitious’ — they seem to have the thicker form in mind. In that context, the word ‘ambition’ carries lots of connotations, many of them quite negative. This negativity stems from the fact that people associate the desire to succeed with particular kinds of objects (usually: the desire for money, fame and power) and with a particular kind of ruthlessness and single-mindedness in service of that desire. This is why my mind is instantly drawn to the examples of Macbeth and Richard III when I hear the word ‘ambition’. It’s also why I probably recoil from being called ‘ambitious’ and feel the need to argue that I am not.

This distinction between ‘thin’ and ‘thick’ ambition appears to give us an easy answer to the question of whether ambition is a good or bad thing. If you are talking about thick ambition, then it is more than likely a bad thing. If you are talking about thin ambition, then it is less clearcut. It all really depends on what the ambition is about, i.e. on the object of the desire to succeed. If my ambition is directed purely at securing political power for myself (like Macbeth), then it might be a bad thing. In that case, the power itself is the sole motivation for my actions and I would be willing to do anything in service of that goal, up to and including murdering or crushing my rivals. But if my ambition is directed at being the most effective altruist in the world, then it might not be a bad thing. In that case, my ambition might coincide with a set of outcomes that is likely to make the world a better place. My ambition could be quite virtuous in that scenario.

But this is too quick. The thin and thick distinction doesn’t give us all we need to conduct a proper moral evaluation of ambition.


2. The Six Evaluators of Ambition
In his article, “Ambitions”, Glen Pettigrove argues that we cannot simply evaluate ambition by focusing on the objects of the desire to succeed. Instead, we have to focus on six different elements of ambition, each one of which plays a part in how we evaluate the ambitious project or individual. Pettigrove’s main point is that there is a good and a bad form of each of these elements, and this in turn affects whether the ambition itself is a good or bad thing.

The first element is the aforementioned “object” of the desire to succeed. What is the ambitious person trying to do? At the risk of repeating myself, some objects are good and some are bad. The desire to succeed at being a despotic dictator or serial killer is bad; the desire to succeed at curing heart disease or cancer is good. Some desires could also be value neutral and hence unobjectionable. If we could direct ambition toward positive objects, then we might welcome ambition. If ambition tends to get sucked up by negative objects, then we might not. In the latter respect, Pettigrove suggests that there is a tendency for ambition to be directed toward certain “bottomless” or “unending desires”. In other words, ambitious people have a tendency to want things that they can never get enough of, e.g. fame or money. This might have negative repercussions for the individual (and for society) if it means that they never feel satisfied and don’t know when to quit. That said, bottomless desires are not always a bad thing. The desire to do more and more good deeds, or acquire more and more knowledge, for example, doesn’t strike me as a bad thing and might provide the basis for a good, yet insatiable, form of ambition.

The second element is the individual’s knowledge of the object of the ambition. Do they know whether the object of their desire is good or bad? All else being equal, it is better if the person knows the moral quality of what they are doing (if it is good), and doesn’t know it (if it is bad). If the ambitious despot doesn’t know that what they want is bad for others, then this might provide some grounds for excuse (though, of course, this depends on other factors). If the ambitious cancer doctor has no idea whether what she is doing is good or bad, then it might lower our estimation of what she is doing. Of course, most of us act under various conditions of uncertainty or probability, which complicates the evaluation. I think this is a real problem for academics. At least it is for me. For the vast majority of things that I do (teaching, research, writing etc.), I either have no idea whether it is good or bad, or I am very unsure of this. I’m often throwing darts into the dark.

The third element is the individual’s motivation for doing the ambitious thing. Suppose that the object of the ambition is good (e.g. as in the case of the ambitious cancer doctor). What actually motivates the person to pursue that object? Most of us act for multiple reasons: because we value the goal/outcome of our actions, because we are bored, because we want money, because we are afraid to fail, because our friends and family told us to, because we want to be better than others, and so on. Pettigrove argues that it is generally better when (a) the motivation is intrinsic to the object, i.e. the object is pursued for its own sake and (b) the motivation is authentic to the individual, i.e. not something imposed upon them from the outside. The problem is that many ambitious people act for other reasons. Gore Vidal famously said that “it is not enough to succeed; others must fail”, and Morrissey echoed him by singing that “we hate it when our friends become successful”. I suspect both could serve as slogans for ambitious people. Oftentimes ambitious projects are pursued out of the fear of failure and the desire to be better than others. This is hardly laudable. That said, Pettigrove argues that we shouldn’t be too quick to judge on this score. Since people have multiple motivations, they could act for several at the same time, some good and some bad. Furthermore, some motivations that might seem bad at first glance (e.g. competitiveness) could be judged good following a deeper investigation (e.g. because some forms of competition are harmless and a spur to innovation).

The fourth element is the actual outcome of the ambition. How does it change the world? Obviously enough, if the outcome is very bad, then this might affect our evaluation of the ambition. This is true even if the intended object of the ambition was good. A cancer doctor who pushes for a new breakthrough treatment may have the best of intentions, but what if the treatment has very bad effects in the world? That might change how we think about their ambition. Maybe they were misguided by their ambition? Maybe it clouded their judgment and prevented them from appreciating all the negative effects their treatment was having? This is not an uncommon story. However, it also goes without saying that many times we are not able to fully judge the goodness or badness of an outcome: it might be good from some perspectives and bad from others. Furthermore, some outcomes might be effectively neutral.

The fifth element is a great film by Luc Besson…just kidding…the fifth element has to do with the actions that might be required by the ambition. What does the individual have to do to achieve their ambitions? If the means are bad, then this might affect our evaluation, even if the outcome and object are good. This gets us back to the problem of dirty hands and ruthlessness that was outlined earlier on. One of the major indictments of Macbeth is that he has to use ‘dirty hands’ tactics to achieve his ambition. The big question is whether ambition always requires some degree of ruthlessness and ‘dirty hands’ tactics. I think there is a real danger of this happening. The ambitious cancer doctor, for example, may become consumed by the goal of curing cancer and start to think that the ends justify the means. They might cut corners on ethical protocols, ignore outlying data, and rail against institutional norms and regulations. Perhaps sometimes this is justified, but many times it will simply be a case of unhinged ambition causing them to lose sight of what is right.

The sixth and final element has to do with the role that ambition plays in the individual’s life. How does the ambitious project structure and give shape to the individual’s life? Pettigrove thinks that ambition often plays a positive role in people’s lives. It provides them with a focus and purpose. It gives them a sense of meaning. This is all to the good. Pettigrove suggests that this is still true even if the other aspects of ambition are all bad. In other words, he suggests that even if ambition is on net bad (based on the other five elements), it will always at least play a positive role in someone’s life by giving it some structure and purpose. That said, I think there is an obvious flipside to this: the case of someone with too many ambitions. They become fragmented across multiple projects, some of which might even be incompatible with each other. Also, being too committed to an ambitious project might be bad if it means you can’t adapt and keep up with changes in both your own life and the world around you. I’ve talked a bit about this before in my posts on hypocrisy and life plans.

The takeaway message from Pettigrove’s analysis of ambition is: it’s complicated. There is no easy way to evaluate ambition. You have to consider all six elements and then come up with some relative weighting for the different elements. In many cases, ambition will be neither wholly good nor wholly bad. It will be a mix of good and bad.


3. Implications
So where does that leave me? How should I feel about ambition? On balance, I think it means that I should relax my suspicion of ambition. Ambition definitely has a dark side: it can be directed at the wrong things and become an all-consuming passion that causes us to lose sight of what is right and wrong. But it also, potentially, has a good side. This is a point that Pettigrove repeatedly makes in his article. He suggests that ambition is responsible for a lot of the good things that happen in human history, as well as the bad. It’s very difficult to come up with an objective balance sheet that determines which side of ambition wins out. The most we can do is try to harness ambition in the right direction (or else give up, but that might be worse).

I reflect on this in particular in relation to academia. As I was writing this post, I started to realise that my suspicion of ambition, and my critical reflections on it, are, perhaps, something of a luxury. I have a relatively privileged position in academia. I have a stable, permanent job at a decent university. I have spent years ‘proving’ myself to others through industrious scholarship. I can now afford some time to reflect on the merits of what I am doing. Many of my colleagues and peers are not so lucky. They have no permanent jobs. They stumble from temporary gig to temporary gig. They have to be ambitious to get noticed and to get employment. The system demands it from them. They cannot afford to be idle. As noted above, the idle academic is viewed as the ultimate pariah.

I don’t think we should be sanguine or fatalistic about this state of affairs. I think that the performance management culture in modern universities often encourages and rewards the worst kinds of ambition. In particular, I think that it often incentivises and rewards a destructive and non-virtuous competitiveness among academics. Still, given that the system demands ambition and treats idleness as a luxury few can afford, I think it might be possible to resist the negative forms of ambition and focus on the good kinds. After all, success is very difficult to measure in academia. There are many metrics out there, and most people don’t really know how they should be weighted or evaluated. As a result, it might be possible to channel ambition in positive directions and avoid the worst excesses.

I live in hope.