Monday, March 5, 2018

The Extended Mind, Ethical Parity and the Replaceability Criterion: A Critical Analysis




I was recently watching Netflix’s sci-fi series Altered Carbon. The basic premise of the show — which is based on a series of books by Richard Morgan — is that future humans develop a technology for uploading their minds to digital ‘stacks’. These stacks preserve the identity (“soul”) of the individual and can be transferred between different physical bodies, even after one of them has been ‘killed’. This has many social repercussions, one of which is that biological death — i.e. the destruction or fatal mutilation of the body — becomes a relatively trivial event. An inconvenience rather than a tragedy. As long as the stack is preserved, the individual can survive by being transplanted to another body.

The triviality of biological form is explored in many ways in the show. Violence is common. There are various clubs that allow people to destroy one another’s bodies for sport. There is also considerable inequality when it comes to access to new bodies. The wealthy can afford to clone their preferred body types and routinely transfer between them; the poor have to rely on social distribution schemes, often ending up in the bodies of condemned prisoners. Bodies in general have become commodities: things to be ogled, prodded, bought and sold. The show has been criticised for its gratuitous nudity — the male and female performers are frequently displayed partially or fully nude — but the showrunner has defended this, arguing that it is what you would expect in a world in which the body has become disposable. I think there is some truth to this. I think our attitude toward our bodies would be radically different if they were readily ‘fungible’ (i.e. capable of being replaced by an identical or ‘as good’ item).

What if the same were true of our minds? What if we could swap out parts of our minds as readily as we swap out the batteries in an old remote control? Destroying a part of someone’s mind is currently held to be a pretty serious moral offence. If I intentionally damaged the part of your brain that allowed you to remember faces, you’d hardly take it in your stride. But suppose that as soon as I destroyed the face-recognition part you could quickly replace it with another, functionally equivalent part? Would it be so bad then?

These are not purely speculative questions. Neuroscientists and neurotechnologists are hard at work on ‘brain prosthetics’ that could enable us to swap out brain systems. Furthermore, there are plenty of philosophers and cognitive scientists who claim that we already routinely do this with parts of our minds. They take a broad interpretation of what counts as a part of a ‘mind’, claiming that our ‘minds’ extend beyond the boundaries of our bodies and are distributed between our brains, our bodies, and our surrounding environments. Some of them argue that if we take this cognitive extension seriously, it leads us to an ‘ethical parity’ thesis (Levy 2007). This thesis holds that interfering with the non-neural parts of our minds carries just as much moral weight as interfering with a neural part. This has two possible consequences, depending on the context and the nature of the interference: (i) it could mean that we ought to take non-neural interferences more seriously than we currently do; or (ii) that we should be less worried about neural interferences than we currently are.

In this post, I want to look at some arguments for taking the ethical parity thesis seriously. I do so by investigating an article by Jan-Hendrik Heinrichs which is skeptical of strong claims to ethical parity. I agree with much of what Heinrichs has to say, but his argument rests a lot of weight on the ‘replaceability’ criterion that I alluded to above and I’m not sure that this is a good idea. I want to explain why in what follows.


1. Understanding the Case for Ethical Parity
The ethical parity principle (EPP) was originally formulated by Neil Levy in his 2007 book Neuroethics. It came in two forms (2007, 67), both of which were premised on accepting that mental processes/systems are not confined to the brain:

Strong Parity: Since the mind extends into the external environment, alterations of external props used for thinking are (ceteris paribus) ethically on a par with alterations of the brain.

Weak Parity: Alterations of external props are (ceteris paribus) ethically on a par with alterations of the brain, to the precise extent to which our reasons for finding alterations of the brain problematic are transferable to alterations of the environment in which it is embedded.

The Strong EPP works from something called the ‘extended mind hypothesis’, which holds that mental states can be constituted by a combination of the brain and the environment in which it is embedded. To use a simple example, the mental act of ‘remembering to pick up the milk’ could, according to the extended mind hypothesis, be constituted by the combined activity of my eyes/brain decoding the visual information on the screen of my phone and the device itself displaying a reminder that I need to pick up the milk. The use of the word ‘constituted’ is important here. The extended mind hypothesis doesn’t merely claim that the mental state of remembering to pick up the milk is caused by or dependent upon my looking at the phone; it claims that the mental state is partly constituted by the combination of brain and smartphone. It’s more complicated than that, of course, and I have examined the hypothesis in detail in previous blogposts. Suffice to say, proponents of the hypothesis don’t allow just any external prop to form part of the mind; they have some criteria for determining whether an external prop really is part of the mind and not just something that plays a causal role in it. I’ll return to this below.

Heinrichs thinks there is a major problem with the Strong EPP. He says that the argument for it is flawed. If you look back at the formulation given above, you’ll see that it presents an enthymeme. It claims that because the mind is extended, external mental ‘realisers’ (to use the jargon common in this debate) carry the same moral weight as internal mental realisers. But that inference can only be drawn if you accept another, hidden premise, as follows:


  • (1) The mind extends into the external environment, i.e. external props contribute (in a constitutive way) to mental processes.

  • [Hidden premise: (2) All contributors to mental processes are on a par when it comes to their moral value]

  • (3) Therefore, alterations of external mental props that contribute to mental processes are ethically on a par with alterations of the brain.



The problem is that the hidden premise is not persuasive. Not all contributors to mental processes are morally equivalent. Some contributors could be redundant, trivial or easily replaceable and that seems like it could make a difference. I could destroy your smartphone, but you might have another one with the exact same information recorded in it. You might have suffered some short-term harm from the destruction but to claim that it is on a par with, say, destroying your hippocampus, and thereby preventing you from ever remembering where you recorded the information about buying the milk, would seem extreme. So parity cannot be assumed, even if we accept the extended mind hypothesis.

The Weak EPP corrects for this problem with the Strong EPP by making moral reasons part and parcel of the parity claim. Although not stated clearly, the Weak EPP effectively says that (because of mental extension) the reasons for finding interferences with internal mental parts problematic transfer over to external mental parts, and vice versa. Furthermore, the Weak EPP doesn’t require the extended mind hypothesis, which many find implausible. It can work from more modest distributed/embodied theories of cognition, which hold that both the body and its surrounding environment play a critical causal role in certain mental processes, even if they aren’t technically part of the mind. An example here might be the use of a pen and paper while solving a mathematical problem. While in use, the pen and paper are critical to the resolution of the puzzle, so much so that it makes sense to say that the cognitive process of solving the puzzle is not confined to the brain but is rather distributed between the brain and the two external props. This is true even if you don’t think the pen and paper are part of the mind. There is, in other words, an important dependency relation between the two such that if you find it problematic to disrupt someone’s internal, math-solving, brain-module while they are trying to solve a problem, you should also find it problematic to do the same thing to their pen and paper when they are mid-solution (and vice versa).

But even the Weak EPP has its problems. When exactly do the reasons transfer over? What reasons could we have for finding internal and external interferences in mental processes problematic? In short: when might some form of weak ethical parity arise?


2. Three Criteria for Parity: Original Value, Integration and Replaceability
Heinrichs’s article focuses on three criteria that he thinks are relevant when considering whether there is ethical parity. I want to consider each in turn.

The first criterion focuses on the distinction between original value and derivative value. It’s easy to explain the distinction; harder to defend it. Go back to the pen and paper example from the previous section. You could argue that the pen and paper have no original/intrinsic value in this scenario. The value that they have is entirely derivative: it derives from the fact that they are currently playing an important part in your mathematical problem-solving process. If you had acted differently, they would have no value. For example, if you transferred to a different pen and paper because you made a critical error the first time round, the original pen and paper would have no value; or if you had tried to solve the problem in your head, they would never acquire any value. In other words, you, and all your constituent parts, have original/intrinsic value; the value of the external props and artifacts is entirely dependent on the uses to which you put them. Focusing on this distinction scuppers most claims to ethical parity. External props can never have quite the same moral weight as internal mental realisers because they will always lack original value.

Original Value Criterion: You and your constituent parts have original/intrinsic value but external props and artifacts have merely derivative value. Thus there will always be an important ethical distinction between what’s internal to you and what’s not.

The criterion is easy to explain because it has intuitive pull: we probably do think about ourselves (and what is a proper part of ourselves) in this way. But I think it is slightly more difficult to defend because it depends on a number of contested claims. The first contested claim concerns what actually counts as a proper part of ourselves. If we accept the extended mind hypothesis, then external props could count as proper parts of our selves and hence could have the same original value as internal parts (Heinrichs seems to accept this point). The second contested claim follows from this and concerns whether or not internal parts are always intrinsically valuable. If we cast a more critical eye over our internal parts, we might find that some of them do not really count as proper parts of our selves because they do not form some essential or integral element of who we are. In that case their destruction could be ethically trivial. For example, the destruction of one of my neurons is hardly an ethical tragedy: I can survive perfectly fine without it. This suggests, to me, that a single neuron lacks intrinsic value: it is not an essential part of who I am, even if it is internal to my body/brain. The third contested claim concerns whether or not all external props lack intrinsic value. I think this could be challenged. Some external props might have their own, independent value, e.g. aesthetic beauty. Admittedly, this is a tangential point in this debate, but it is worth bearing in mind.

The second criterion for assessing ethical parity is the degree of integration between the user and the external prop. Even though Heinrichs makes much of the original/derivative distinction he acknowledges that some external props could be so closely integrated with a person’s cognitive processes that their value, even though derivative, could be very high. Consider the surgeon who relies on robotic arms to help her complete a delicate operation; or the blind person who uses a cane to help them navigate. There is a high degree of integration between the external props and the user in both of these cases. If you broke down the robotic arms, or stole the cane, you would be doing something with a lot of moral disvalue. This is because the user depends so heavily on the prop that you would seriously disrupt their mental/cognitive processes by interfering with it.

Integration Criterion: When a user is highly integrated with an external prop it can have a high degree of moral value.

But how do you assess degrees of integration? Various sub-criteria have been proposed over the years. Richard Heersmink has argued that there are eight sub-criteria of integration, including (i) the amount of information that flows between the user and prop; (ii) the reliability of that information; (iii) the durability of the prop; (iv) the degree of trust placed in the prop; (v) the procedural transparency of the prop; (vi) the informational transparency of the prop; (vii) the degree of individualisation/customisation of the prop; and (viii) how much the prop transforms the capabilities of the user. All of these seem sensible, and I agree that the more integrated a user is with the external prop the higher the moral value attached to it. But as Heinrichs points out, Heersmink’s criteria work best when we are dealing with information technologies, and not with other kinds of external props (e.g. brain stimulation devices).

This leads Heinrichs to consider another criterion, one that he thinks is particularly important: the replaceability criterion. To explain how he thinks about it, I will quote directly from his article:

Replaceability Criterion: “Generally, an irreplaceable contributor to one and the same cognitive process is, ceteris paribus, more important [i.e. carries more value] than a replaceable one.” (Heinrichs 2017, 11)

Using this criterion, Heinrichs suggests that many internal parts are irreplaceable and so their destruction carries a lot of moral weight, whereas many external props are replaceable and so their destruction carries less weight. That said, he also accepts that some external props could be irreplaceable, which means that destroying them would mean doing a serious wrong to an agent. However, he argues that such irreplaceability needs to be assessed over two different timescales. An external prop might be irreplaceable in the short-term — when mid-activity — but not in the long-term. Someone could steal a blind person’s cane while they are walking home, thereby doing significant harm to them with respect to the performance of that activity, but the cane could be easily replaced in the long-term. The question is whether this kind of long-term replaceability makes any moral difference. Intuitively, it seems like it might. Destroying something that is irreplaceable in both the short and long-term would seem to be much worse than destroying something that is replaceable in the long-term. Both are undoubtedly wrong, but they are not ethically on a par.

This brings us, at last, to the question posed in the introduction. If technology continues to advance, and if we develop more external props that allow us to easily replace parts of our brains and bodies — if, in some sense, the component parts of all mental processes are readily fungible — will that mean that there is something trivial about the destruction of the original biological parts? Here’s where the replaceability criterion starts to get into trouble. If you accept that the degree of replaceability makes a moral difference, you start slipping down a slope to a world in which many of our commonsense moral beliefs lose traction. The destruction of limbs and brain parts could be greeted with equanimity because they can be easily replaced. The counterintuitive nature of this world has led others to argue that the replaceability criterion should be deployed with some caution in this context. It clearly doesn’t capture everything we care about when it comes to understanding interpersonal wrongs: there are intrinsic wrongs/harms associated with destroying parts of someone’s body or mental processes that need to be taken very seriously, even if those parts are easily replaceable. Replaceability cannot erase all wrongdoing.

But this suggests to me that more needs to be said about when replaceability really matters and when it doesn’t. It’s possible, after all, that our moral intuitions about right and wrong should not be trusted in a world of perfect technological fungibility. One suggestion I have is that the intrinsic/instrumental value distinction could play an important role in determining when replaceability really matters. Some things are intrinsically valuable: any replacement with a functional equivalent will fail to provide the same level of value. Consider a beloved family pet. You could replace it with another pet, but it wouldn’t be the same. Other things are instrumentally valuable: replacement with a functional equivalent would provide the same level of value. Consider a knife or fork that you use to eat your food. If one falls on the ground and is replaced by a functional equivalent we don’t lament the loss of the original.

So I think the critical question, then, is whether the parts of our cognitive/mental processes (or biological systems) have some intrinsic value, such that any replacement would fail to provide the same level of value, or whether they are merely instrumentally valuable: they matter because they help to sustain us in certain activities and, ultimately, sustain our personal identities. I tend to favour the instrumental view, which would imply that nostalgia for our original biological parts is irrational in a future of perfect technological fungibility. This does not mean that there is nothing wrong with attacking someone and interfering with those parts. It just means that it might be less wrong than it is in our current predicament.

That might be a disturbing conclusion for some.





Saturday, March 3, 2018

Episode #37 - Yorke on the Philosophy of Utopianism


In this episode I talk to Christopher Yorke. Christopher is a PhD candidate at The Open University. He specialises in the philosophical study of utopianism and is currently completing a dissertation titled ‘Bernard Suits’ Utopia of Gameplay: A Critical Analysis’. We talk about all things utopian, including what a 'utopia' is, why space exploration is associated with utopian thinking, and whether Bernard Suits is correct to say that games are the highest ideal of human existence. You can download the episode here or listen below. You can also subscribe on iTunes or Stitcher (the RSS feed is here).


Show Notes

  • 0:00 - Introduction
  • 2:00 - Why did Christopher choose to study utopianism?
  • 6:44 - What is a 'utopia'? Defining the ideal society
  • 14:00 - Is utopia practically achievable?
  • 19:34 - Why are dystopias easier to imagine than utopias?
  • 23:00 - Blueprints vs Horizons - different understandings of the utopian project
  • 26:40 - What do philosophers bring to the study of utopia?
  • 30:40 - Why is space exploration associated with utopianism?
  • 39:20 - Kant's Perpetual Peace vs the Final Frontier
  • 47:09 - Suits's Utopia of Games: What is a game?
  • 53:16 - Is game-playing the highest ideal of human existence?
  • 1:01:15 - What kinds of games will Suits's utopians play?
  • 1:14:41 - Is a post-instrumentalist society really intelligible?
 


Wednesday, February 28, 2018

But is it cheating? Some thoughts on robots and sexual infidelity




[This is a short article that I wrote in collaboration with Neil McArthur, for promotional reasons, when the Robot Sex book (pictured above) was coming out. Since it is unlikely to be published now, I thought I would share it here. It's not the most rigorous piece I've ever written, but I think the core insight is worthwhile.]

The comedian Richard Herring recently created a series of sketches centred around the question ‘is it cheating if you have sex with a robot?’ As someone who has been researching the topic of sexual relationships with robots for several years, I am dismayed to find that this is the most common question I get asked. The advent of sophisticated sex robots raises a number of important ethical questions for society, but the cheating question does not seem to be among them.

But since others think it is important it probably behooves me to provide an answer. Here’s my best shot.

First, I presume that people have something like the following in mind when they ask the question:

“Given that I am in a serious and committed relationship with another human being, if I have sex with a robot, does this count as cheating on my human partner?”

To answer that, you need to think about what it means to ‘cheat’. There are at least two distinct meanings of the word in common parlance. The first, which is specific to intimate relationships, is that cheating occurs when one person engages in sexual contact with someone other than their ‘official’ intimate partner. The second, which applies more generally, is that cheating occurs when you break the rules of a given practice to gain an advantage. To avoid confusion, we can refer to the first as ‘cheating*’ and the second as simply ‘cheating’.

When people ask the cheating-question, they usually focus on cheating*, but I argue that this is a mistake. They really should focus on cheating. Why? Well, for one thing, cheating* is not an issue in some relationships. Some people have ‘arrangements’ that open the doors to intimate sexual contact with third parties. They have established internal ground rules for their relationships that say that this is permissible, under certain circumstances. They care about breaking those rules, not about infidelity per se.

Furthermore, the focus on cheating* forces us into endless debates about which forms of intimate contact count as cheating*. Do you cheat* if you kiss someone else? What about if you send them explicit text messages? This encourages people to take an overly technical and legalistic approach to their relationships — to hope that they can avoid their partner’s ire or disappointment by falling outside the technical definition of cheating*. They consequently overlook or ignore what’s really bothering their partner about their conduct: the sense of betrayal or emotional harm. They care about cheating* when they should care about cheating.

If we accept this shift in focus, the cheating-question becomes relatively easy to resolve. It’s simply a question of whether the internal rules of the relationship forbid the use of sex robots. That’s something that the parties to the relationship should determine for themselves, through negotiation and agreement. In a liberal society, this seems like the right approach: intimate partners should be able to determine the rules of engagement for their own relationships without being dictated to by societal norms (provided that their own rules don’t breach other legitimate laws).

But, of course, it’s not that simple. Most people don’t set down explicit rules of engagement for their relationships -- even though they probably should. It could save a lot of heartache and upset if they did. Instead, they figure things out as they go along and rely on general social conventions to fill the gaps in any rules they may have agreed. This is not an unusual practice. In education, for example, official assessments are often governed by explicit rules that determine what counts as cheating, but those rules don’t cover every possible form of cheating or address novel technologies that enable new forms of cheating. Assessors rely on background norms of fair play to fill in the gaps. These background norms form the basis for ‘default rules’ that apply until they are overridden (or confirmed) by explicitly articulated rules.

This means that even if we do focus on cheating rather than cheating* we cannot completely avoid the technical questions as to whether having sex with a robot counts as cheating. We also have to consider whether society’s default rules forbid the use of sexbots in relationships. This is tricky since this is a novel and emerging technology and we don’t have agreed-upon societal expectations in relation to it. We only have analogies. At the moment, the default rule in most Western societies seems to favour monogamy. Things may be changing in this respect, of course, and certain pockets of society may have clearly adopted non-monogamous default rules, but within the 'pockets' that I frequent I don't see anything happening to shift the presumption in favour of monogamy. This default rule holds that having intimate contact with a person other than your ‘official’ partner is a form of cheating. If you are entering into a relationship with someone, you would need to explicitly agree to deviate from this default rule. But what about other tricky cases that threaten the default rules? For example, what are society’s default rules in relation to the use of masturbatory sex toys and pornography? Things seem much fuzzier here. Historically, I suspect that the use of both would have counted as a form of intimate betrayal (i.e. cheating). Nowadays, I am not so sure. People now often take it as a given that their partners will watch pornography or use sex toys without their explicit consent.

Using these two examples as a guide suggests that whether a couple needs an explicit override or not with respect to sexbots depends on whether they think the sexbot is more like a person or like a sex toy. For the foreseeable future, sexbots will not be persons, but they will look and act in person-like ways. And because they look and act like persons, it’s unlikely that sexbots will be viewed simply as another sex toy. They will lie in a zone of uncertainty. That means, for the foreseeable future, if you plan on using a sex robot whilst in a committed relationship, you should probably explicitly negotiate for this with your partner. That’s the only way to be sure you are not cheating. But in the long-term, as sex robots proliferate, their use may be normalised, and so not something that needs to be explicitly negotiated.




Monday, February 12, 2018

Taking the Relational Turn: How should we think about the moral status of animals, robots and Others?




How should we think about the moral status of non-human (or pre-human) entities? Do animals/robots/foetuses have moral status? If so, why? It is important to get the answer right. Entities with moral status are objects of moral concern. We typically owe duties to them and they may have rights against us. Furthermore, we don’t want to make any moral errors. We don’t want to mistreat a proper object of moral concern or impose burdensome and unnecessary duties. How can we avoid this?

David Gunkel and Mark Coeckelbergh try to provide some answers in their paper ‘Facing Animals: A Relational, Other-Oriented Approach to Moral Standing’. As you might guess from the title, the paper is primarily about the moral status of animals, but the position defended therein is of broader ethical significance. In essence, Gunkel and Coeckelbergh argue that when thinking about the moral status of animals (and other entities) we should take the ‘relational turn’:

The Relational Turn: When thinking about the moral status of non-human entities we should focus less on their intrinsic metaphysical properties and more on how we relate to them.

In the remainder of this post, I want to set out Gunkel and Coeckelbergh’s case for the relational turn, explaining what that means in more concrete terms, and offering some critical reflections of my own.


1. Against the Properties Approach
Gunkel and Coeckelbergh present the relational turn as an alternative to what they claim is the dominant approach in the field of animal ethics: the properties approach. The properties approach answers questions of moral status by focusing on the ontological properties of the entity in question. To give an example, two of the most famous voices in the field of animal ethics are Peter Singer and Tom Regan. Both make strong arguments in favour of the moral status of animals, but do so from different moral traditions. Singer is a utilitarian; Regan is a Kantian. Nevertheless, both build their arguments from claims about the ontological properties of animals. For Singer, what matters for moral status is the capacity for suffering. If animals have this capacity, then they are proper objects of moral concern, and we have a duty to prevent their suffering. For Regan, what matters is whether animals can be the ‘subject of a life’. If they can express this property, then they are proper objects of moral concern and we have corresponding duties toward them.

Both arguments are quintessential examples of the properties approach in action. To put that approach on a more formal footing, we can say that the following represents the Singer/Regan-style argument for moral status:


  • (1) Any entity that exemplifies property P [‘capacity for suffering’/‘being the subject of a life’] has moral status.

  • (2) Animals exemplify property P.

  • (3) Therefore, animals have moral status.



This is an abstract template. Singer and Regan fill it out in particular ways, and those ways have proven quite influential, but you could dispute their take on it. Perhaps it is some other property (or combination of properties) that really matters when it comes to determining moral status (e.g. the capacity for conversation/speech or the capacity for religious belief)? It is important to stress the flexibility of the properties approach here. It becomes important below.

Despite this flexibility, Gunkel and Coeckelbergh argue that this properties approach to animal ethics is fundamentally misguided. They offer four main criticisms. These criticisms do not target particular premises of the Singer/Regan-style argument; instead, they take issue with the entire Singer/Regan enterprise.

The first criticism is that the properties approach proceeds from an unexamined anthropocentric bias. In other words, proponents of the approach start with properties that humans clearly exemplify, such as sentience or self-awareness, and then work outwards from those properties to determine the moral status of animals. If animals are sufficiently human-like with respect to those properties, they will be welcomed into the community of moral concern. If they are not, they are excluded. This, then, is a critique of the reasoning procedure followed by proponents of the properties approach.

The second criticism is that the properties approach faces significant epistemological problems. Many of the properties favoured in Singer/Regan-style arguments are epistemically opaque. How can we know if an animal suffers or is the subject of a life? We don’t have direct epistemic access to these states of being. We have to infer them from outward behaviour, and this leads us into many interminable disputes. Is the dog really suffering because it yelps? Does it have the concept of itself as a continuing being? We can never know for sure. Of course, if this is really a problem, then it is a problem for how we determine the moral status of humans too. After all, we don’t have direct access to another human being’s inner mental life. But Gunkel and Coeckelbergh argue that there is just much more ambiguity and doubt in the case of animals.

The third criticism is that the properties approach creates an illusion of neutrality when it comes to determining moral status. The idea is that the presence or absence of the relevant properties can be objectively and neutrally determined. It is a matter of fact whether or not an animal can suffer; it is a matter of fact whether they are the subject of a life. These are matters to be determined by scientists and animal behaviourists, not ethicists. But this ignores how deeply moral/ethical the determination of moral status really is.

The fourth criticism is that the properties approach often involves sticking with a traditional and defective method for determining moral status. The decisions as to which properties ‘count’ are ones that are typically made before we are born and are deeply embedded in social norms and practices. This is why, historically, women and slaves were excluded from moral communities. To persist with the properties approach is to persist with these dubious social and cultural norms.

I have to say I have some problems with each of these criticisms. I certainly don’t think any of them poses a fatal problem for the properties approach. On the contrary, most of them seem to be either unavoidable (the first and second criticisms) or just problems with how the method has or might be employed (the third and fourth criticisms). These problems seem surmountable to me. But I’ll set those concerns to the side for now and consider the merits of the ‘relational’ alternative.



2. What does the relational turn entail?
As noted in the intro, taking the relational turn involves focusing on how other beings relate to us and enter into our lives, and not on their metaphysical properties. It is our relations to these ‘Others’ that raise ethical questions about their status, not some prior knowledge of their metaphysical properties. In promoting this relational approach, Gunkel and Coeckelbergh are heavily influenced by the work of the phenomenologist Emmanuel Levinas. He argued that ontology does not precede morality. On the contrary, the primary fact of existence is its relationality, i.e. the fact that we are in the world with others who intrude upon us in various ways. This intrusion necessitates a moral response, and as part of that response we start to parse our relations into ontological categories. Moral engagement with the Other is the more fundamental fact of existence.

Here’s where things get a little obscure and linguistically challenging. Levinas (and others) explain this way of thinking by asking whether other beings in the world ‘take on a face’. This ‘taking on a face’ seems to be the equivalent of taking on ‘moral status’. Gunkel and Coeckelbergh like this terminology and argue that the ‘face-taking’ question is the central one in animal ethics because it is distinct from the properties question. They formulate the ‘face-taking’ question in the following terms:

Face-taking question: What does it take for an animal (or an ‘Other’) to supervene and be revealed as having face? Or, to put it another way, under what practical conditions does an animal get included in a moral community?

Asking this question takes us away from the properties-oriented mindset. To further explain the shift, Gunkel and Coeckelbergh reference the work of Donna Haraway, who argued that the crucial question in animal ethics is not whether animals can suffer but whether they can ‘work’ or ‘play’, whether we can enter into embodied interpersonal interactions with them, and so on. Gunkel and Coeckelbergh then focus on the conditions under which animals start to enter into meaningful and morally significant relations with us. Their discussion gets quite detailed, but they single out two things that seem to be quite important in determining whether animals get included or not.

The first is the ‘naming’ of an animal. Giving an animal a proper name is a speech act with moral consequences. It draws the animal inside your moral circle. This is an idea that often features in media representations of animals. I recall many TV shows from when I was younger that involved plotlines in which a child named some farm animal that they were later told was going to be slaughtered and eaten. The prior naming gave the subsequent awareness of slaughter a moral seriousness that it would otherwise have lacked. We feel closer to the animals we name and care more about their fate.

The second important condition is the physical location of the animal. Animals that live outside our homes — in the fields and countryside — are different from animals that share our homes. By inviting them into our homes we invite them into our moral circles:

For an animal, it matters a great deal where it is, in which place it is, and what techniques and technologies have been used to position it. For example, a “pet” is in the house. This means it is part of the human domicile, the sphere of the “who-s” as opposed to the “what-s.”
(Gunkel and Coeckelbergh 2014, 727)

These are two clear examples of how we might answer the face-taking question. They give us a sense of the conditions under which animals can ‘take on a face’. But where does it actually get us? Gunkel and Coeckelbergh acknowledge that taking the relational turn does not necessarily give us clear ethical guidance:

Note that this analysis of conditions of possibility for relations does not in itself advance a straightforward normative position; it does not say that we should treat domesticated farm animals in a more personal way. 
(2014, 730)

But they claim that this was not their goal. Their goal was to get us to think differently about the question of moral status.


3. Criticisms and Reflections
Gunkel and Coeckelbergh’s case for the relational turn is an interesting one, and I think the basic idea of the relational turn is worth taking seriously, but I have some concerns about their whole project. I would like to close by offering these up for consideration.

First, I’m not convinced that taking the relational turn draws us that far away from the properties approach. I guess it all depends on what you mean by a ‘property’, but I would argue that the face-taking question posed by Gunkel and Coeckelbergh is very much cut from the same cloth as the properties approach they are so keen to criticise. As noted above, the properties approach follows an abstract argument template. The kinds of properties that are relevant to determinations of moral status could be different from those appealed to by Singer and Regan. There is some in-built flexibility. Indeed, I think it could include the relational properties (does the Other ‘have a name’ or ‘live in close proximity to us’) mentioned by Gunkel and Coeckelbergh. If there is any real distinction between the approaches it is that the Singer/Regan approach focuses on properties that are (allegedly) intrinsic to the animal, whereas the Gunkel/Coeckelbergh approach focuses on properties that arise from the relations between the animal and its environment. But I don’t see why that distinction means the two approaches have to be in opposition to one another; both sets of properties could be crucial when determining moral status.

Second, in not offering any normative guidance, and in claiming this was not their intention, Gunkel and Coeckelbergh are doing something that I find a little bit disingenuous. After all, surely it is the normative question that motivates this entire inquiry? We want to know when and whether we are making errors in the ascription of moral status. That’s certainly what motivates the Singer/Regan-style argument. To shift focus to the more descriptive question — ‘under what conditions do animals enter our moral communities’ — is at best an interesting diversion and, at worst, a distraction from what really matters. I suspect that if we want to answer the normative questions, we will need to stick with the more traditional properties-style of reasoning.

Finally, although Gunkel and Coeckelbergh criticise the properties approach on the grounds that it is anthropocentric and (potentially) premised on defective moral traditions, it seems to me that the relational approach is equally susceptible to these critiques. Indeed, focusing on relational properties in preference to intrinsic properties makes human beings far more central to ascriptions of moral status than the properties approach of Singer and Regan does. Furthermore, the relational approach also risks morally ossifying traditional conceptions of how we ought to relate to animals and other non-human entities. For example, we might continue to think that we don’t need to care about the cows in the fields because (a) we don’t give them names and (b) we don’t invite them to live in our homes. If anything, the Singer/Regan approach is more potentially disruptive of this traditional moral complacency about animals. Again, I appreciate that Gunkel and Coeckelbergh don’t make normative claims on behalf of the conditions they identify, but in not doing so I fear they make such complacency more excusable.




Wednesday, February 7, 2018

The Quantified Relationship: Target Article with Replies




Along with Sven Nyholm and Brian Earp, I have just published a target article in the American Journal of Bioethics on the use of quantified self technologies in intimate relationships. There are eleven response papers, including one from us responding to the responses.  You can access the issue here (and -- shhh! -- a version of the target article here). Full details of each paper below:


The Quantified Relationship (2018) AJOB 18(2): 3-19  by Danaher, Nyholm and Earp
Abstract: The growth of self-tracking and personal surveillance has given rise to the Quantified Self movement. Members of this movement seek to enhance their personal well-being, productivity, and self-actualization through the tracking and gamification of personal data. The technologies that make this possible can also track and gamify aspects of our interpersonal, romantic relationships. Several authors have begun to challenge the ethical and normative implications of this development. In this article, we build upon this work to provide a detailed ethical analysis of the Quantified Relationship (QR). We identify eight core objections to the QR and subject them to critical scrutiny. We argue that although critics raise legitimate concerns, there are ways in which tracking technologies can be used to support and facilitate good relationships. We thus adopt a stance of cautious openness toward this technology and advocate the development of a research agenda for the positive use of QR technologies.

Monday, February 5, 2018

Procreative Beneficence and the Non-Identity Problem




Julian Savulescu has long defended the idea that if you are going to procreate then you have a duty to procreate the child with the best expected quality of life that it is possible for you to procreate. He calls this the ‘principle of procreative beneficence’ (PPB for short). The PPB is highly controversial and counterintuitive. It seems to go against the norm under which we currently operate when it comes to procreative choice — viz. that parents can leave procreation to the reproductive lottery — and it has many academic critics.

One critic is Rebecca Bennett. She argues against the PPB by using Derek Parfit’s famous ‘non-identity problem’ (NIP). Savulescu’s argument suggests that there is an obligation to procreate in the best possible way, which in turn implies that it is wrong to not comply with this obligation. But how exactly can it be wrong? According to the NIP, it is normally the case that in order for an action to be wrong there must be an actual subject who is harmed by that action. In the case of procreative choices, there is no subject that is harmed by the choice: the child who is the object of the choice has not yet come into existence and so cannot be harmed by the choice. This leads Bennett to the view that procreative choices are not moral choices at all. They are ‘mere preferences’.

Ben Saunders has pushed back against Bennett’s view. He makes three main arguments: (i) that Bennett partly misconstrues the NIP by overlooking other ways in which it can be solved; (ii) that there can, in fact, be cases of harmless wrongdoing; and (iii) that even if procreative choices are not, strictly speaking, ‘moral’ in nature, they may nevertheless not be ‘mere’ preferences.

In what follows, I want to consider all three of Saunders’s arguments. This will give me an excuse to describe the NIP at some length, which is something that I have surprisingly failed to do in the past eight years of this blog.


1. Understanding the Non-Identity Problem
The NIP was originally formulated as a problem in the field of population ethics. It works like this: intuitively, it seems like we have some moral obligations to future generations. When I think about my own conduct, I certainly have a strong sense that I should not despoil the planet for my own benefit if this comes at the expense of the well-being of future generations. But there is a problem with this intuition: it clashes with another widely held belief about the preconditions of wrongdoing. According to the ‘person-affecting view’, an action is only wrong if it actually harms someone. Future generations do not yet exist — indeed, their existence is dependent on what I and others of my generation do — so they cannot be harmed by anything I do today. It seems to follow from this that I have no obligations to future generations.

From this description, it should be clear that the NIP is not an argument for a particular point of view. It is, rather, a contradiction/puzzle about our commonly-held moral beliefs. Furthermore, although the NIP is firmly grounded in population ethics, it can be formulated in more generic and abstract terms. This is the characterisation that Saunders favours in his article. He argues that the NIP arises from an inconsistent triad of propositions. The triad is as follows:


  • (1) An action cannot be wrong unless it does harm.
  • (2) Some risky policy (X) does no harm.
  • (3) The risky policy (X) is wrong.


(If you need this to be cashed out in more concrete terms, then simply replace ‘risky policy (X)’ in the above formulation with ‘the decision to have a child’ and you will see how it applies to Savulescu’s PPB.)

The NIP arises because you cannot simultaneously accept all three of these propositions. You can, at most, simultaneously accept two of them and reject the third. You need to figure out which one has to go. That’s how you solve the NIP. Are you going to stick with (1) and (2) and reject (3)? Or accept (2) and (3) and reject (1)? Or even accept (1) and (3) and reject (2)? In the diagram below, I illustrate how this inconsistent triad works, showing the various combinations of views that it is possible to accept.
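For readers who like to see this sort of structure checked mechanically, here is a small toy formalisation of the triad. This is my own illustration, not anything from Saunders’s article: I treat each proposition as a truth-function of two variables, “X does harm” and “X is wrong”, and verify that no assignment of truth values satisfies all three at once, while dropping any one of them restores consistency.

```python
from itertools import product

# Toy formalisation of the inconsistent triad (an illustration of mine,
# not Saunders's). A 'world' assigns truth values to "X does harm" and
# "X is wrong"; each proposition becomes a truth-function of the two.
def triad(harm, wrong):
    p1 = (not wrong) or harm  # (1) an action is wrong only if it does harm
    p2 = not harm             # (2) the risky policy X does no harm
    p3 = wrong                # (3) the risky policy X is wrong
    return [p1, p2, p3]

worlds = list(product([True, False], repeat=2))

# No assignment of truth values satisfies all three propositions at once...
assert not any(all(triad(h, w)) for h, w in worlds)

# ...but dropping any single proposition restores consistency, which is
# why there are exactly three routes to 'solving' the NIP.
for dropped in range(3):
    kept = [i for i in range(3) if i != dropped]
    assert any(all(triad(h, w)[i] for i in kept) for h, w in worlds)

print("Triad is jointly inconsistent; any pair of its members is consistent.")
```

The three `assert` checks in the loop correspond to the three solutions canvassed in the text: reject (3) (Bennett’s route), reject (1) (harmless wrongdoing), or reject (2) (the policy does harm after all).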



Bennett, in her critique of the PPB, favours one particular solution to the NIP. She accepts (1) and (2) and rejects (3). In doing so, she is in good company since the NIP is often interpreted in those terms. Indeed, Parfit himself seemed to favour this interpretation, arguing that those who wished to reject this interpretation were obliged to come up with some theory to explain how you could have harmless wrongdoing. As such, Bennett sees the NIP as a ‘burden of proof’-shifter.

But it’s not entirely clear that this is a fair interpretation of the NIP, and it is important to remember that you can resolve the problem in other ways than rejecting the belief that the risky policy is wrong. That’s Saunders’s first key argument.


2. The Possibility of Harmless Wrongdoing
Saunders then goes on to consider various interpretations of ‘harm’ that might allow for the possibility that procreative choices cause harm. As Saunders points out, a broad enough interpretation of what counts as harm (e.g. harm to self) would encompass procreative choices, but some people might reject this as being an insufficiently moralised interpretation of harm. An alternative solution is to adopt an impersonal account of harm, according to which there does not need to be a subject of harm.

I’ll skip over these aspects of Saunders’s article to focus on what I take to be the more important argument, namely: that harmless wrongdoings are a genuine possibility. If he’s right, we would have reason to reject (1). He defends this possibility with a series of thought experiments. I’ll list them all here since they are relatively short and they help to make the point:

Drunk-driving: “Someone drives along the public highway after excessive alcohol consumption, thereby endangering road-users. Fortunately they do not hit anyone.”

Trespass: “Someone breaks into your house and takes a nap in your bed. They do not damage anything and you do not find out.”

Deathbed Promise: “You promise your dying mother that you will perform an easy task that is of great importance to her, but you do not do so.”

Scrooge: “A wealthy individual gives nothing to charity, though they could benefit the needy at no real cost to themselves.”
(All taken from Saunders 2015, 504)

Saunders argues that each of these actions is wrong, but they are not harmful — at least not obviously so. Some people might argue that some of the actions are wrong because they create a risk of harm (e.g. drunk-driving) and so you can ultimately explain their wrongness by reference to harm, but this does not account for all the examples and, in any event, it would not explain the wrongness of all instances of drunk-driving. Furthermore, there is a danger that in responding to these cases people try to gerrymander the concept of harm so that it can explain all instances of alleged wrongdoing, due to some prior theoretical commitment to the person-affecting view. But this would be to ‘put the cart before the horse’ (Saunders 2015, 504) and simply assume that harmfulness always explains wrongfulness. Saunders thinks it is better to stick with the pre-theoretical judgment that these cases involve wrongs, but that these wrongs are not dependent on harm.

Of course, Saunders still hasn’t come up with a theory of wrongdoing that accounts for this, but he argues that this is not his burden to discharge. Intuitive judgments commonly precede ethical theories. It’s not clear why you should be forced to come up with a theory to explain every case of wrongdoing if the intuitive judgment of wrongdoing seems more robust than your theoretical commitments.


3. Non-moral Categorical Preferences?
Saunders’s final argument concerns the possibility of non-moral categorical preferences. This is not a direct response to the NIP, but it can be used to call into question one of the alleged implications of the NIP. Recall that Bennett thinks that because procreative choices do not involve wrongs they cannot be moral in nature. They are, consequently, ‘mere’ preferences. This carries the implication that there can be no meaningful evaluations or criticisms of people for acting on those preferences. Saunders argues that this conclusion is too hasty. It does not follow from the conclusion that procreative choices are non-moral that they are also ‘mere’ preferences.

Why not? Well, Saunders argues that the space of possible preferences is broader than Bennett presumes. Mere preferences lie at one extreme and moral preferences lie at the other. In between, there are other kinds of preferences that have some ‘weight’ or ‘heft’ and that can be criticised and evaluated. To understand why, we need to figure out what differentiates mere preferences from moral preferences. Saunders argues that the key to this lies in the property of ‘universality’. Mere preferences make no claim to universality. When I say that I prefer tea to coffee, I am not claiming that everyone should prefer tea to coffee. Moral preferences do make claims to universality. When I say that I should prefer not-killing to killing, I also think that you should have that preference. Moral preferences are, consequently, ‘categorical’ preferences: everyone in their right mind should share them.

The question is whether there are any preferences that are non-moral but still categorical. Saunders thinks that there are. For example, he thinks that certain aesthetic preferences can make claims to universality. If I say that ‘Mozart is a better composer than Salieri’, I am not just stating a mere preference for the former over the latter. I’m saying that there is something about Mozart’s musical compositions that means everyone in their right mind should see them as being superior to Salieri’s. The set of categorical preferences is broader than the set of moral preferences and is distinct from the set of mere preferences.



This is important because categorical preferences ‘purport to describe truths about what we have reason to value’ (2015, 505) and we are rightly ‘socially invested’ in their content. This has significance for the debate about procreative beneficence because it may be the case that procreative preferences are non-moral, but still categorical in nature. Someone who chooses to procreate a severely disabled child in preference to a child that is free from severe disability might not be doing anything morally wrong, but they might be contravening a categorical preference that we should all favour. Of course, putting it like this makes plain the controversial implications of the PPB, even if it is interpreted as a non-moral categorical preference.

This brings us to the end of this post. Nothing that has been said in this post should be taken as an endorsement of the PPB. That principle has many other hurdles to clear and I am not personally invested in defending it. But what has been said does suggest that one particular style of criticism, grounded in the NIP, has less bite than might first appear to be the case.




Sunday, February 4, 2018

The Moral Duty to Explore Space (2): Criticisms and Replies




(Part One)

Do we have a duty to explore space? In part one, I looked at Schwartz’s positive case for the existence of such a duty. That positive case rested on three main arguments. The first argument claimed that we have a duty to explore space in order to access scarce resources. The second argument claimed that we have a duty to explore space in order to avoid existential catastrophe (e.g. meteorite impacts). The third argument claimed that we have a duty to explore space in order to avoid the eventual solar burnout that will end life in our solar system. If you haven’t read part one, I would recommend doing so now.

In this post, we are going to consider some objections to this positive case. These objections settle into three main families:


Practical Objections: These objections take issue with the feasibility of space exploration and hence with the alleged duty to explore space.

Environmental Objections: These objections take issue with the environmental costs/consequences of space exploration.

Attitudinal Objections: These objections take issue with the attitude inherent in the belief that we have a duty to explore space (Schwartz also refers to these as ‘non-practical’ environmental objections).


There are two members of each of these three families, giving us six objections in total. This is illustrated in the diagram below. We will consider all six in what follows.





1. Practical Objections
There is an old Kantian claim that ‘ought implies can’: you cannot be obliged to do something that it is impossible to do. This is potentially a major objection to any positive case for a duty to explore space — so much so that it was briefly discussed at the end of part one. If it turns out that space exploration is not practically achievable, then it is hard to see how we could be obliged to pursue it. But how exactly might it not be achievable?

One possibility is that it is not achievable due to environmental pessimism. In other words, we are doing such damage to our planet right now that it is unlikely we will survive long enough to build the technologies we need in order to explore space. On this view, Schwartz is trafficking in false hope.

But Schwartz thinks there is an obvious response to this concern. If you go back to the arguments in part one, you’ll see that most of them are premised on different forms of environmental pessimism. The general idea behind them all is that if we hang around on Earth we will destroy our environment (or be destroyed by it) and so we won’t be able to survive in the long-term. This means that we have much more to fear from sticking around than we do from expanding into the solar system. To put it another way, space exploration seems like a good way to mitigate environmental pessimism, not something that will necessarily be hampered by it.

Another way in which to run the practical objection is to focus on technological impossibility. This is a more formidable version of the objection since certain forms of space exploration certainly depend on highly speculative and imaginary forms of technology. But other forms are possible with today’s technology, or with plausible extrapolations from it. So, for example, asteroid mining to access scarce resources, or manned exploration of the solar system, would seem to be within our grasp. Interstellar exploration might be a different story, but it is not completely beyond the bounds of possibility (e.g. through increased cyborgisation), and there is plenty of time left on the clock before it becomes necessary.


2. Environmental Objections
Building spacecraft and launching them into space is, undoubtedly, a costly endeavour. It requires a huge investment of time and energy. And building the new technologies that will enable us to explore the furthest reaches of space will require even greater investments of time and energy. These investments are costly and, to the extent that they are premised on building our capacity to avoid environmental collapse, we might question their wisdom. Perhaps there are other things we should be investing in that are less costly? Again, Schwartz looks at two different ways of running this objection.

The first focuses on the moral hazard involved in space exploration. I’ll leave Schwartz to explain the idea:

If humanity realizes that there are ample resources available in the wider solar system, we may decide to deplete the resources of the planet at a significantly increased clip. This possibility comes as a consequence of our being insulated from the risks associated with the reckless consumption of Earth’s limited resources. 
(Schwartz 2011, 79)

‘Moral hazard’ is a concept that is drawn from economics. The idea is that in certain circumstances insuring a person against a risk creates an incentive to be reckless with respect to that risk. So, for example, guaranteeing someone a bailout in the event that they are unable to pay back their loans might be thought to give them an incentive to do things that prevent them from being able to pay back the loans. Applied to the present context, people might view feasible space exploration as an insurance against the risk of environmental depletion here on Earth and so be more willing to engage in that environmental depletion. So it’s better not to invest in space exploration so that people are not encouraged to wreck the environment.

Is this objection persuasive? Well, here’s an analogy that suggests it is not. Imagine you are very wealthy and own a yacht. You like to invite people to party on your yacht. Someone warns you that having lots of parties increases the risk of fire on your yacht. Consequently, they advise you to invest in lots of lifeboats. But you argue, in response, that having the lifeboats will just encourage your guests to be more reckless with respect to the risk of starting a fire because they will then know they can escape if things start to heat up. So you decide not to invest in any lifeboats. Are you right not to do so? Schwartz says ‘no’. One reason is that fires can start for reasons beyond anyone’s control: no matter how cautious and safe your guests are, there is always some chance of a fire starting. It would be silly not to have lifeboats to insure against the risk of dying in one. But that is exactly equivalent to what the proponent of the moral hazard argument is claiming in the present context: even though the environmental risks to Earth are not all within our control (e.g. solar burnout, meteorite impact), we should build no lifeboats to the stars. Surely that can’t be right?

A second version of the objection just focuses on the actual environmental and economic costs of space exploration and argues that the money invested could go towards conserving and protecting the environment. Schwartz’s response to that is to argue that people tend to vastly overstate the true environmental and economic costs of space exploration. He gives some figures in his article. To give a couple of examples, NASA’s plan to send astronauts to Mars was expected to contribute 0.0012% to global ozone depletion and 0.004% to US carbon emissions. Furthermore, NASA’s entire budget represents around 0.5% of the total US Federal Budget. The reality is that other industrial and economic activities are far more costly. On top of this, transferring economic activity to space (e.g. industrial processing in space; asteroid mining) could actually reduce the environmental cost to the Earth. So the long-term benefits of space exploration for the environment are worth factoring in.


3. Attitudinal Objections
The final set of objections are the most philosophical in nature. They are concerned with the attitude inherent in the desire to explore space. The clearest exponent of these objections is probably Robert Sparrow, who has made them specifically in relation to the idea of terraforming other planets. He views this as a potential form of cosmic vandalism, and hence something to be lamented and avoided if possible. Schwartz looks at two variations of Sparrow’s objection.

The first variation focuses on the aesthetic insensitivity implicit in the desire to explore and exploit the space environment. The starting premise of this objection is that we all ought to have an appropriate appreciation and reverence for that which is beautiful. This includes human artistic creations and also aspects of the natural environment. Someone who did not appreciate natural beauty could be rightly criticised for not having the right aesthetic sensitivity. To underscore this point, Sparrow asks us to imagine a hiker who is out walking one day and sees a beautiful arrangement of icicles along the edge of a creek. The hiker knows that the icicles will melt by tomorrow but, in a way, this fragility makes their arrangement all the more beautiful. Now imagine that the hiker decides to kick in the icicles. How should we judge their actions? Sparrow thinks that there is something wrong about the decision to kick in the icicles; it does not display the right kind of aesthetic sensitivity. By analogy, the space explorer who wishes to exploit other planets or asteroids lacks aesthetic sensitivity.

Schwartz argues that while there is something to Sparrow’s objection, it is not obvious or straightforward in its application to space exploration. One initial problem, which plagues all aesthetic discussions, is that beauty can be very much in the eye of the beholder. What one person finds beautiful and worth preserving, another person can find ugly and worthy of destruction. There is a risk of triviality and impracticality if you assume an overly subjectivised understanding of beauty. In any event, assuming we do agree that certain aspects of the space environment are beautiful and worth conserving in their natural form, this does not tell against the general project of space exploration. Schwartz argues that we could have conservation zones and ‘national’ parks in space that allow people to appreciate the beauty of the space environment. Furthermore, he argues that it is plausible to suppose that we might have a ‘duty’ to appreciate things that are aesthetically beautiful, and so travelling to space to see the wonders of nature might be one way of discharging that duty. In other words, the space explorer can exhibit plenty of aesthetic sensitivity.

The other way to run Sparrow’s objection is to focus on the hubris involved in the decision to explore space. The idea here is that humans have a ‘proper place’ and that proper place is here on Earth. We are adapted to live on Earth. We can flourish here without too much difficulty. We are not adapted to life in space or on other planets. It is the height of arrogance and hubris to assume that we can, or that we should:

A proper place is one in which we can flourish without too much of a struggle. It is a place in which one fits and does not appear uncomfortable or out of place…If we have to wear space suits to visit and completely remodel it in order to stay, then it’s simply not our place. 
(Sparrow 1999, 238)

The problem with this argument is that it would put paid to many forms of human exploration. After all, there are many environments on Earth in which it is a daily struggle to survive, and yet we have explored and settled them nonetheless (e.g. Arctic environments). Furthermore, much of what makes Earth so liveable for us right now is the result of significant technological and industrial design. The conveniences of modern life (cheap food, electricity, running water, sanitation, antibiotics, central heating, etc.) are, arguably, the products of hubris. We wouldn’t have got to where we are today without embracing it.

This does not mean that we should pursue a policy of space exploration with reckless abandon. We should be cautious and reflective, and try to avoid the environmental errors of past instances of industrial expansion. But to suggest that we should simply stay put because Earth is our ‘proper place’ seems wrong.