Thursday, September 12, 2019

Are robots like animals? In Defence of the Animal-Robot Analogy


Via Rochelle Don on Flickr


People dispute the ontological status of robots. Some insist that they are tools: objects created by humans to perform certain tasks — little more than sophisticated hammers. Some insist that they are more than that: that they are agents with increasing levels of autonomy — now occupying some liminal space between object and subject. How can we resolve this dispute?

One way to do this is by making analogies. What is it that robots seem to be more like? One popular analogy is the animal-robot analogy: robots, it is claimed, are quite like animals and so we should model our relationships with robots along the lines of the relationships we have with animals.

In its abstract form, this analogy is not particularly helpful. ‘Animal’ denotes a broad class. When we say that a robot is like an animal, do we mean it is like a sea slug or like a chimpanzee, or something else? Also, even if we agree that a robot is like a particular animal (or sub-group of animals), what significance does this actually have? People disagree about how we ought to treat animals. For example, we think it is acceptable to slaughter and experiment with some, but not others.

The most common animal-robot analogies in the literature tend to focus on the similarities between robots and household pets and domesticated animals. This makes sense. These are the kinds of animals with whom we have some kind of social relationship and upon whom we rely for certain tasks to be performed. Consider the sheep dog who is both a family pet and a farmyard helper. Are there not some similarities between it and a companion robot?

As seductive as this analogy might be, Deborah Johnson and Mario Verdicchio argue that we should resist it. In their paper “Why robots should not be treated like animals” they accept that there are some similarities between robots and animals (e.g. their ‘otherness’, their assistive capacity, the fact that we anthropomorphise and get attached to them etc.) but also argue that there are some crucial differences. In what follows I want to critically assess their arguments. I think some of their criticisms of the animal-robot analogy are valid, but others less so.


1. Using the analogy to establish moral status
Johnson and Verdicchio look at how the analogy applies to three main topics: the moral status of robots, the responsibility/liability of robots, and the effect of human-robot relationships on human relationships with other humans. Let’s start by looking at the first of those topics: moral status.

One thing people are very interested in when it comes to understanding robots is their moral status. Do they or could they have the status of moral patients? That is to say, could they be objects of moral concern? Might we owe them a duty of care? Could they have rights? And so on. Since we ask similar questions about animals, and have done for a long time, it is tempting to use the answers we have arrived at as a model for answering the questions about robots.

Of course, we have to be candid here. We have not always treated animals as though they are objects of moral concern. Historically, it has been normal to torture, murder and maim animals for both good reasons (e.g. food, biomedical experimentation) and bad (e.g. sport/leisure). Still, there is a growing awareness that animals might have some moral status, and that this means they are owed some moral duties, even if this doesn’t quite extend to the full suite of duties we owe to an adult human being. The growth in animal welfare laws around the world is testament to this. Given this, it is quite common for robot ethicists to argue that robots, due to their similarities with animals, might be owed some moral duties.

Johnson and Verdicchio argue that this style of argument overlooks the crucial difference between animals and robots. This difference is so crucial that they repeat it several times in the article, almost like a mantra:

Robots are machines. Animals are sentient organisms, that is, they are capable of perception and they feel, whereas robots do not, at least not in the important sense in which animals do [they acknowledge in a footnote that roboticists sometimes talk about robots sensing and feeling things but then argue that this language is being used in a metaphorical sense]. 
(Johnson and Verdicchio 2018, pg 4 of the pre-publication version).
The problem is that robots do not suffer and even those of the future will not suffer. Yes, future robots might have some states of being that could be equated with suffering [refs omitted] but, futuristic thinking leaves it unclear what—other than metaphorical representation—it could mean to say that a robot suffers. Thus, the animal–robot analogy doesn’t work here. Animals are sentient beings and robots are not. 
(Johnson and Verdicchio 2018, 4-5)
Robots of today do not have sentience or consciousness and do not suffer. Robots of the future might have characteristics that are equated with sentience, suffering, and consciousness, but if these features are going to be independent of each other…they will be fundamentally different from what humans and (some) animals have. It is the capacity to suffer that drives a wedge between animals and robots when it comes to moral status. 
(Johnson and Verdicchio 2018, 5)

I quote these passages at some length because they effectively summarise the argument the authors make. It is pretty clear what the reasoning is:


  • (1) Animals do suffer/have sentience or consciousness.

  • (2) Robots cannot and will not suffer or have sentience or consciousness (even if it is alleged that robots do have those capacities, the terms will be applied metaphorically to the case of robots).

  • (3) The capacity to suffer or have sentience or consciousness is the reason why animals have moral status.

  • (4) Therefore, the robot-animal analogy is misleading, at least when used to ground claims about robot moral status.



I find this argumentation relatively weak. Beyond the categorical assertion that animals are sentient and robots are not, we get little in the way of substantive reasoning. Johnson and Verdicchio seem to just have a very strong intuition or presumption against robot sentience. This sounds like a reasonable position since, in my experience, many people share this intuition. But I am sceptical of it. I’ve outlined my thinking at length in my paper ‘Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism’.

The gist of my position is this. A claim to the effect that another entity has moral status must be justified on the basis of publicly accessible evidence. If we grant that sentience/consciousness grounds moral status, we must then ask: what publicly accessible evidence warrants our belief that another entity is sentient/conscious? My view is that the best evidence — which trumps all other forms of evidence — is behavioural. The main reason for this is that sentience is inherently private. Our best window into this private realm (imperfect though it may be) is behavioural. So if sentience is going to be a rationally defensible basis for ascribing moral status to others, we have to work it out with behavioural evidence. This means that if an entity behaves as if it is conscious or sentient (and we have no countervailing behavioural evidence) then it should be treated as having moral status.

This argument, if correct, undercuts the categorical assertion that robots are not and cannot be sentient (or suffer etc.), as well as the claim that any application of such terminology to a robot must be metaphorical. It suggests that this is not something that can be asserted in the abstract. You have to examine the behavioural evidence to see what the situation is: if robots behave like sentient animals (granting, for the moment, that animals are sentient) then there is no reason to deny them moral status or to claim that their sentience is purely metaphorical. Since we do not have direct epistemic access to the sentience of humans or other animals, we have no basis by which to distinguish between ‘metaphorical’ sentience and ‘actual’ sentience, apart from the behavioural.

This does not mean, of course, that robots as they currently exist have moral status equivalent to animals. That depends on the behavioural evidence. It does mean, however, that the chasm between animals and robots with respect to suffering and sentience is not, as Johnson and Verdicchio assert, unbridgeable.

It is worth adding that this is not the only reason to reject the argument. To this point the assumption has been that sentience or consciousness is the basis of moral status. But some people dispute this. Immanuel Kant, for instance, might argue that it is the capacity for reason that grounds moral status. It is because humans can identify, respond to and act on the basis of moral reasons that they are owed moral duties. If robots could do the same, then perhaps they should be afforded moral status too.

To be fair, Johnson and Verdicchio accept this point and argue that it is not relevant to their focus since people generally do not rely on an analogy between animals and robots to make such an argument. I think this is correct. Despite the advances in thinking about animal rights, we do not generally accept that animals are moral agents capable of identifying and responding to moral reasons. If robots are to be granted moral status on this basis, then it is a separate argument.


2. Using the analogy to establish rules for robot responsibility/liability
A second way in which people use the animal-robot analogy is to develop rules for robot responsibility/liability. The focus here is usually on domesticated animals. So imagine you own a horse and you are guiding it through the village one day. Suddenly, you lose your grip and the horse runs wild through the farmers’ market, causing lots of damage and mayhem in its wake. Should you be legally liable for that damage? Legal systems around the world have grappled with this question for a long time. The common view is that the owner of an animal is responsible for the harm done by the animal. This is either because liability is assigned to the owner on a strict basis (i.e. they are liable even if they were not at fault) or on the basis of negligence (i.e. they failed to live up to some standard of care).

Some people argue that a similar approach should be applied to robots. The reason is that robots, like animals, can behave in semi-autonomous and unpredictable ways. The best horse-trainer in the world will not be able to control a horse’s behaviour at all times. This does not mean they should avoid legal liability. Likewise, for certain classes of autonomous robot, the best programmer or roboticist will not be able to perfectly predict and control what the robot will do. This does not mean they should be off the hook when it comes to legal liability. Schaerer et al (2009) are the foremost proponents of this ‘Robots as Animals’ framework. As they put it:

The owner of a semi-autonomous machine should be held liable for the negligent supervision of that machine, much like the owner of a domesticated animal is held liable for the negligent supervision of that animal. 
(2009, 75)

Johnson and Verdicchio reject this argument. Although they agree with the overall conclusion — i.e. that robot manufacturers/owners should not be ‘off the hook’ when it comes to liability — they argue that the analogy being made between robots and animals is unhelpful because there are crucial differences between robots and animals:

no matter what autonomy is in robots, the robots will have been created entirely by humans. Differently from what happens in genetics, humans do have a complete knowledge of the workings of the electronic circuitry of which a robot’s hardware is comprised, and the instructions that constitute the robot’s software have been written by a team of human coders. Even the most sophisticated artefacts that are able to learn and perfect new tasks, thanks to the latest machine learning techniques, depend heavily on human designers for their initial set-up, and human trainers for their learning process. 
(Johnson and Verdicchio 2018, 7)

They go on to argue that these differences mean we should take a different route to the conclusion that robot manufacturers ought to be liable:

The concepts of strict liability and negligence seem relevant to legal liability for robot behaviour but not because robots are like domesticated animals, but simply because they are manufactured products with some degree of unpredictability. The fundamental difference between animals and robots—that one is a living organism and the other a machine—makes analogies suspect…In the case of animals, owners exert their influence through training of a natural entity; in the case of robots, manufacturers exert their influence in the creation of robots and they or others (those who buy the robots) may also exert influence via training. For this, animals are not a good model. 
(Johnson and Verdicchio 2018, 7)

I have mixed feelings about this argument. One minor point I would make is that I suspect the value of the animal-robot analogy will depend on the dialectical context. If you are talking to someone who thinks that robot manufacturers ought not to be liable because robots are autonomous (or semi-autonomous), then the analogy might be quite helpful. You can disarm their reasoning by highlighting the fact that we already hold the owners of autonomous/semi-autonomous animals liable. This might cause them to question their original judgment and lead them toward the conclusion preferred by Johnson and Verdicchio. So the claim that the analogy is unhelpful or obfuscatory does not strike me as being true in all cases.

More seriously, the argument Johnson and Verdicchio make rests on what are, for me, some dubious assumptions. Foremost among them are: (a) that there is an important difference between training a natural entity and designing, manufacturing and training an artificial entity; (b) that we have complete knowledge of robot hardware (and don’t have complete knowledge of animal hardware); and (c) that this knowledge and its associated level of control makes a crucial difference when it comes to assigning liability. Let’s consider each of these in more detail.

The claim that there is some crucial difference between a trained natural entity and a designed/manufactured/trained artificial entity is obscure to me. The suggestion elsewhere in the article is that an animal trainer is working with a system (the biological organism) that is a natural given: no human was responsible for evolving the complex web of biological tissues and organs (etc) that give the animal its capacities. This is very different from designing an artificial system from scratch.

But why is it so different? The techniques and materials needed to create a complex artificial system are also given to us: they are the product of generations of socio-technical development and not the responsibility of any one individual. Perhaps biological systems are more complex than socio-technical systems (though I am not sure how to measure complexity in this regard) but I don’t see why that is a crucial difference. Similarly, I would add that it is misleading to suggest that domesticated animals are natural. They have been subject to artificial selection for many generations and will be subject to more artificial methods of breeding and genetic engineering in the future. Overall, this leads me to conclude that the distinction between the natural and the artificial is a red herring in this debate.

The more significant difference probably has to do with the level of knowledge and control we have over robots vis-a-vis animals. Prima facie, it is plausible to claim that the level of knowledge and control we have over an entity should affect the level of responsibility we have for that entity’s activities, since both knowledge and control have been seen as central to responsibility since the time of Aristotle.

But there are some complexities to consider here. First, I would dispute the claim that people have complete knowledge of a robot’s hardware. Given that robots are not really manufactured by individuals but by teams, and given that these teams rely heavily on pre-existing hardware and software to assemble robots, I doubt whether the people involved in robot design and manufacture have complete knowledge of their mechanics. And this is to say nothing about the fact that some robotic software systems are inherently opaque to human understanding, which compounds this lack of complete knowledge. More importantly, however, I don’t think having extensive knowledge of another entity’s hardware automatically entails greater responsibility for its conduct. We have pretty extensive knowledge of some animal hardware — e.g. we have mapped the genomes and neural circuitry of some animals like C. elegans — but I would find it hard to say that because we have this knowledge we are somehow responsible for their conduct.

Second, when it comes to control, it is worth bearing in mind that we can have a lot of control over animals (and, indeed, other humans) if we wish to have it. The Spanish neuroscientist Jose Delgado is famous for his neurological experiments on bulls. In a dramatic presentation, he implanted an electrode array in the brain of a bull and used a radio controller to stop it from charging at him in a bullring. Delgado’s techniques were quite crude and primitive, but he and others have shown that it is possible to use technology to exert a lot of control over the behaviour of animals (and indeed humans) if you so wish (at the limit, you can use technology to kill an animal and shut down any problematic behaviour).

At present, as far as I am aware, we don’t require the owners of domesticated animals to implant electrodes in their brains and then carry around remote controls that would enable them to shut down problematic behaviour. But why don’t we do this? It would be an easy way to address and prevent the harm caused by semi-autonomous animals. There could be several reasons but the main one would probably be that we think it would be cruel. Animals don’t just have some autonomy from humans; they deserve some autonomy. We can train their ‘natural’ abilities in a particular direction, but we cannot intervene in such a crude and manipulative way.

If I am right, this illustrates something pretty important: the moral status of animals has some bearing on the level of control we both expect and demand of their owners. This means questions about the responsibility of manufacturers for robots cannot be disentangled from questions about their moral status. It is only if you assume that robots do not (and cannot) have moral status that you assume they are very different from animals in this respect. The very fact that the animal-robot analogy casts light on this important connection between responsibility and status strikes me as being useful.


3. Using the analogy to understand harm to others
A third way of using the animal-robot analogy is to think about the effect that our relationships with animals (or robots) have on our relationships with other humans. You have probably heard people argue that those who are cruel to animals are more likely to be cruel to humans. Indeed, it has been suggested that psychopathic killers train themselves, initially, on animals. So, if a child is fascinated by torturing and killing animals there is an increased likelihood that they will transfer this behaviour over to humans. This is one reason why we might want to ban or prevent cruelty to animals (in addition to the intrinsic harm that such cruelty causes to the animals themselves).

If this is true in the case of animals then, by analogy, it might also be true in the case of robots. In other words, we might worry about human cruelty to robots because of how that cruelty might transfer over to other humans. Kate Darling, who studies human-robot interactions at MIT, has made this argument. She doesn’t think that robots themselves can be harmed by the interactions they have with humans, but she argues that human cruelty to robots (simulated though it may be) could encourage and reinforce cruelty more generally.

This style of argument is, of course, common to other debates about violent media. For example, there are many people who argue that violent movies and video games encourage and reinforce cruelty and violence toward real humans. Whatever the merits of those other arguments, Johnson and Verdicchio are sceptical about the argument as it applies to animals and robots. There are two main reasons for this. The first is that the evidence linking violence to animals and violence to humans may not be that strong. Johnson and Verdicchio certainly cast some doubt on it, highlighting the fact that there are many people (e.g. farmers, abattoir workers) whose jobs involve violence (of a sort) to animals but who do not transfer this over to humans. The second reason is that even if there were some evidence to suggest that cruelty to robots did transfer over to humans, there would be ways of solving this problem that do not involve being less cruel to robots. As they put it:

…if it were found to be true that the sight of cruelty to humanoid robots desensitized us to the sight of cruelty in humans or that engaging in cruelty to humanoid robots increased the likelihood that we would be cruel to one another, this would provide some justification for action. The justified action could but need not necessarily be to grant rights to robots. There are at least two different directions that might be taken. One would be to restrict what could be done to humanoid robots and the other would be to restrict the design of robots. 
(Johnson and Verdicchio 2018, 8)

They clarify that the restrictive designs for robots could include ensuring that the robot does not appear too humanoid and does not display any signs of suffering. The crucial point then is that this second option is not available to us in the case of animals. To repeat the mantra from earlier: animals suffer and robots do not. We cannot redesign them to prevent this. Therefore there are independent reasons for banning cruelty to animals that do not apply to robots.

I have written about this style of argument ad nauseam in the past. My comments have focused primarily on whether sexual violence toward robots might transfer over to humans, and not on violence more generally, but I think the core philosophical issues are the same. So, if you want my full opinion on whether this kind of argument works I would suggest reading some of my other papers on it (maybe start with this one and this one). I will, however, say a few things about it here.

First, I agree with Johnson and Verdicchio that the animal-robot analogy is probably superfluous when it comes to making this argument. One reason for this is that there are other analogies upon which to draw, such as the analogy with the violent video games debate. Another reason is that whether or not robot cruelty carries over to cruelty towards humans will presumably depend on its own evidence and not on analogies with animals or violent video games. How we treat robots could be sui generis. Until we have the evidence about robots, it will be difficult to know how seriously to take this argument.

Second, one point I have been keen to stress in my previous work is that it is probably going to be very difficult to get that evidence. There are several reasons for this. One reason is that it is probably going to be very difficult to do good scientific work on the link between human-robot interactions and human-human interactions. We know this from other debates about exposure to violent media. These debates tend to be highly contentious and the effect sizes are often weak. Researchers and funders have agendas and narratives they would like to support. This means we often end up in an epistemically uncertain position when it comes to understanding the effects of such exposure on real world behaviour. This makes sense since one thing we do know is that the causes of violence are multifactorial. There are many levers that can be pulled to both discourage and encourage violence. At any one time, different combinations of these levers will be activated. To think that one such lever — e.g. violence to robots — will have some outsized influence on violence more generally seems naive.

Third, it is worth noting, once again, that the persuasiveness of Johnson and Verdicchio’s argument hinges on whether you think robots have the capacity for genuine suffering or not. They do not think this is possible. And they are very clear in saying that all appearances of robot suffering must be simulative or deceptive, not real. This is something I disputed earlier on. I think ‘simulations’ (more correctly: outward behavioural signs) are the best evidence we have to go on when it comes to epistemically grounding our judgments about the suffering of others. Consequently, I do not think the gap between robots and animals is as definitive as they claim.

Fourth, the previous point notwithstanding, I agree with Johnson and Verdicchio that there are design choices that roboticists can make that might moderate any spillover effects of robot cruelty. This is something I discussed in my paper on ‘ethical behaviourism’. That said, I do think this is easier said than done. My sense from the literature is that humans tend to identify with and anthropomorphise anything that displays agency. But since agency is effectively the core of what it means for something to be a robot, this suggests that limiting the tendency to over-identify with robots is tantamount to saying that we should not create robots at all. At the very least, I think the suggestions made by proponents of Johnson and Verdicchio’s view — e.g. having robots periodically remind human users that they do not feel anything and are not suffering — need to be tested carefully. In addition to this, I suspect it will be hard to prevent roboticists from creating robots that do ‘simulate’ suffering. There is a strong desire to create human-like robots and I am not convinced that regulation or ethical argumentation will prevent this from happening.

Finally, and this is just a minor point, I’m not convinced by the claim that we will always have design options when it comes to robots that we do not have when it comes to animals. Sophisticated genetic and biological engineering might make it possible to create an animal that does not display any outward signs of suffering (Douglas Adams’s famous thought experiment about the cow that wants to be eaten springs to mind here). If we do that, would that make animal cruelty okay? Johnson and Verdicchio might argue that engineering away the outward signs of suffering doesn’t mean that the animal is not really suffering, but then we get back to the earlier argument: how can we know that?


4. Conclusion
I have probably said too much. To briefly recap, Johnson and Verdicchio argue that the animal-robot analogy is misleading and unhelpful when it comes to (a) understanding the moral status of robots, (b) attributing liability and responsibility to robots, and (c) the likelihood of harm to robots translating into harm to humans. I have argued that this is not true, at least not always. The animal-robot analogy can be quite helpful in understanding at least some of the key issues. In particular, contrary to the authors, I think the epistemic basis on which we ascribe moral status to animals can carry over to the robot case, and this has important consequences for how we attribute liability to actions performed by semi-autonomous systems.




Friday, September 6, 2019

Is there a liberal case for no-platforming?



Via Newtown Grafitti

No platforming is the practice of denying speakers the opportunity to speak at certain venues because of the views they espouse or are expected to espouse. De-platforming is the related practice of trying to remove or prevent a speaker from speaking, after they have been invited to speak or have begun to speak. In this context, ‘speaking’ can be interpreted broadly to include any opportunity given to someone to express their views to an audience (for example, a newspaper opinion writer could be de-platformed).

Although both practices can occur anywhere that speakers are provided with a platform — witness the 2018 controversy about Steve Bannon at the New Yorker festival — they are most commonly associated with university campuses. There have been several well-known incidents over the past few years in which protesters (usually student groups) have tried (sometimes with limited success) to deny speakers a platform on university campuses. Some of the best known examples include: Milo Yiannopoulos at UC Berkeley, Charles Murray at Middlebury College, Maryam Namazie at Goldsmiths University, Ayaan Hirsi Ali at Brandeis University, and Germaine Greer at Cardiff University.

If they succeed, both no platforming and de-platforming are, in effect, partial forms of censorship. They do not completely prevent certain points of view from being expressed (there are, after all, many platforms), but they do prevent them from being expressed at specific times and places. In liberal thought, there is a general presumption against content-based censorship of this type. The most famous defence of free speech in the Western tradition comes from John Stuart Mill. In chapter 2 of On Liberty, Mill argued that we ought to allow for the expression of all points of view because this was a way of getting at the truth. To justify content-based censorship we have to assume a level of epistemic authority on the part of the censors that we should be inclined to doubt. Academic institutions, in particular, should be reluctant to do this since they are in the business of getting at the truth.

That said, Mill did accept that certain forms of speech could be censored or prohibited if they caused clear and identifiable harm to others. This concession creates some practical problems. Many of the recent debates about no platforming and de-platforming have accepted this Millian premise and have argued that the forms of speech in dispute do cause clear and identifiable harms to others. Thus, for example, Charles Murray’s views about race and IQ are said to be harmful to African American students on college campuses, and Germaine Greer’s views about transgender identity are said to be harmful to transgender students. In other words, no platforming has been defended in essentially Millian terms: the defenders accept that there is a presumption in favour of free speech but argue that this presumption is overturned in these cases because the speech acts in question do cause harm.

These arguments are controversial. ‘Harm’ is an inherently fuzzy concept. It is easily stretched and tightened to suit the circumstance. Must the harm be physical or can it be psychological too? Must the harm be directly caused by the speech or can it be indirectly caused through the incitement of third parties? There are no bright lines here and reasonable people can and do disagree about where to draw them. Some people try to narrow the definition as much as possible, others, often with an eye towards tolerance and equality, try to broaden it.

This feature of the debate about free speech and no platforming troubles Robert Simpson and Amia Srinivasan. In their article ‘No Platforming’, they argue that the standard liberal arguments get sucked into interminable and difficult-to-resolve debates about which kinds of speech are legitimately provocative and which are illegitimately harmful. This prompts them to consider whether there might be another way to resolve the issue on lines that are acceptable to proponents of traditional liberal thought. They argue that there might be. Using the concept of academic freedom, they suggest that there could be some legitimate liberal grounds on which to favour no platforming on university campuses.

In what follows, I want to critically analyse their argument. I will suggest that their proposal, though intriguing, fares little better than the Millian one they seek to supplant. I will conclude by arguing that questions concerning which kinds of speech ought to be given a platform are difficult to resolve on principled grounds. This is consistent with my previous analysis of Mill’s argument.


1. Academic Freedom and No Platforming
Simpson and Srinivasan’s argument hinges on a particular interpretation of what the purpose of a university is and the kinds of speech protection that are essential to that purpose. ‘Academic freedom’ is the conceptual label applied to the set of speech-governing rules and norms that serves this purpose.

What then is the purpose of the university and the nature of academic freedom? One view, which they dismiss, is that universities are committed to the pursuit of truth in all its forms and that speech on a university campus ought to be regulated in the same manner as speech in the public square. On this view, academic freedom can cover all speech by members of a university community, including controversial extramural speech on issues of social and political morality, unrelated to the disciplinary expertise of the academics in question. This view is predominant in public universities in the US, but is expansive and seems tantamount to saying that there is no distinctive purpose to a university other than to provide a forum for debate and conversation of all kinds. Another view, which they also dismiss, is more deflationary and holds that academic freedom is just whatever academics need it to be in order to do their work in a congenial manner. This view would obviously make it very difficult to have speech principles of any kind. Academic freedom is just a kind of power politics: whoever is in power gets to determine what can be said and what cannot be said.

In lieu of these accounts, Simpson and Srinivasan favour an account of academic freedom that was first developed by Robert Post. They do not do so because they think this is the best or most defensible account of academic freedom. They do so because they think Post’s account is reasonable and consistent with mainstream liberal principles. This somewhat non-committal endorsement of Post’s account is consistent with their rhetorical strategy, which is to say ‘imagine you were a liberal; if so, is there any way you could get on board with some forms of no platforming?’ This allows them to defend no platforming from a liberal perspective without themselves committing to that liberal perspective.

What does Post’s account of academic freedom say? It says that universities are not like the public sphere. Universities serve particular teaching and research missions. These teaching and research missions are guided by specific disciplinary norms concerning the style and content of communication. For example, if you are a scientist there is a particular methodology that you are expected to follow and a set of topics for teaching and research that fall inside the acceptable boundaries of that methodology. A physicist who teaches that the Earth is flat or that there is a perpetual motion machine is saying something that is not consistent with the communicative norms of their discipline. Similarly, a historian who denies the evidence of the Holocaust, and refuses to engage with the critics of their view, is not following the communicative norms of their discipline.

Academic freedom, for Post, requires that we accept that members of the relevant academic disciplines act as independent epistemic gatekeepers for their disciplines. They get to decide what the relevant methodologies and standards of evidence are. This means that there is inevitably going to be some content-based suppression of ideas. Some stuff just isn’t going to be relevant to the research and teaching missions of the different disciplines; and some stuff is going to be counter-productive to those missions. This is not to say that there cannot be growth and change within a discipline. Once upon a time, physicists believed in the existence of the luminiferous ether, nowadays they do not. But this growth and change happens through reasoned debate and argument among the independent epistemic gatekeepers.

This account of academic freedom can justify at least some forms of no platforming and de-platforming. As the epistemic gatekeepers, academics are entitled to deny certain speakers platforms or to protest the platforms given to others. If a creationist is invited to speak at a biology department, the academics within that department are within their rights to try to disinvite or deplatform them. This is entirely consistent with the mission of the biology department. Indeed, academics do, clearly, deny people platforms all the time along these lines; it’s just that most of the time this goes unobserved because we don’t know who it is they are not inviting.

Conversely — and Simpson and Srinivasan are keen to emphasise this point — the academics who serve as epistemic gatekeepers can also argue that someone has a right to speak at a university, even if their views are controversial, if they are consistent with the standards within the relevant discipline. So, for example, although there are some university administrators and politicians who might like to deny a platform to certain climate scientists because of what they say about climate change, the gatekeepers within the relevant academic disciplines can insist that they be given a platform in the interests of academic freedom.

Who gets to play this epistemic gatekeeping function? Is it just professors or permanent members of academic staff? They are certainly the most plausible candidates, but Simpson and Srinivasan argue that others, including graduate students and undergraduate students, can play a (lesser?) gatekeeping role. Graduate students are budding members of the relevant disciplines and so clearly have a stake in how the disciplinary standards develop. It is easy enough to make the case for them having some say over who gets a platform and who does not. Undergraduate students are a trickier case, but Simpson and Srinivasan argue that they can have a role too. Members of academic disciplines are not epistemically infallible; they can be guilty of narrow-mindedness and groupthink with respect to methods and topics. Undergraduates, because they are less entrenched in the disciplinary norms, can help to spot these flaws. Thus, they can also play some role in setting the standards.

This is just an ‘in principle’ argument. It shows how someone embracing the Postian conception of academic freedom could also accept the legitimacy of certain forms of no platforming. The devil, however, is going to be in the detail. What speakers, specifically, can be denied a platform? What do they say? What are the disciplinary norms? Who should be performing the gatekeeping function in this case? These questions will need to be answered before any actual defence of no platforming becomes persuasive.


2. Criticisms and Concerns
As I said at the outset, Simpson and Srinivasan’s argument is interesting and provocative. There is undoubtedly some truth to it. It is undeniable that academic disciplines do have some epistemic standards and that these standards play a role in determining who gets given a platform and who does not. This happens all the time, irrespective of how much controversy these gatekeeping decisions attract. To give a trivial example, I once ran a seminar series on legal philosophy in which I, along with the co-organisers of the series, frequently rejected speakers on the grounds that their papers weren’t sufficiently philosophical or theoretical. Content-based suppression takes place all the time.

Nevertheless, there are some serious problems with the argument, many of which are identified and discussed by Simpson and Srinivasan in a reasonably persuasive way. I want to review these problems here.

First, as Simpson and Srinivasan point out, there are going to be easy cases and hard cases. The Flat-earther and Holocaust denier are easy cases. Their views obviously do not comply with the standards of the relevant academic disciplines. The hard cases arise when the standards within the relevant disciplines are undergoing some kind of change or flux. In other words, when the standards are being debated with a view to the potential exclusion or inclusion of certain points of view. They single out the case of Germaine Greer as an example of a hard case. Germaine Greer was protested for her ‘trans-exclusionary’ views. Are such views still reasonably on the table within relevant academic disciplines (philosophy, gender studies etc) or are they not? This is something that is being actively debated. Given the relatively recent and underdeveloped nature of this debate, Simpson and Srinivasan conclude that Greer could not be de-platformed in a way that is consistent with the principles of academic freedom:

Some scholars with apparent institutional and disciplinary credibility – in fields like cultural studies, sociology, anthropology, philosophy, gender studies, and queer studies – will insist that the questions of what a woman is and whether trans women qualify are central to feminist inquiry. Others scholars in those same fields, with similar credentials, will insist that the question has been settled and is no longer reasonably treated as open to inquiry. Given this backdrop, it is unclear whether the no platforming of someone like Greer, who denies the womanhood of trans women, could be defended as consistent with respect for academic freedom under the account we have presented. The fact that there is live controversy over the relevant standards in the relevant disciplines suggests, on its face, that there are not any authoritative disciplinary standards that could be invoked in order to characterize Greer’s no platforming as a case of someone being excluded for lacking disciplinary competence. 
(Simpson and Srinivasan 2018, 17-18)


They do, however, go on to say that this might change in the future. It might eventually be the case that there is a disciplinary consensus that blocks the expression of the trans-exclusionary view.

Second, as Simpson and Srinivasan also point out, there are different standards across different disciplines and hence sometimes there are difficult inter-disciplinary disputes about what can be expressed. The so-called hard sciences are commonly thought to have clear and definitive epistemic standards that rule certain kinds of speech in and out (usually on methodological grounds as opposed to content grounds). The softer sciences and humanities have less definitive standards. Indeed, some disciplines appear to have few if any standards. In philosophy, for example, all manner of controversial views are regularly debated. Some philosophers deny the existence of numbers, universals, the self, morality and so on. Some philosophers defend infanticide and anti-natalism. All these views are thought to be consistent with the disciplinary standards of philosophy. If we follow a ‘lowest common standard’ approach to what can be expressed on a university campus, then it might be the case that no views can be de-platformed due to the openness of philosophy to all views, even if other disciplines disagree.

Simpson and Srinivasan argue that it is not quite true to say that anything goes in a discipline like philosophy — there are still standards of rational inquiry and logical argument that must be upheld — but they seem to concede that there isn’t a good answer as to what to do about this issue:

One way to address these hard cases would be to say that any speaker seen as within the bounds of disciplinary competence by at least one discipline cannot be legitimately no platformed for the sake of upholding the disciplinary standards of any other discipline. But then the worry is that in protecting the disciplinary integrity of philosophy – as a discipline resistant to seeing any view as rationally beyond the pale – we impair other disciplines’ attempts to police their own intellectual standards. 
(Simpson and Srinivasan 2018, 20)

They then go on to say that the existence of difficult cases like this does not undermine the value of the Postian-approach. Indeed, they suggest that the Postian approach may reveal what really makes these hard cases so hard, i.e. that they are not disputes about what kinds of speech are harmful (or not) but rather about what kinds of speech meet the relevant academic standards.

My own view is that there is a much more serious problem going on here than they seem willing to acknowledge. Even in the hard sciences, there are long-standing controversies about which views are accepted within the disciplinary norms and which views are not. To give a non-political/sociological example, theoretical physicists were, for a long period in the 20th century, unwilling to debate the correct interpretation of quantum theory. The few who did found themselves ridiculed and ostracised by their peers, often to the detriment of their careers (the history of this is discussed in Adam Becker’s book What is Real?). Looking back, there is now a slowly growing realisation that this suppression of work on quantum foundations was a mistake. People realise that there is something rotten at the heart of quantum theory and this needs to be resolved. There are similarly controversial cases within other disciplines. For example, the recent replication crises in biomedical science and psychology (and other experimental disciplines) have revealed serious, long-standing flaws in the disciplinary norms of those fields: some kinds of studies are prioritised beyond their true academic value, and others are suppressed or ignored.

Given these historical mistakes by the epistemic gatekeepers, it doesn’t seem obvious to me that we should want anything other than a Millian approach to speech on university campuses. At the very least, the historical failure of academic disciplines to set the right epistemic standards seems to warrant a strong presumption against no platforming on content-based grounds. Censorship on purely methodological grounds might be more reasonable, but as the example of the replication crisis shows, this would seem to warrant at most the minimal epistemic standards imposed by a discipline such as philosophy, and not anything more robust and exclusionary.

Another way of putting this point is that if we accept that principles of academic freedom should determine what can be said on a university campus, it’s not clear that we end up anywhere all that different from the Millian position that Simpson and Srinivasan criticise at the start of their article. We end up with equally controversial and equally difficult-to-resolve disputes about what can be censored or not. The one advantage that the academic freedom approach has over the Millian position is that we focus on epistemic standards and not on harmfulness. But is that really a clear advantage? One could argue that the Millian position is more reasonable since it accepts that epistemic standards are too controversial a basis for censorship and focuses instead on non-epistemic reasons for censorship.

In addition to this, I also worry that the position being defended by Simpson and Srinivasan assumes too narrow a view of the purpose of a university and the members of its community. Should everything said on a university campus be beholden to the standards of academic disciplines? Universities do many things. They are engaged in teaching and research, to be sure, but they are also social communities for the students that attend them. For example, I have worked at universities with Quidditch societies for students. Quidditch is, obviously, a fictional magical game taken from the Harry Potter series. Suppose the Quidditch society invites a speaker who seems to take the fiction seriously. They talk about flying brooms and magic spells with seeming earnestness. Could the physics faculty rightfully de-platform this speaker on the grounds that what they are saying is not consistent with the disciplinary norms of physics? I find that deeply counterintuitive, and not because I think the Quidditch society has its own epistemic standards that it can use to regulate speech. The Quidditch society isn’t connected to the research and teaching mission of the physics department. It serves another purpose, one that the physics department has no right to overturn.

There is a serious point lurking here. Many of the controversial cases of no platforming and de-platforming arise from student societies inviting speakers to university campuses. Sometimes these student societies have purposes that are intimately linked to specific academic disciplines, but oftentimes they do not. Student religious societies or political societies or sports societies, for example, do not serve purposes that are obviously linked to academic disciplines. Why should principles of academic freedom constrain what gets said at the platforms provided by these student societies? Simpson and Srinivasan do allude to this issue in a footnote when comparing no platforming of crank ‘experts’ at research seminars vis-a-vis student societies. Here is what they say, in full:

It is a more complicated case if the Holocaust denier or oil company shill is a credentialed expert in the relevant discipline. If they were invited by their disciplinary peers to address an academic research seminar – say, if the history department unwittingly invited a crank, and then opted not to rescind the invitation – then their no platforming wouldn’t be acceptable under Post’s account. If they were invited to address a student club or the like, then the case for the acceptability of them being no platformed would be stronger, all else being equal. At minimum, it cannot be the case that the status of these speakers as disciplinary experts entails that their academic freedom (or that academic freedom per se) is infringed just because a particular student club has not given them a platform to espouse their views. 
(Simpson and Srinivasan 2018, fn 25)

The phrase ‘all else being equal’ might be doing a lot of work here but my immediate reaction to this is that the case for no platforming at the student society can only be more persuasive if (a) you accept that student societies are bound by the norms of academic freedom and (b) you assume students have much less epistemic authority than academics. Both of these assumptions can be questioned, particularly the first.

There could be a separate issue here as to whether certain kinds of student societies should be allowed to exist. Maybe universities shouldn’t allow students to set up groups (with institutional approval) that are inconsistent with academic research and teaching missions. But once they do allow them, I find it hard to accept that they must all abide by the principles of academic freedom. If that is right, then it is difficult to see how speech can be regulated at such societies other than by applying something like the Millian harm principle.


3. Conclusion
To sum up, Simpson and Srinivasan try to use the concept of academic freedom to justify (in principle) some forms of no-platforming. To be precise, they have used Robert Post’s account of academic freedom to argue that academic disciplines serve particular research and teaching missions and are entitled to use certain epistemic standards to regulate speech in a way that serves those missions. While this is an interesting proposal, I think its practical difficulties are more severe than Simpson and Srinivasan seem willing to acknowledge.




Tuesday, September 3, 2019

Does Technology Induce Nihilism?


Image via Tim Gouw


Modern life is suffused by technology. We humans do not live in the natural world. We live in the technological world. From dawn to dusk, our activities are facilitated and mediated through a variety of technological aids. These technologies change how we relate to the world and how the world relates to us. Some of them are bright and prominent in our lives. Others have become part of the background furniture (literally) of life — hiding in plain sight.

Digital technologies are just the latest additions to our technological ecology. Their novelty means that they induce the most excitement and the most hand-wringing. People worry about the power of these technologies over our lives. Are they being used to surveil us against our wills? To control us and manipulate us to nefarious ends? Do they impair our cognitive capacities? Would we be better off without them?

But here’s a question that I suspect few people ask: is digital technology making us more nihilistic? Indeed, most people might think it is an odd question. It is, nevertheless, the question that lies at the heart of Nolen Gertz’s book Nihilism and Technology. The book is a short polemic about the impact of technology on modern life. Using Nietzsche’s thoughts on nihilism, Gertz argues that digital technologies are provoking and accentuating a form of ‘passive nihilism’, and that once this has been identified it should prompt greater critical scrutiny of the role technology is playing in the modern era.

Hewing to the nihilistic perspective, Gertz tries to avoid presenting a standard moral critique of technology, and tries to transcend the simple binary (pro/anti) thinking about technology that has become pervasive. He tells us that his goal is, instead, to get us to interrogate the process through which we evaluate technology and progress (Gertz 2017, Ch 1). For me, this makes the book somewhat confusing to read since it means that, at times, Gertz says he is doing one thing when it really seems like in practice he is doing another (other aspects of the book have been critiqued by other reviewers). Nevertheless, the book is entertaining and informative. In what follows, I want to try to reconstruct its main argument, as I understand it. This may not be the one that Gertz himself intends, but it is the one that makes the most sense to me.

Gertz’s book is divided into two main segments, each consisting of several chapters. The first is an introduction to nihilism and human-technology relations. The second is a series of five case studies on how technology induces and perpetuates a form of passive nihilism. For me, a lot of the interpretive problems with the book stem from the theoretical portion so I will spend a bit of time trying to make sense of that. Then, I will look at one of the five case studies.


1. Metaphysical versus Practical Nihilism
Since nihilism is the central concept in Gertz’s book, it is important for him to define it, preferably somewhere near the beginning, so that we have a clear sense of what it is that he is trying to argue. He does this in the second chapter, starting with a definition that he thinks tracks the everyday usage of the term:

[I]n everyday usage [nihilism] is taken to mean something roughly equivalent to the expression “who cares?” In other words, when we say that someone is a “nihilist” we mean that this person is someone who does not care and someone who believes that, in general, no one else cares either. 
(Gertz 2017, Ch 2)

Gertz proceeds to elaborate and refine this colloquial definition, using a dash of Nietzsche and a pinch of Sartre to help him out. Ultimately, however, he does not stray too far from this colloquial definition. In some ways, this is an acceptable stance. The term “nihilism” is multiply ambiguous in philosophy. Although it is primarily used in relation to evaluative and moral phenomena, people do also use the term when arguing that something lacks an overall purpose or utility (hence why people talk about ‘medical nihilism’ and why I talk about ‘conference nihilism’). Given this, it is fine to stipulate a preferred definition and work with it. Nevertheless, I find Gertz’s definition confusing because I think it ignores an important conceptual distinction between different forms of nihilism. This distinction is particularly important when it comes to understanding Gertz’s central thesis.

The distinction I have in mind is the one between metaphysical and practical nihilism. Metaphysical nihilism has to do with the structure of reality. It is the claim that there are no evaluative or normative facts about the world around us. Nothing is truly valuable or morally obligatory. We may project these moral properties onto reality; but they are always an illusion. To put it another way, any claims we might make such as ‘charity is good’ or ‘torture is forbidden’ are necessarily false. Metaphysical nihilism comes in different flavours, depending on the normative or evaluative properties that are thought not to exist. One can be an evaluative nihilist (i.e. believe that nothing is good or bad) or a normative nihilist (i.e. believe that nothing is forbidden, permitted, or obligatory) or an existential nihilist (i.e. believe that life has no meaning or purpose). One can be all three of these things or only one or two. When I think about nihilism, it is the metaphysical kind of nihilism that first springs to mind (this may, admittedly, be a personal quirk).

Practical nihilism has to do with how one behaves. Do you act as if there are no evaluative or normative facts? Do you assume your life has no purpose and that nothing you do really matters? If so, you are a practical nihilist. Practical nihilism often goes hand-in-hand with metaphysical nihilism. Thus, if there are no evaluative or normative facts about reality it is natural to assume that this will have some knock-on implications for how people will behave (though see this article by Guy Kahane for a contrasting view). But they do not have to go hand-in-hand. One can be a metaphysical nihilist without being a practical nihilist. In other words, you can accept that there are no evaluative or normative facts but still remain committed to a strong personal code of ethics or values. Indeed, some famous nihilists have argued, arguably paradoxically, that this is what one ought to do in response to the truth of metaphysical nihilism. For example, Albert Camus, in his essay The Myth of Sisyphus, argues that we have to embrace the absurdity of existence and play the game as best we can.

Gertz’s book is about practical nihilism, not metaphysical nihilism. He is not arguing that technology somehow reveals that evaluative or normative facts do not exist. He seems to take that as a given (or, at least, doesn’t question it all that much). He is, instead, arguing that technology impacts on our behaviour in nihilistic ways. To be more precise, taking his cue primarily from Nietzsche but also from Sartre, he argues that technology is facilitating a form of passive nihilism and not an active nihilism. Here’s how I would characterise this distinction:

Passive Nihilism: Individuals do not take responsibility for determining their own value system and instead just accept whatever value system is foisted upon them by the society in which they live. They do this with an air of futility because they accept that nothing really matters and so there is no point in fighting back against this system. In Sartrean terms they are guilty of ‘bad faith’: an apathetic disavowal of existential responsibility.

Active Nihilism: Individuals do take responsibility for determining their own value system and critically engage with and scrutinise the values imposed on them by society. They do this while accepting the deeper truth of metaphysical nihilism.

Like Nietzsche, Gertz seems to favour the latter kind of nihilism over the former. Indeed, the whole point of his book appears to be to argue that we should shift from being passive nihilists to being active nihilists. I find this odd because it looks to me like this is an implicitly normative agenda that is inconsistent with metaphysical nihilism. On what grounds can one favour active nihilism if there are no deeper normative or evaluative truths? Is it just mere preference? If so, why bother arguing for that preference? The charge of self-contradiction is, admittedly, one that is commonly made against self-confessed nihilists — one could criticise Nietzsche on similar grounds — but it seems unavoidable in the present context.

To put it more bluntly, I wonder if there is anything truly nihilistic about Gertz’s critique of technology. When it comes to nihilism, I tend to be a ‘metaphysics first’ kind-of-guy: if the underlying metaphysical stance is not nihilistic, and if there is an obvious normative agenda, then I don’t see it as nihilistic. That said, I fully accept that there is an important distinction between passively accepting a system of values and actively questioning and creating a system of values. If Gertz’s critique is just that technology encourages the former and not the latter, then I am happy to accept it for what it is. It’s just that then it is a lot less novel and a lot less ‘sexy’. There are, after all, many books that present a similar critique of technology (e.g. Brett Frischmann and Evan Selinger’s book Re-engineering Humanity or Shannon Vallor’s Technology and the Virtues).


2. How Technology Fosters Passive Nihilism
There is a bit more to Gertz’s theoretical framework. In addition to clarifying what he means by nihilism he also presents a theory for understanding how humans relate to technology and the world around them. This theory is an updated version of Don Ihde’s phenomenological theory of human-technology relations (which I covered previously). I might address this on another occasion because it is interesting in its own right but it isn’t absolutely essential for understanding the rest of Gertz’s thesis. So, for now, I am going to skip over it and proceed to Gertz’s main argument about the nihilism-inducing power of technology.

Following Nietzsche, Gertz argues that there exists an ‘ascetic priesthood’ (the term is Nietzsche’s) that helps to foster and inculcate passive nihilism. This ascetic priesthood uses five ‘tactics’ to achieve this end: self-hypnosis, mechanical activity, petty pleasures, herd instinct, and orgies of feeling. Gertz’s twist on Nietzsche is that in our present day and age this ascetic priesthood is present in the technology industry (and the culture associated with it) and exerts its power through technology. The latter half of his book is a series of case studies of how modern technology uses the five tactics to induce passive nihilism.

As mentioned at the outset, I am only going to focus on one of the five case studies: how technology induces self-hypnosis. ‘Self-hypnosis’ is the phenomenon whereby we dull our emotional engagement with the world. Nietzsche described it as the attempt to “reduce the feeling of life in general to the lowest”. If you were ever successfully hypnotised, you’ll know that it largely cuts off your sensory awareness of yourself and your surroundings. At most, you get a very narrow channel of sensory information. Your sense of pleasure, pain, selfhood, anger, excitement, desire and so on is significantly reduced. Achieving such a state of being is, obviously, one way to foster passive nihilism. When in the hypnotic state we become passive receptacles of whatever information or experience is fed to us through the narrow channel. We lose the larger sense of ourselves.

Nietzsche saw the growing fad for Buddhism and meditation as an example of this tactic for self-hypnosis. In meditative states we try to reduce our connection to ourselves and our world: we detach and distance ourselves from reality. Gertz argues that nowadays we do this with technological assistance, particularly through entertainment technology. We bombard ourselves with streams of entertainment (videos, audiobooks, newsfeeds, podcasts etc) in order to stop ourselves from being alone with our own thoughts. The TV, for Gertz, is the classic technological facilitator of self-hypnosis, one that has been perfected by the ubiquity of the screen in modern life:

Wake up, turn on the TV, and instantly become surrounded with sound, something, anything, to occupy what might otherwise be a space filled with nothing but silence and your own thoughts. Turn off the TV, leave. Return, turn the TV back on. In between, watch TV on the bus, on the train, on the plane, in the mall, on the billboard, on your computer, on your phone, or even on your watch. 
(Gertz 2017, ch 4)

In addition to the ubiquity of the screen, the entertainment companies have got wise to the ways in which they can make their entertainment maximally addictive. Instead of tuning in periodically to watch your favourite shows, you can binge watch entire series on Netflix and other streaming services. If you get bored of that, you can switch to some other form of equally addictive entertainment (news media, videogames etc). All of these forms of entertainment are mass produced, by committees and teams, in an increasingly formulaic way (think of the endless sequels of hit comic book movies). This breeds great conformity and groupthink.

The result is that we are sucked out of reality and taught to find meaning and purpose in imaginary worlds. What’s more, we are all fully aware of this (we agitate nervously about the zombifying effects of entertainment technology) but do it anyway because we enjoy it. We are thus complicit in our own self-hypnosis.

Gertz goes into far more detail on each of these points, providing some interesting statistics on how much time is now spent watching video and examples of how companies perfect the addictive qualities of their entertainment. Hopefully, I have provided enough to give you the gist of the argument. The subsequent case studies build upon this by showing how self-tracking, crowdfunding, and social media deploy the other tactics for inducing passive nihilism.

What do I think of all this? Well, I once wrote a paper called ‘The Rise of the Robots and the Crisis of Moral Patiency’ that defends a similar view. I argued that automating technologies (in particular) have the tendency to induce more passive engagement with the world, which undermines a number of important human goods. And, as mentioned above, other people defend similar views. But as society grows gradually more pessimistic about technology I have — perhaps out of sheer contrariness — become more optimistic. So, for instance, I now think that the retreat from reality that Gertz laments in his discussion of self-hypnosis is less of a problem than he implies. There are two reasons for this. The first is that ‘reality’ is poorly defined and I am not sure it is possible to fully escape it. For example, even in a ‘virtual’ environment real things happen to you. The second is that as long as the environment to which we escape doesn’t foster or encourage genuine passivity then we can avoid the worst problems associated with ’tuning out’. Simply watching or consuming TV might be bad (if it’s the only thing we do) but certain forms of virtual reality or videogames might not be since they can help us to develop skilled (moral) agency. These are themes I explore in much greater detail in my book Automation and Utopia.



Thursday, August 29, 2019

Making Sense: The Art of Philosophical Living (Index)


Diogenes the Cynic


I don't see philosophy as a mode of inquiry; I see it as a way of life. Nevertheless, until relatively recently, I always tried to keep myself (i.e. my self) out of what I wrote. I did so because I believed this was the appropriate thing to do - that it was in the interests of personal and professional humility. After all, who cares about me? My philosophical lens was, thus, always turned outwards, onto the world, and never inwards, onto the self.

That changed when my sister died back in April 2018. In the year following her death, I wrote several, far more personal articles. These articles focused initially on how to cope with grief, but then grew into more general reflections on character, attitude and outlook. In each of them, I've been trying to use the tools of philosophical analysis to re-assess and to re-adjust.

I have found writing these articles to be therapeutic, even though I cringe, slightly, when I read back over them. To me they seem quite self-indulgent. Still, a large number of readers have responded positively to them and they are now among the most popular things I have ever written. Consequently, it feels like the time has come to group them together into one index. Some people might find it useful to read them together as a collection. Below, I have grouped them according to certain themes. This grouping does, however, also correspond roughly to the chronological order in which I wrote them. So not only do they cover specific topics, they also provide a pretty accurate record of how I was thinking over the course of a year or so.


How should I cope with death?

What kind of attitude should I have to life?

How should I approach my work?

Putting it all together




Wednesday, August 28, 2019

#63 - Reagle on the Ethics of Life Hacking

Joseph Reagle
In this episode I talk to Joseph Reagle. Joseph is an Associate Professor of Communication Studies at Northeastern University and a former fellow (in 1998 and 2010) and faculty associate at the Berkman Klein Center for Internet and Society at Harvard. He is the author of several books and papers about digital media and the social implications of digital technology. Our conversation focuses on his most recent book: Hacking Life: Systematized Living and its Discontents (MIT Press 2019).

You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher and a variety of other podcasting services (the RSS feed is here).




Show Notes

  • 0:00 - Introduction
  • 1:52 - What is life-hacking? The four features of life-hacking
  • 4:20 - Life Hacking as Self Help for the 21st Century
  • 7:00 - How does technology facilitate life hacking?
  • 12:12 - How can we hack time?
  • 20:00 - How can we hack motivation?
  • 27:00 - How can we hack our relationships?
  • 31:00 - The Problem with Pick-Up Artists
  • 34:10 - Hacking Health and Meaning
  • 39:12 - The epistemic problems of self-experimentation
  • 49:05 - The dangers of metric fixation
  • 54:20 - The social impact of life-hacking
  • 57:35 - Is life hacking too individualistic? Should we focus more on systemic problems?
  • 1:03:15 - Does life hacking encourage a less intuitive and less authentic mode of living?
  • 1:08:40 - Conclusion (with some further thoughts on inequality)
 

Relevant Links




Friday, August 23, 2019

Understanding Praiseworthiness: Does more effort equal more praise?





I recently finished my first solo-authored book (available in all good bookstores in September!). Here’s a question: do I deserve any praise for doing this? Well, consider some relevant facts. I found writing, editing and indexing the book to be quite arduous. Don’t get me wrong. I enjoyed conceiving the main idea for the book and mapping out its main arguments; but the actual writing was a pain. It took me over a year to finish the 110,000 word manuscript. Due to various setbacks and delays, a surprising amount of that writing was completed in the last month (about 50,000 words). That month was tough. The writing took up all my energy and attention and left me with little time for anything else. What’s more, once I finished the manuscript the job wasn’t done. The manuscript had to be reviewed and I had to revise it in response to the reviewers. That took another month. After that, I had to go through two more rounds of copy edits and revisions, and, to top it all off, I then had to spend three days preparing and writing an index. If you have ever done it, you will know that preparing an index is one of the more mind-numbing tasks you can perform. First world problems, I know, but I just want to emphasise that it was a lengthy and difficult process.

So do I deserve any praise for this? You might say ‘no’ because the book isn’t any good. I wasted my time on something that isn’t worthwhile and no one deserves praise for wasting their time in this way. But let’s assume that’s not true. Let’s assume the book is worthwhile. Does the fact that I spent so much time and effort on it make its completion more praiseworthy? To be more precise, does the volume of effort expended on writing the book increase the amount of praise I am owed?

Many people have the intuition that it does. They follow a simple formula when deciding how much praise is due to someone for an achievement:

More effort = More praise (all else being equal)

But does this formula hold up to closer scrutiny? In a recent article entitled “Praiseworthiness and Motivational Enhancement: No Pain, No Praise?” Hannah Maslen and her colleagues have argued that it does not. Their argument is, ostensibly, about a particular issue in the enhancement debate — namely: whether motivational enhancement undermines praiseworthiness — but in the course of presenting this argument they develop a general theory of praiseworthiness that I found quite illuminating. I want to examine that theory in the remainder of this article. I won’t completely ignore what they have to say about motivational enhancement since it does provide a nice illustration of how their theory applies in practice, but my focus will be primarily on the theory itself.


1. The Theory of Praiseworthiness
Let’s start by thinking about what praiseworthiness is. As a first step we can say that praiseworthiness is related to, but importantly distinct from, responsibility. We often talk about people being ‘responsible’ for performing actions that produce certain results in the world (call these results the ‘outputs’ of the action). If we decide that someone is responsible for producing certain outputs, we then proceed to blame or praise them for doing so. We blame them if we think the outputs are bad; we praise them if we think the outputs are good. Both praise and blame come in degrees. In other words, an agent can be more or less praiseworthy/blameworthy depending on the circumstances.

There is a lot of attention dedicated to blame in the philosophical literature. This is not surprising. Figuring out who deserves to be blamed for doing wrong is a high stakes game and is central to most human societies. We have norms that we expect people to uphold and we see blame as an important way of policing and enforcing those norms (whether that is true and/or a good thing is beyond the scope of the present discussion). Praise has received less attention in the philosophical literature. This is unfortunate since not only is it a worthy topic in its own right, but thinking about praiseworthiness can also shed light on blameworthiness. Since they are complementary phenomena we can expect similar factors to be relevant to the assessment of both.

A theory of praiseworthiness should help to explain how praise varies depending on the circumstances. In other words, it should identify the variables that are relevant to assessing the degree of praise owed to someone for producing a certain output. What are these variables? Maslen et al argue that four variables are relevant to the assessment of praise. We can set these out in the form of a mathematical equation — since Maslen et al use mathematical language in explaining their theory — but I wouldn’t read too much into that formalisation. It’s a useful metaphor/mental model but we are obviously not going to be able to quantify the variables in this equation in any precise way.

The formula is this:

Degree of Praise = Voluntariness × (Cost of Commitment × Strength of Commitment × Value of Output)

Each term in this formula needs to be explained. ‘Voluntariness’ is a threshold condition for praise. You cannot be praised for an action that is involuntary or coerced. For example, if I held a gun to your head and told you to donate all your money to charity or else, you would hardly deserve praise for being so charitable (if you decided to donate the money). So, in a sense, voluntariness can only take on one of two values in the above equation. If the action is voluntary (1) then we can conduct an inquiry into how praiseworthy it is by looking at the other three variables; if it is not voluntary (0), then those other three variables don’t really matter.

The ‘cost of commitment’ refers, unsurprisingly, to the expenses incurred by the agent in performing the actions that produced the output. The term ‘costs’ should be interpreted broadly here. The focus is not so much on the monetary cost of committing to the action (indeed, Maslen et al don’t really consider this type of cost at all in their article) but rather on the amount of time invested in the action, the psychological effort involved in performing those actions, and the foregone opportunities (opportunity cost) associated with the actions. One of the crucial arguments they make in their paper is that the ‘more effort = more praise’ intuition that many people have is too simplistic. Effort, which they define as the amount of psychological aversion an agent has to overcome when performing an action, is a type of costly commitment, but not the only type. An agent might reduce the amount of effort involved in an action but compensate for this by incurring increased costs elsewhere. For example, an athlete might take a painkiller in order to get through a training session. The painkiller will reduce the amount of effort involved in the training session because it will reduce their need to overcome pain. But this doesn’t mean that they deserve less praise as a result. On the contrary, the use of the painkiller might increase the amount of time they can invest in training and so increase their overall level of costly commitment. This might mean they deserve more praise, not less.

The ‘strength of commitment’ refers to the degree to which the agent prioritises the production of the relevant output in their life. Maslen et al separate this out from the cost of commitment but I’m not entirely clear on why they do this. It seems to me that the strength of commitment is largely measured by reference to the opportunities the agent forgoes in order to produce the output. The committed musician will dedicate themselves to perfecting their performances and will, consequently, have to sacrifice elsewhere in their lives. This seems like a straightforward manifestation of opportunity cost. I’m not sure what else strength of commitment could mean in this context. That said, I think I know what they are talking about and it seems appropriate to include it in the assessment of praise, whether that be as a specific type of cost or something different.

An important point to bear in mind is that both the costs of commitment and strength of commitment should be assessed diachronically. In other words, you shouldn’t determine how strong or costly someone’s commitment to producing an output is solely on the basis of the actions that immediately preceded the production of the output. To give an extreme example, the last character I typed in my book manuscript was a full stop (or period if you are American). It was very easy for me to type that symbol. It had a minimal cost. But it would, of course, be wrong to assess the praiseworthiness of my completing the book solely on the basis of this action. You have to look at all the things I did that got me to the point at which that full stop was all I needed to complete the book.

Finally, the value of the output produced must play some role in assessing the degree of praiseworthiness. A very low value output will not warrant much praise, no matter how costly our commitment to it was. For example, I could spend years counting all the blades of grass in my backyard. This would be a very costly, very effortful endeavour, but I would not warrant much praise for doing so. The value of the output is too low. That said, Maslen et al point out that the value of the output shouldn’t play too big a role in the assessment of praiseworthiness. Many outputs are a matter of luck: you can put lots of effort and time in and not achieve the desired result. It seems like it would be wrong to let praiseworthiness be dictated too much by luck (though, as Thomas Nagel pointed out long ago: we do allow luck to play a large role in our assessments of blame).
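To make the structure of the formula a little more vivid, here is a toy sketch in Python. This is entirely my own illustration, not something Maslen et al provide: the function name, the treatment of voluntariness as a simple true/false switch, and the numeric scales are all arbitrary assumptions. The only point it is meant to capture is that voluntariness acts as a threshold while the other three variables combine multiplicatively.

# A toy model of the praiseworthiness formula discussed above.
# My own illustration; the numbers are arbitrary and only the structure matters.

def degree_of_praise(voluntary: bool,
                     cost_of_commitment: float,
                     strength_of_commitment: float,
                     value_of_output: float) -> float:
    """Return a purely notional praise score."""
    if not voluntary:
        return 0.0  # voluntariness is a threshold: no praise for coerced actions
    return cost_of_commitment * strength_of_commitment * value_of_output

# Finishing a worthwhile book after a costly, sustained commitment:
print(degree_of_praise(True, 0.8, 0.9, 0.7))   # relatively high score
# Counting blades of grass: costly and committed, but the output has little value:
print(degree_of_praise(True, 0.9, 0.9, 0.01))  # low score
# Donating to charity at gunpoint: not voluntary, so the other variables are irrelevant:
print(degree_of_praise(False, 0.5, 0.5, 0.9))  # zero

On this reading, the painkiller example works as follows: lowering effort in one place (the pain to be overcome) can be offset by higher values elsewhere (more time invested in training), leaving the overall cost of commitment, and hence the resulting level of praise, as high or higher than before.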


2. Some Implications of the Theory
That’s Maslen et al’s theory in a nutshell. Apart from the minor niggle I mentioned regarding the distinction between the cost of commitment and the strength of commitment, I quite like it. But what are its practical implications? Does it overlook anything important?

Let me consider the second of those questions first. As Maslen et al point out, the theory outlined above works well for local assessments of praiseworthiness. Local assessments concern the praiseworthiness of specific agents in relation to a specific output. The opening example of the degree of praiseworthiness I might be due for finishing my book is a good example of a local assessment in action. It is specifically concerned with one output (the book) and whether I deserve praise for producing that one output. Global assessments of praiseworthiness focus not just on how an agent dedicated themselves to one specific output but on how the agent allocates their scarce resources of time and energy across different possible projects. I might deserve praise for finishing my book if you look at this through a local lens but not if you look at it through a global lens. Maybe I invested my scarce resources of time and energy poorly.

In the paper, Maslen et al give the example of a medical researcher who dedicated their time and energy to creating a vaccine for one specific disease. This is a valuable end and their commitment to pursuing it was costly. As such, it looks like they deserve a lot of praise. But maybe we shouldn’t leap to that judgment. What else could they have done with their time and energy? Suppose it turns out that they could have dedicated the same amount of time and effort to producing vaccines for three separate diseases. From that more global perspective, maybe what they did wasn’t so praiseworthy after all?

This raises another, related, point. You cannot gratuitously increase the costs of producing an output and expect more praise (whether the increase was intended or not). So, to stick with the example of the medical researcher, suppose that instead of doing all their experimental calculations with computer software they used paper and pen. This would increase the amount of effort involved in producing the vaccine, but it is hardly praiseworthy: using paper and pen would simply have taken them longer. Sometimes the efficient production of an output is more praiseworthy than the inefficient production. Indeed, there are some people (I’m thinking specifically of David Krakauer) who argue that intelligence is largely a measure of how efficiently you can solve problems. The more efficient (i.e. the lower the cost) the better. In fact, we often praise people for using their intelligence in this way. What’s going on here? Does this undermine the theory of praise outlined by Maslen et al? Maybe not. I suspect we praise people who come up with efficient ways of solving problems because we see the invention of those methods as a kind of valuable output, but those who merely make use of those methods don’t thereby earn more praise just because they use them.

In addition to this, although I appreciate what Maslen et al are saying about counterfactual judgments and the role they play in assessments of praiseworthiness, I do worry about our ability to make those judgments fairly and reasonably. For example, I know of several famous book authors who write everything out in longhand before transcribing it to a word processor. You could argue that this means they have used a gratuitously inefficient method for writing a book and so any assessment of praiseworthiness should be modified accordingly. Perhaps they could have written more valuable books in less time if they had adopted a more efficient method? But they will, no doubt, argue that this inefficient method actually helps them to produce a better output. It helps them to think more clearly and carefully about what they want to say. I, personally, find that hard to understand. I find writing things out by hand to be too slow and error-prone. Whenever I do it I get frustrated and stop writing sooner than I would if I used a word processor. That said, who am I to second-guess their judgment? Maybe they are right and they wouldn’t have done as well if they had used a word processor from the get-go.

The important point here, I think, is that perhaps we shouldn’t rush to judgment of those who use inefficient methods for producing certain outputs, or who dedicate themselves to tasks we think are less valuable than other tasks they could have dedicated themselves to. Determining whether someone is investing their talents and time appropriately is often very tricky and I’m not sure that we can do it well.