Pages

Thursday, July 30, 2015

Did my brain make me do it? Neuroscience and Free Will (1)




Consider the following passage from Ian McEwan’s novel Atonement. It concerns one of the novel’s characters (Briony) as she philosophically reflects on the mystery of human action:

She raised one hand and flexed its fingers and wondered, as she had sometimes done before, how this thing, this machine for gripping, this fleshy spider on the end of her arm, came to be hers, entirely at her command. Or did it have some little life of its own? She bent her finger and straightened it. The mystery was in the instant before it moved, the dividing moment between not moving and moving, when her intention took effect. It was like a wave breaking. If she could only find herself at the crest, she thought, she might find the secret of herself, that part of her that was really in charge.

Is Briony’s quest forlorn? Will she ever find herself at the crest of the wave? The contemporary scientific understanding of human action seems to cast this into some doubt. A variety of studies in the neuroscience of action paint an increasingly mechanistic and subconscious picture of human behaviour. According to these studies, our behaviour is not the product of our intentions or desires or anything like that. It is the product of our neural networks and systems, a complex soup of electrochemical interactions, oftentimes operating beneath our conscious awareness. In other words, our brains control our actions; our selves (in the philosophically important sense of the word ‘self’) do not. This discovery — that our brains ‘make us do it’ and that ‘we’ don’t — is thought to have a number of significant social implications, particularly for our practices of blame and punishment.

Or so a popular line of argument goes. Is this line of argument any good? Christian List and Peter Menzies’s article, ‘My brain made me do it: The exclusion argument against free will and what’s wrong with it’, claims that it is not. In this two-part series, I want to closely examine their arguments. Although I sympathise with parts of their critique, I think their attempt to apply this critique to the recent debates about neuroscience and responsibility is somewhat misleading. I’ll explain why I think this in part two. For the remainder of this part, I’ll focus on their primary argument.


1. The Challenge from Physicalism and Neuroscience
What does it take to be free? Two conditions are said to be important. The first is the alternativism condition, according to which we must be capable of doing otherwise in order for actions to be free. The second is the sourcehood condition, according to which we must be the source of our action in order for it to be the product of our free will. Both conditions are threatened by popular philosophical theses. The thesis of determinism threatens the alternativism condition, and the thesis of physicalism threatens the sourcehood condition.

We could talk about the impact of determinism on the alternativism condition, but we won’t. Instead, we will focus on the impact of physicalism on the sourcehood condition. In particular, we will focus on what List and Menzies call the ‘exclusion argument’ against free will. The main substance of their article is directed towards this argument, so we need to understand it if we are to understand the article. The argument works a little something like this (note: the numbering of the premises does not follow the numbering in List and Menzies’s article — this might make cross-comparison a little awkward):


  • (1) Someone’s action is free only if it is caused by the agent, particularly by the agent’s mental states, as distinct from the physical states of the agent’s brain and body (call this the ‘Causal Source Thesis’)
  • (2) Physicalism rules out any agential or mental causation, as distinct from causation by physical states of the agent’s brain and body (call this the ‘Purported Implication of Physicalism’)
  • (3) Therefore, there can be no free actions in a physicalist world (call this the ‘Source-Incompatibilist Conclusion’)



The argument is a little underwhelming at first glance. Although we might be inclined to accept premise (1), premise (2) is going to be unconvincing to many physicalists. They will accept that the mental and physical are one and the same thing: that mental states are constituted by particular patterns of brain states, but they will deny the implication that this rules out agential causation. They will just say that, provided the actions are caused by the right kinds of brain states (i.e. the ones that constitute the right kinds of mental states), there is agential causation and hence the sourcehood condition is satisfied. It does not matter that there is no ‘distinct’ class of mental causation.

This is where the exclusion argument comes into play. The exclusion argument derives from the work of Jaegwon Kim, a famous proponent of physicalism. Kim argues that physicalism entails mental supervenience (i.e. the mental supervenes upon the physical), and that mental supervenience entails epiphenomenalism (i.e. that the mental has no real causal role in our actions). This means that there is no mental causation on physicalism, which means that premise (2) is true.

As I mentioned above, List and Menzies direct most of their critique against this exclusion argument. They identify two variations upon the argument, and argue that both rely on a mistaken understanding of agential causation. Once the correct account of agential causation is substituted in, the argument becomes less plausible. There is, consequently, no reason to suspect that physicalism rules out mental causation of the appropriate kind. List and Menzies also try to argue that something very much akin to the exclusion argument underlies much of the current ‘my brain made me do it’ rhetoric in the neuroscience community. Consider Sam Harris’s statement, from his 2012 book Free Will:

‘Did I consciously choose coffee over tea? No. The choice was made for me by events in my brain that I, as the conscious witness of my thoughts and actions, could not inspect or influence.’
(Harris 2012, 7)

There is something exclusion-argument-esque about this, for sure. But, although I’m inclined to agree with List and Menzies in their critique of the physicalist challenge to sourcehood, I’m less inclined to agree with them about the neuroscientific challenge. I’ll get to that in the next post.


2. Two Versions of the Exclusion Argument
Before we do anything else, we need to gain a deeper understanding of the exclusion argument. List and Menzies maintain that this argument comes in two major forms. The first, simpler form relies on a straightforward physicalist causal closure principle (i.e. on a principle claiming that the physical world is causally closed: physical causes are sufficient for all physical effects). This will be familiar to anyone who has debated the merits of Cartesian dualism vis-a-vis physicalism. The second, more complex form relies on a slightly more general claim about the nature of causation and causal sufficiency.

The first version of the argument works like this:


  • (4) An agent’s action is free only if it is caused (in a relevant sense of causation simpliciter) by the agent’s mental states.
  • (5) Any effect that has a cause has a sufficient physical cause (i.e. a causally sufficient physical condition) occurring at the same time.
  • (6) An agent’s mental states are not identical to any physical states, but rather supervene on underlying physical states.
  • (7) If an effect has a sufficient cause C, it does not have any cause C* (simpliciter) distinct from C, occurring at the same time (except in cases of overdetermination).
  • (8) Therefore, there are no free actions.


To see how the conclusion follows: by (4), a free action must be caused by the agent’s mental states; by (5), the action also has a sufficient physical cause; by (6), the mental states are distinct from that physical cause; and so, by (7), the mental states cannot be causes of the action after all (barring overdetermination). The second version of the argument simply changes premise (5) to the following:


  • (5*) Causation implies causal sufficiency.


The conclusion then follows in the same manner, provided you also accept this lemma:

Lemma: If C* is causally sufficient for some effect E, and C* supervenes on C, then C is causally sufficient for E.

This lemma is easily proved because the supervenience relationship is a necessary one. In other words, if C* supervenes on C, then whenever C is present, so too is C*. It follows then that if C* is sufficient for E, then C is also sufficient for E. If you are confused, see my previous post on the nature of the supervenience relationship.
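For those who want the step made fully explicit, the proof can be sketched in modal notation. This is my own gloss, and it treats ‘is causally sufficient for’ and ‘supervenes on’ as simple necessitation relations, which is cruder than List and Menzies’s own framework:

\[
\begin{aligned}
\text{Sufficiency of } C^{*}: &\quad \Box(C^{*} \rightarrow E)\\
\text{Supervenience of } C^{*} \text{ on } C: &\quad \Box(C \rightarrow C^{*})\\
\text{Therefore:} &\quad \Box(C \rightarrow E)
\end{aligned}
\]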

List and Menzies are at pains to point out that most of the premises of both versions of the argument are plausible. I won’t explore the matter in quite the same detail as they do, but I will give a quick run-down of the salient points.

I’ll start with premise (4). This premise looks to be a pretty uncontroversial statement of the sourcehood condition: in order to freely will an action you (your mental agency) must be the source of that action. This premise should be acceptable to most people, irrespective of their philosophical worldview.

Premises (5) and (5*) are slightly more controversial, but still highly plausible. Premise (5) simply states a standard physicalist account of causal closure. It is also quite weak in its claims. It states only that if an event has a cause, then physical causes are sufficient to produce that event. This is consistent with the existence of some non-physical events with no causes. It should, consequently, be acceptable to virtually all physicalists. Premise (5*) is even more relaxed in its claims. It doesn’t appeal to physicalism at all. It states that if an event C causes an event E, then C is causally sufficient for E. This is potentially compatible with all versions of causal determinism. The premise could also be refined so as to incorporate a probabilistic version of causation. Still, despite its more relaxed nature, there is something worth disputing. Everything depends on how you understand the concepts of causation and causal sufficiency. List and Menzies think that an incorrect understanding of both concepts permeates the exclusion argument. We will return to this problem below.

Premise (6) requires some commitment to non-reductive physicalism. That is, to the view that mental states depend on (supervene on) physical states but are not identical or reducible to them. This, of course, means that reductive physicalists and non-physicalists have a route out of the argument. That’s to be expected. But it is worth noting that non-reductive physicalism has tended to be the dominant position in the philosophy of mind for the past century or so. It is also the view that seems most at home with a scientifically oriented worldview, which is the sort of worldview shared by List and Menzies, and the neurosceptics.

That leaves us with premise (7). This is the most problematic one, according to List and Menzies, because it assumes an incorrect theory of causation.


3. A Difference-Making Account of Causation
Let’s try to unpack their critique in more detail. There are two main types of causation:

Production-Causation: This is a metaphysical account of causation according to which causes produce effects via some metaphysical source. As List and Menzies describe it ‘[c]ausation here involves a causal ‘oomph’, i.e. the production of an outcome through some causal force or power’ (List and Menzies 2014).

Difference-Making Causation: This is a probabilistic or counterfactual theory of causation. It says that to be the cause of an effect is to make some sort of difference to the occurrence of that effect across possible worlds. More precisely, it holds that C causes E if, and only if, two conditionals are satisfied:
The Positive Conditional: If C were to occur, then E would occur.
The Negative Conditional: If C were not to occur, then E would not occur.
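In the standard counterfactual notation (where ‘□→’ is read ‘if … were the case, then … would be the case’), this amounts to the following biconditional. The formulation is a simplification on my part; List and Menzies work with a more refined, context-sensitive version:

\[
C \text{ causes } E \;\Longleftrightarrow\; (C \,\Box\!\!\rightarrow E) \;\wedge\; (\neg C \,\Box\!\!\rightarrow \neg E)
\]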

List and Menzies argue that the difference-making account is much more consistent with the scientific worldview. The kinds of experimental evidence of causation that scientists discover usually involve playing around with the conditionals in the manner envisaged by the difference-making account (e.g. the randomised placebo-controlled trial in medicine). Furthermore, the production account seems to require a metaphysical ‘leap of faith’.

In addition to this, they argue that the difference-making account is the most natural way to understand agential causation. In other words, to say that an agent mentally causes an event is to say that the agent (and the relevant mental states) made a difference to that event. When the relevant mental state is present, so too is the effect, and when it is not, neither is the effect.

The crucial thing about the difference-making account of causation is that it casts premise (7) into doubt. This is because the difference-making account allows for cases in which certain microphysical states might be the production-causes of an event, while higher-level, supervenient events might be the difference-making causes of that same event. Here’s an example. Suppose you have a flask of boiling water that breaks because of the pressure inside. The movements of the particles (or some subset of particles) within the flask might be causally sufficient for the break. These microstates would then be the production causes of the event. But it is the boiling of the water (which supervenes on various microstates) that is the difference-maker. It satisfies the positive and negative conditionals. As List and Menzies point out:

If the boiling had occurred, but had been realized by a slightly different microstate, the flask would still have broken, and if the boiling had not occurred, the flask would have remained intact…Although it is true that if the microstate in the flask had been exactly as it was, the flask would be broken, it is not true that if the microstate had been slightly different, the flask would have remained intact. The boiling could have been realized in many different ways, through different configurations of molecular motion, and would still have led the flask to break. 
(List and Menzies 2014)

In other words, the boiling is supervenient upon the underlying microstates, but it is multiply realisable by those microstates. This means that it (not the microstates) is the true difference-maker. The same thing could then hold true for mental causation. Mental states could be multiply realisable. Different physical states of the brain could give rise to the same mental event. Where those different physical states give rise to the same event, we can say that the supervenient mental state is the true difference-maker. The result is that the exclusion argument fails: if we adopt a difference-making account of causation, there is no reason to think that physicalism rules out the appropriate style of mental causation.

I’m broadly in agreement with this line of argument, though I would note that much depends here on how fine-grained or coarse-grained we are in our understanding of what constitutes a common or distinct event or mental state. Daniel Dennett’s paper ‘Real Patterns’ is quite good on this topic, for those of you who are interested.

Right, that’s it for this post. To briefly recap, the exclusion argument claims that physicalism rules out free will because, on physicalism, we are not the sources of our actions. But, as we have just seen, this argument assumes an implausible theory of mental causation. If we adopt a difference-making account, then there is no reason why supervenient mental states cannot count as the causes of our actions. How does this affect the debate about neuroscience and free will? We’ll look into that in part two.

Monday, July 27, 2015

The Psychology of Revenge: Biology, Evolution and Culture


The Murder of Agamemnon - A Revenge Killing?


“Revenge is a dish best served cold…” 
(Ancient Klingon Proverb)

When I was younger I longed for revenge. I remember school-companions doing unspeakably cruel things to me — stealing my lunch, laughing at my misfortune and so forth (hey, it all seemed cruel at the time). I would carefully plot my revenge. The revenge almost always consisted of performing some similarly unspeakably cruel act towards them. Occasionally, my thoughts turned to violence. Sometimes I even lashed out in response.

I’m less inclined towards revenge these days. Indeed, I am almost comically non-confrontational in all aspects of my life. But I still feel the pangs. When wronged, I’ll briefly get a bit hot under the collar and my thoughts will turn to violence once more. I’ll also empathise with the characters in the innumerable revenge narratives that permeate popular culture, willing them on and feeling a faint twinge of pleasure when they succeed. I don’t think I ever act on the impulses anymore, but I have come close. And I’m sure everyone has had similar feelings.

But why is this? Why do we so frequently seek revenge? And how can we stop ourselves from acting on the impulse? I want to look at some potential answers to those questions today. In particular, I want to cover three related topics. First, I want to consider the psychology and neurobiology of revenge, focusing on why revenge can oftentimes feel pleasurable. Second, I want to consider the supposed ‘rationality’ of revenge, i.e. why the instinct for revenge is sometimes a good thing, and why the instinct may have evolved. And third, I want to examine the various methods that can be used to minimise the amount of vengeance being sought in society at any given time.

In doing all this, I’ll be drawing heavily from the discussion in Steven Pinker’s book The Better Angels of our Nature, and from the various studies cited therein.


1. The Mechanics of Revenge
One thing that is noticeable about revenge is how common it is. Literary classics of the distant and recent past often extol its virtues in poetic terms; and it is a frequent motive for state and non-state violence (consider the use of reprisals in international conflicts). In addition to this, Pinker, following work by McCullough and by Daly and Wilson, suggests that blood feuds — cases in which one tribe or gang kills the members of a rival tribe or gang in retaliation for a similar attack on itself — are endorsed by around 95% of the world’s cultures.

The commonality of revenge suggests that there is something deep within the architecture of the typical human brain that facilitates it. This seems to be borne out by a variety of studies. For one thing, it is easy enough to provoke people into seeking revenge in simple psychological experiments. Once more citing the work of McCullough, Pinker mentions studies done on college students (as pretty much all psychological experiments are…) in which the students are first given an insulting evaluation written by a fellow student, and then given the opportunity to punish the evaluator in a variety of ways (electric shocks, blasts with an air horn). It is very easy to induce students to engage in such revenge attacks.

So which brain systems undergird this thirst for revenge? Pinker mentions two. The first is the so-called Rage Circuit. This is a pathway linking the midbrain to the hypothalamus and amygdala. The rage circuit works by receiving pain signals from other parts of the nervous system and then responding, rapidly, with aggressive behavioural patterns. If activated, it provokes an animal to lash out at the nearest available victim. Jaak Panksepp performed experiments on the rage circuits of cats. The experiments involved activating the rage circuit with an electrical current. This provoked an instantaneous reaction from the cat. It would leap towards Panksepp with its claws and fangs bared, while hissing and spitting. It is likely that the thirst for revenge starts with the rage circuit: when we are hurt, we have an instant urge to lash out.

But it doesn’t end there. It is known that the stimulation of the rage circuit is unpleasant and animals will often work to switch it off. But the desire for revenge can linger. The reason for this seems to be that other brain systems support the quest for revenge. In particular, there is the so-called ‘Seeking’ system, named by Panksepp. This is a network within the brain that facilitates reward and pleasure-seeking behaviour and incorporates the mesolimbic and mesocortical dopamine systems. You have probably come across some description of them before. The original experimental work on them involved rats placed in Skinner boxes. Every time the rats pressed a lever in the box they would stimulate their dopamine systems. It was found that rats would do so until they dropped dead from exhaustion. For a long time, this was thought to provide the neurobiological basis for addiction, although nowadays scientists realise that addiction is a more complex phenomenon.

Anyway, the important point here is that revenge seems to activate the seeking system. People appear to crave revenge, hoping that it will prove satisfying and rewarding. Studies done by Dominique de Quervain and his colleagues scanned the brains of men who had been wronged in a simple trust game (they entrusted another player with some money and that player kept it for himself). The men were given the opportunity to punish the wrongdoer at some cost to themselves. It was found that part of the striatum (a key component in the brain’s seeking system) lit up as they pondered the opportunity, and that the more it lit up, the more likely the men were to punish the wrongdoer. This seems to indicate that reward seeking is part of the motivation for revenge.


2. The Rationality of Revenge
The commonality of revenge, and the fact that people seem to crave it, poses another question: why have we evolved (or been enculturated) to pursue revenge? After all, there is something of a paradox underlying our lust for revenge. It is a costly endeavour, and no matter how much pain we inflict on the wrongdoer, we can never really correct for the historical wrongdoing that provokes our revenge. And yet revenge persists.

Pinker favours a ‘deterrence’ explanation for revenge. We seek revenge, and derive pleasure from it, because it is an effective means of deterring would-be wrongdoers. Now, on a previous occasion, I discussed a whole range of psychological evidence suggesting that people’s punishment-related behaviours did not, in fact, follow the logic of deterrence. Au contraire, those studies suggested that people were natural-born retributivists: they sought revenge because they felt it was important for people to get their ‘just deserts’, and not because it would deter other wrongdoers. But the contradiction between these experimental findings and Pinker’s preferred explanation is more apparent than real. The studies discussed in that earlier post focused on the proximate psychological causes of revenge, i.e. on what best explained individual judgments and patterns of behaviour. Pinker’s explanation focuses on the ultimate societal causes of revenge, i.e. on what best explains the persistence of revenge in spite of its costly nature. His claim is that deterrence is the best ultimate explanation for this persistence. That is perfectly consistent with the claim that most individuals follow a retributivist (non-deterrentist) logic.

What evidence can be adduced in favour of the deterrence explanation? Pinker discusses two main pieces. Both come from studies of iterated prisoner’s dilemmas (IPDs) (note: I am not going to explain what the PD or IPD is here because I have discussed it on previous occasions - the important point is that PDs are thought to provide a good model for many social dilemmas). The first piece of evidence is largely theoretical, and focuses on computer-based simulations of IPDs. These computer-based simulations seem to confirm the long-term effectiveness of vengeance in achieving deterrence. The second is largely experimental, and focuses on how real people behave in lab-based IPDs. These also seem to confirm both the willingness to seek revenge and its effectiveness. (You may dispute my calling the computer-based simulations ‘theoretical’ as opposed to ‘experimental’ evidence. I guess they are a type of experiment, but they are experimental tests of highly formalised strategies, not tests of the behaviour of real people.)

The computer-based simulations of IPDs are fascinating, and have generated a rich literature over the years. As you probably know, the standard PD involves two players, each faced with two choices: cooperate or defect. Collectively, the best strategy is to cooperate; but, individually, the best strategy is to defect (it dominates all other choices). But this is only true if the PD is a once-off. If the players repeatedly interact in PD-style games, over multiple rounds and with different opponents, then other strategies can prevail. This is the key insight from the computer-based simulations. One of the earliest, and most enduring, findings from those simulations was that a simple programme called TIT FOR TAT could beat out most competitors in an IPD tournament. The TIT FOR TAT programme embodied the logic of deterrence-based revenge. It involved cooperating on the first round of the tournament, and then simply copying the opponent’s previous move in each subsequent round. Thus, for example, if the opponent defected in the first round, TIT FOR TAT would defect in the second round; if the opponent cooperated in the second round, TIT FOR TAT would switch back to cooperation in the third round; and so on. The idea is that this models deterrence-based revenge because it rewards and punishes opponents with a view to changing outcomes in future rounds.
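To make the mechanics concrete, here is a minimal Python sketch of an iterated PD in which TIT FOR TAT plays against an unconditional defector and against a copy of itself. The payoff values and the number of rounds are illustrative assumptions of mine, not taken from Axelrod’s tournaments or from Pinker’s discussion:

```python
# Minimal iterated prisoner's dilemma sketch.
# Moves: 'C' (cooperate) or 'D' (defect).
# Payoffs (first player, second player) follow the standard ordering T > R > P > S.
PAYOFFS = {
    ('C', 'C'): (3, 3),  # mutual cooperation (R)
    ('C', 'D'): (0, 5),  # sucker's payoff vs temptation (S, T)
    ('D', 'C'): (5, 0),
    ('D', 'D'): (1, 1),  # mutual defection (P)
}

def tit_for_tat(my_history, their_history):
    """Cooperate on the first round; thereafter copy the opponent's last move."""
    return 'C' if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # TIT FOR TAT is exploited only on the first round
print(play(tit_for_tat, tit_for_tat))    # two TIT FOR TATs cooperate throughout
```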

The success of TIT FOR TAT in IPDs is attributed to the fact that it is nice, clear, retaliatory and forgiving. But TIT FOR TAT is not an unbridled success. One difficulty is that it can easily degenerate into an endless cycle of retaliation (sometimes called a ‘death spiral’), particularly if one TIT FOR TAT is playing against another TIT FOR TAT and one of them defects on some round by error or noise. Alternative strategies can be more effective in the right environments. For instance, GENEROUS TIT FOR TAT, which occasionally forgives a defection and cooperates anyway, TIT FOR TWO TATS, which avoids immediate retaliation by waiting to see whether its opponent defects in two successive rounds, and CONTRITE TIT FOR TAT, which tries to correct for its own mistakes, can all do better.

I could go on about the details and variations, but that would be unnecessary. The important point is that all these strategies incorporate some degree of revenge (and, importantly, forgiveness), and can help to sustain long-term cooperation. This supports the deterrence explanation. I should probably note at this point that after Pinker published his book there was an interesting paper published by Press and Dyson on IPDs. The paper proved that extortionate strategies (called ‘Zero Determinant’ strategies), i.e. ones that weren’t simply vengeful and forgiving, were optimal in some IPDs. There has been much hype about this result, and you can read explanations of it here, but it doesn’t completely undermine the long-term effectiveness of TIT FOR TAT and its variations.

So much for the theoretical bit of evidence, what about the work done on actual human beings? Since the late 1990s, a whole series of studies have been published showing that costly punishment can help to sustain cooperation in repeated PD-style interactions (researchers refer to the phenomenon as 'altruistic punishment'). The most famous study in this vein comes from Fehr and Gachter. The study involved a Public Goods game wherein people were given the opportunity to contribute to a common investment fund (which would benefit them all), or to free ride on the good will of others who invested. If experimental subjects were allowed to punish free riders, free-riding was eliminated over repeated plays of the game. Furthermore, other experiments have found that people are more likely to punish when they think others are watching. This demonstrates a willingness to seek a reputation for revenge in a social setting. This again seems to confirm the deterrence explanation because a reputation for revenge is important for deterrence.
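For readers unfamiliar with the set-up, here is a rough sketch of the payoff arithmetic in a linear public goods game with punishment. The endowment, multiplier and punishment costs are placeholder values of my own, not the parameters Fehr and Gachter actually used:

```python
def public_goods_payoffs(contributions, endowment=20, multiplier=1.6):
    """Each player keeps what they don't contribute; the pot is multiplied
    and shared equally, so free riding maximises individual payoff."""
    n = len(contributions)
    share = multiplier * sum(contributions) / n
    return [endowment - c + share for c in contributions]

def apply_punishment(payoffs, punisher, target, points,
                     cost_to_punisher=1, cost_to_target=3):
    """Punishment is costly to both sides, but costlier to the target."""
    payoffs = list(payoffs)
    payoffs[punisher] -= points * cost_to_punisher
    payoffs[target] -= points * cost_to_target
    return payoffs

# Three cooperators and one free rider:
payoffs = public_goods_payoffs([20, 20, 20, 0])
print(payoffs)  # the free rider earns the most before punishment
print(apply_punishment(payoffs, punisher=0, target=3, points=5))
```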

The upshot is that deterrence — and the pursuit of mutually beneficial cooperation — look like reasonable explanations for the long-term persistence of revenge.


3. The Modulation of Revenge
Granting that revenge is common, and occasionally rational, there remains a challenge: how can we ensure that there is not too much of it? It is clear that too much revenge can be destructive. This is obvious to anyone who has lived through seemingly endless cycles of blood-feuding (the real-world equivalent of the TIT FOR TAT ‘death spirals’). It might be trite and simplistic to put it this way, but such cycles seem to be part of the reason for the persistence of sectarian violence in Northern Ireland. Or, at least, it seemed that way to me as a child growing up in the Republic of Ireland.

Is it possible to prevent such destructive cycles of revenge? Would it be possible to create a world in which there was no need to seek revenge, i.e. in which revenge lost its rationality? In his analysis, Pinker identifies five factors which seem to modulate and reduce the need for revenge. I won’t discuss them in too much detail here. Instead, I will simply give short descriptions and links to relevant supporting evidence:

A. The Presence of Leviathan: The Leviathan is, of course, Hobbes’s famous term for the state. The Leviathan effectively functions as a means for outsourcing violence (in particular revenge). We all have Leviathans in our lives. When I was a school-child, I did not necessarily need to lash out at the cruel behaviour of my companions, I could sometimes outsource my revenge to a teacher who could punish the bullies on my behalf. This outsourcing of revenge can have two major benefits. First, the Leviathan can function as a more effective deterrent if it can create the belief that it is ‘all-seeing’ and ‘all-knowing’ (or close enough) and capable of retaliating even if the wrongdoer crushes their victim. Second, the Leviathan may be less prone to the distorting biases that fuel cycles of revenge. It is well-known that victims often overestimate the degree of harm they have suffered, and consequently can punish wrongdoers in excess. Shergill et al performed an experiment in which people placed their finger under a bar that applied a precise amount of force. They were then asked to press down on the finger of another experimental subject with the same amount of force. It was found that they used approximately eighteen times more force than they originally received, highlighting the gap between perceived harm and reality. Pinker refers to this as part of the ‘moralization gap’ and highlights further evidence in support of it. Leviathan, as a third party, may avoid the excesses of this gap.

B. Civic-Mindedness and Perceptions of Governmental Legitimacy: The mere presence of Leviathan is not enough in itself to eliminate destructive cycles of revenge. It is clear that the people who are subjected to the authority of Leviathan must have some degree of civic-mindedness, i.e. must be committed to the institutions underpinning Leviathan and perceive them to be legitimate. Herrmann, Thoni and Gachter performed a cross-cultural study of Public Goods games which highlighted this. They found, somewhat surprisingly, that in some cultures players actually punished people who contributed generously to the public investment fund. This is odd since generous contributors of this sort actually benefitted the group as a whole. When they dug into the data a little deeper, Herrmann, Thoni and Gachter found that a major predictor of this willingness to spitefully punish generous contributors was the degree of civic-mindedness in the respective cultures. In other words, players in cultures where the commitment to the rule of law was weak (e.g. countries where people didn’t pay taxes, cheated on social welfare payments, etc.) were more likely to engage in spiteful punishment.

C. Expanding the Circle of Empathy: This is an obvious one. It is well-known that we are more likely to forgive people who fall within our natural circle of empathy (kin, friends etc) for their transgressions. This modulates our desire for revenge. Thus, creating an expanded circle of empathy can help prevent destructive cycles of revenge. The question, of course, is how to do this. Various cultural practices and rituals can help to create ‘fictive kinships’ which are often effective means of expanding the circle of empathy. Religions have been good at this, and often explicitly invoke kinship metaphors (e.g. ‘brothers and sisters in Christ’). But there is a dark side to this too as you can often create an excessive in-group/out-group mentality, which can in turn fuel revenge and associated forms of violence.

D. Shared Goals: A simple way to overcome excessive in-group/out-group mentalities is to generate common interests, i.e. to make the success of one group dependent on the success of another. There was a famous experiment to this effect performed on a group of boys at the Robbers Cave summer camp back in the 1950s. The boys were arbitrarily divided into two separate groups at the start of camp. This generated intense loyalty within the groups, and intense rivalry between them, with acts of provocation and retaliation following soon after. But the experimenters found that they could reduce this rivalry by bringing the groups together and forcing them to work together for mutual benefit, e.g. in having to restore the camp’s water supply. The value of such mutual interdependencies is often highlighted as a major reason why countries that trade with one another are less likely to go to war.

E. Creating a Perception of Harmlessness: A final way to reduce destructive cycles of revenge is to cultivate a reputation for non-violence. That is: to signal to the other side that you are not going to continue with a destructive conflict. Apologies and reconciliation events are central to this, but apologies are often deemed ‘cheap talk’. They are easy to make and easy to break. There is some suggestion that physiological responses like blushing are a way in which evolutionary forces facilitated costly signaling of apologies. There is also evidence from the study of international and civil conflicts that apologies and reconciliation events are more likely to work when they are costly, involve some symbolic (but incomplete) justice, and involve participants with some shared history. The work of Long and Brecke is the key source here.

I have illustrated these five modulators in the diagram below.





4. Conclusion
To briefly sum up, revenge seems to be common, occasionally rational and capable of being reduced. Its commonality is illustrated by its near-universal endorsement, and the ease with which it can be provoked in experimental settings. It seems to be undergirded by two major brain systems: the Rage circuit, which facilitates rapid violent responses to perceived harm; and the Seeking circuit, which facilitates reward-seeking behaviours. The rationality of revenge is illustrated by its utility as a deterrence mechanism in iterated versions of the prisoner’s dilemma. And the ability to reduce the amount of destructive revenge is illustrated by the five factors listed above.

Sunday, July 26, 2015

How to Study Algorithms: Challenges and Methods




(Series Index)

Algorithms are important. They lie at the heart of modern data-gathering and analysing networks, and they are fueling advances in AI and robotics. On a conceptual level, algorithms are straightforward and easy to understand — they are step-by-step instructions for taking an input and converting it into an output — but on a practical level they can be quite complex. One reason for this is the two translation problems inherent to the process of algorithm construction. The first problem is converting a task into a series of defined, logical steps; the second problem is converting that series of logical steps into computer code. This process is value-laden, open to bias and human error, and the ultimate consequences can be philosophically significant. I explained all these issues in a recent post.
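As a toy illustration of those two translation steps (an example of my own, not one drawn from Kitchin or my earlier post), take the task ‘pick the highest-scoring applicant’. It first gets rendered as pseudo-code and then as source code, and value-laden decisions creep in at each stage:

```python
# Pseudo-code for the task:
#   for each applicant in the list:
#       compute a score from their attributes
#       if the score is the highest seen so far, remember the applicant
#   return the remembered applicant
#
# One possible source-code translation. Note the value-laden choices:
# the weighting of attributes, and what happens on a tie, are both
# decisions that the task description itself never specified.

def score(applicant):
    return 0.7 * applicant["exam_result"] + 0.3 * applicant["interview"]

def best_applicant(applicants):
    best = None
    best_score = float("-inf")
    for applicant in applicants:
        s = score(applicant)
        if s > best_score:  # ties resolved in favour of whoever came first
            best, best_score = applicant, s
    return best
```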

Granting that algorithms are important, it seems obvious that they should be subjected to greater critical scrutiny, particularly among social scientists who are keen to understand their societal impact. But how can you go about doing this? Rob Kitchin’s article ‘Thinking critically about and researching algorithms’ provides a useful guide. He outlines four challenges facing anyone who wishes to research algorithms, and six methods for doing so. In this post, I wish to share these challenges and methods.

Nothing I say in this post is particularly ground-breaking. I am simply summarising the details of Kitchin’s article. I will, however, try to collate everything into a handy diagram at the end of the post. This might prove to be a useful cognitive aid for people who are interested in this topic.


1. Four Challenges in Algorithm Research
Let’s start by looking at the challenges. As I just mentioned, on a conceptual level algorithms are straightforward. They are logical and ordered recipes for producing outputs. They are, in principle, capable of being completely understood. But in practice this is not true. There are several reasons for this: some are legal or cultural, some are technical. Each of them constitutes an obstacle that the researcher must either avoid or, at least, be aware of.

Kitchin mentions four obstacles in particular. They are:

A. Algorithms can be black-boxed: Algorithms are oftentimes proprietary constructs. They are owned and created by companies and governments, and their precise mechanisms are often hidden from the outside world. They are consequently said to exist in a ‘black box’. We get to see their effects on the real world (what comes out of the box), but not their inner workings (what’s inside the box). The justification for this black-boxing varies: sometimes it is purely about protecting the property rights of the creators; other times it is about ensuring the continued effectiveness of the system. Thus, for example, Google are always concerned that if they reveal exactly how their Pagerank algorithm works, people will start to ‘game the system’, which will undermine its effectiveness. Frank Pasquale wrote an entire book about this black-boxing phenomenon, if you want to learn more.

B. Algorithms are heterogeneous and contextually embedded: An individual could construct a simple algorithm, from scratch, to perform a single task. In such a case, the resultant algorithm might be readily decomposable and understandable. In reality, most of the interesting and socially significant algorithms are not produced by one individual or created ‘from scratch’. They are, rather, created by large teams, assembled out of pre-existing protocols and patchworks of code, and embedded in entire networks of algorithms. The result is an algorithmic system that is much harder to decompose and understand.

C. Algorithms are ontogenetic and performative: In addition to being contextually embedded, contemporary algorithms are also typically ontogenetic. This is a somewhat jargonistic term, deriving from biology. All it means is that algorithms are not static and unchanging. Once they are released into the world, they are often modified or adapted. Programmers study user-interactions and update code in response. They often experiment with multiple versions of an algorithm to see which one works best. And, what’s more, some algorithms are capable of learning and adapting themselves. This dynamic and developmental quality means that algorithms are difficult to study and research. The system you study at one moment in time may not be the same as the system in place at a later moment in time.

D. Algorithms are out of control: Once they start being used, algorithms often develop and change in uncontrollable ways. The most obvious way for this to happen is if algorithms have unexpected consequences or if they are used by people in unexpected ways. This creates a challenge for the researcher insofar as generalisations about the future uses or effects of an algorithm can be difficult to make if one cannot extrapolate meaningfully from past uses and effects.

These four obstacles often compound one another, creating more challenges for the researcher.


2. Six Methods of Algorithm Research
Granting that there are challenges, the social and technical importance of algorithms is, nevertheless, such that research is needed. How can the researcher go about understanding the complex and contextual nature of algorithm-construction and usage? It is highly unlikely that a single research method will do the trick. A combination of methods may be required.

Kitchin identifies six possible methods in his article, each of which has its advantages and disadvantages. I’ll briefly describe these in what follows:

1. Examining Pseudo-Code and Source Code: The first method is the most obvious. It is to study the code from which the algorithm was constructed. As noted in my earlier post, there are two bits to this. First, there is the ‘pseudo-code’, which is a formalised set of human-language rules into which the task is translated (pseudo-code follows some of the conventions of programming languages but is intended for human reading). Second, there is the ‘source code’, which is the computer code into which that human-language ruleset is translated. Studying both can help the researcher understand how the algorithm works. Kitchin mentions three more specific variations on this research method:
1.1 Deconstruction: Where you simply read through the code and associated documentation to figure out how the algorithm works.
1.2 Genealogical Mapping: Where you ‘map out a genealogy of how an algorithm mutates and evolves over time as it is tweaked and rewritten across different versions of code’ (Kitchin 2014). This is important where the algorithm is dynamic and contextually embedded.
1.3 Comparative Analysis: Where you see how the same basic task can be translated into different programming languages and implemented across a range of operating systems. This can often reveal subtle and unanticipated variations.
There are problems with these methods: code is often messy and requires a great deal of work to interpret; the researcher will need some technical expertise; and focusing solely on the code means that some of the contextual aspects of algorithm construction and usage are missed.

2. Reflexively Producing Code: The second method involves sitting down and figuring out how you might convert a task into code yourself. Kitchin calls this ‘auto-ethnography’, which sounds apt. Such auto-ethnographies can be more or less useful. Ideally, the researcher should critically reflect on the process of converting a task into a ruleset and a computer language, and think about the various social, legal and technical frameworks that shape how they go about doing this. There are obvious limitations to all this. The process is inherently subjective and prone to individual biases and shortcomings. But it can nicely complement other research methods.

3. Reverse-engineering: The third method requires some explanation. As mentioned above, one of the obstacles facing the researcher is that many algorithms are ‘black-boxed’. This means that, in order to figure out how the algorithm works, you will need to reverse engineer what is going on inside the black box. You need to study the inputs and outputs of the algorithm, and perhaps experiment with different inputs. People often do this with Google’s Pagerank, usually in an effort to get their own webpages higher up the list of search results. This method is also, obviously, limited in that it provides incomplete and imperfect knowledge of how the algorithm works.
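Here is a crude sketch of what that reverse engineering might look like in practice: treat the algorithm as an opaque function, perturb one input feature at a time, and see which perturbations move the output. The black_box function below is an invented stand-in; a real researcher would be probing a live system and inferring, rather than reading off, the weights:

```python
import random

def black_box(page):
    """Stand-in for an opaque ranking algorithm under study."""
    return 2.0 * page["inbound_links"] + 0.5 * page["keyword_count"] + random.gauss(0, 0.1)

def probe(base_page, feature, deltas, trials=50):
    """Estimate how much the output shifts when one feature is perturbed."""
    effects = {}
    for d in deltas:
        variant = dict(base_page, **{feature: base_page[feature] + d})
        diffs = [black_box(variant) - black_box(base_page) for _ in range(trials)]
        effects[d] = sum(diffs) / trials
    return effects

base = {"inbound_links": 10, "keyword_count": 40}
print(probe(base, "inbound_links", deltas=[1, 5, 10]))   # large effect per unit
print(probe(base, "keyword_count", deltas=[1, 5, 10]))   # smaller effect per unit
```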

4. Interviews and Ethnographies of Coding Teams: The fourth method helps to correct for the lack of contextualisation inherent in some of the preceding methods. It involves interviewing or carefully observing coding teams (in the style of a cultural anthropologist) as they go about constructing an algorithm. These methods help the researcher to identify the motivations behind the construction, and some of the social and cultural forces that shaped the engineering decisions. Gaining access to such coding teams may be a problem, though Kitchin notes one researcher, Takhteyev, who conducted a study while he was himself part of an open-source coding team.

5. Unpacking the full socio-technical assemblage: The fifth method is described, again, in somewhat jargonistic terms. The ‘socio-technical assemblage’ is the full set of legal, economic, institutional, technological, bureaucratic, political (etc) forces that shape the process of algorithm construction. Interviews and ethnographies of coding teams can help us to understand some of these forces, but much more is required if we hope to fully ‘unpack’ them (though, of course, we can probably never fully understand a phenomenon). Kitchin suggests that studies of corporate reports, legal frameworks, government policy documents, financing, biographies of key power players and the like are needed to facilitate this kind of research.

6. Studying the effects of algorithms in the real world: The sixth method is another obvious one. Instead of focusing entirely on how the algorithm is produced, and the forces affecting its production, you also need to study its effects in the real world. How does it impact upon the users? What are its unanticipated consequences? There are a variety of research methods that could facilitate this kind of study. User experiments, user interviews and user ethnographies would be one possibility. Good studies of this sort should focus on how algorithms change user behaviour, and also how users might resist or subvert the intended functioning of algorithms (e.g. how users try to ‘game’ Google’s Pagerank system).

Again, no one method is likely to be sufficient. Combinations will be needed. But in these cases one is always reminded of the old story about the blind men and the elephant. Each is touching a different part, but they are all studying the same underlying phenomenon.




Tuesday, July 21, 2015

Epistemology, Communication and Divine Command Theory


I have written about the epistemological objection to divine command theory (DCT) on a previous occasion. It goes a little something like this: According to proponents of the DCT, at least some moral statuses (like the fact that X is forbidden, or that X is bad) depend for their existence on God’s commands. In other words, without God’s commands those moral statuses would not exist. It would seem to follow that in order for anyone to know whether X is forbidden/bad (or whatever), they would need to have epistemic access to God’s commands. That is to say, they would need to know that God has commanded X to be forbidden/bad. The problem is that there is a certain class of non-believers — so-called ‘reasonable non-believers’ — who don’t violate any epistemic duties in their non-belief. Consequently, they lack epistemic access to God’s commands without being blameworthy for lacking this access. For them, X cannot be forbidden or bad.

This has been termed the ‘epistemological objection’ to DCT, and I will stick with that name throughout, but it may be a bit of a misnomer. This objection is not just about moral epistemology; it is also about moral ontology. It highlights the fact that at least some DCTs include a (seemingly) epistemic condition in their account of moral ontology. Consequently, if that condition is violated it implies that certain moral facts cease to exist (for at least some people). This is a subtle but important point: the epistemological objection does have ontological implications.

Anyway, in this post I want to take another look at this so-called epistemological objection. I do so through the lens of Glenn Peoples’s article, simply entitled ‘The Epistemological Objection to Divine Command Ethics’. Peoples is a theist and a proponent of DCT (or so I believe). He thinks that the epistemological objection fails. His paper focuses on two versions of the objection and two versions of DCT. The first version of the objection he views as being ‘crude’; the second is slightly more sophisticated and comes from work done by Wes Morriston.

I’m going to ignore what Peoples says about the ‘crude’ versions. I tend to agree that they are crude and, frankly, uninteresting. So I’ll focus on Morriston’s version instead. As will become clear, I am much more favourably disposed to Morriston’s line of argument than Peoples seems to be. I will try to explain why as I go along.

I’ll do so in three parts. First, I’ll try to explain the differences between the two versions of DCT mentioned in Peoples’s article. Second, I’ll outline and analyse Peoples’s argument for thinking that the epistemological objection fails in the case of the first version of the DCT. And third, I’ll outline and analyse his argument for thinking that it fails in the case of the second version of DCT. I’ll offer my own responses in each section.


1. Two Versions of Divine Command Theory
Sloppy terminology is abundant in philosophy. This is a real shame since it often means that participants in philosophical debates end up talking past each other. This is particularly true in debates about DCTs, where several of the theories that are grouped under that heading are not really properly called ‘command’ theories at all.

Obviously, DCTs all share the claim that certain (perhaps all) moral statuses depend on God in some way. On a previous occasion I followed Erik Wielenberg’s suggestion and drew a distinction between two classes of these divine-dependency theories. The first, and more general, class is that of ‘theological stateism’. All theories in this class claim that certain moral statuses depend for their existence on one or more of God’s states of being (e.g. his nature, his beliefs, his desires etc). The second, and more narrowly circumscribed class, is that of ‘theological voluntarism’. Theories in this class claim that certain moral statuses depend for their existence on one or more of God’s voluntary acts (e.g. his willing or intending X; his commanding X). Voluntarist theories are a subset of stateist theories, and DCTs are a further subset of voluntarist theories. I have tried to illustrate this below.




Hopefully that is reasonably clear. Within the class of command theories, Morriston and Peoples distinguish between two further types of theory. They are:


Causal Divine Will Theories: These theories hold that some moral statuses (most commonly the status of being obligatory) are dependent for their existence on God’s willing that they be so. This sort of view was defended by Philip Quinn, and was referred to as a ‘command’ theory, but Morriston argues that it is not really about commands per se since on Quinn’s view the commands need not be communicated. Whether that is sufficient to disqualify it from being a ‘command’ theory is debatable. For now, I’ll view it as such.

Modified Divine Command Theories: These theories hold that some moral statuses (most commonly the status of being obligatory) are dependent for their existence on God’s commanding and communicating that they be so. This is the sort of view defended by Robert Adams and is, according to Morriston, properly called a ‘command’ theory since communication is essential to the creation of the particular moral status.


Adams’s view is worthy of further consideration here since it is quite popular among contemporary DCTers. I have discussed it on a few previous occasions. In essence, Adams thinks that axiological moral statuses (i.e. the status of being good or bad) do not depend for their existence on God’s commands. But he thinks that God’s commands are necessary for the creation of certain deontic moral statuses, in particular the status of being obligatory. Indeed, Adams argues that without commands from an authoritative agent we cannot know the difference between something’s being morally supererogatory (i.e. above and beyond our moral obligations) and morally obligatory. For instance, it might be a morally excellent thing for me to send half my income to charitable organisations in the developing world, but without an authoritative command we cannot say that it is obligatory.

Communication of commands is consequently essential to Adams’s theory since without being told (in some way) that X is obligatory we cannot know that it really is. This need for communication turns out to be important when assessing the strength of Morriston’s critique. I will return to it later.


2. The Epistemological Objection and Causalist Theories
Now that we have distinguished between these two versions of theological voluntarism, we can proceed to assess the strength of the epistemological objection in relation to each. We start with the causalist theory propounded by Quinn. Peoples argues that the epistemological objection has no real impact on this theory. I am less convinced of this.

We have to understand what he argues first. Peoples, following Quinn, argues that divine will theories are pure ontological theories. In other words, they do not incorporate an epistemic condition into their account of moral ontology. He doesn’t put it in these terms, but that’s the gist of it. To illustrate, he offers the following quote from Quinn on the epistemological objection:


Our theory asserts that divine commands are conditions causally necessary and sufficient for moral obligations and prohibitions to be in force. It makes no claims at all about how we might come to know just what God has commanded. For all the theory says, it might be that we can come to know what God has commanded by first coming to know what is obligatory and forbidden. After all, it is a philosophical truism that the causal order and the order of learning need not be the same. 
(Quinn 2006, 44-45)


Quinn is clear in this passage that his theory (unlike Adams’s) makes ‘no claims at all’ about moral epistemology. It only claims that an act of the divine will is necessary to bring moral obligations into existence. How people come to learn of those obligations is irrelevant. I have tried to illustrate this in the diagram below. The bit in the shaded box represents Quinn’s account of moral ontology; ordinary moral agents sit outside this box. They may come to know what the moral truths are, or they may not. This does not upset the plausibility of the underlying ontological theory.



Peoples seems to think that this is right. He thinks that if Quinn says his theory contains no epistemic conditions, then his theory contains no epistemic conditions. The epistemological objection has no foothold against such a theory. In saying this, Peoples is assisted by the fact that Morriston himself concedes that the objection has no impact on Quinn’s theory. I’m less convinced about this. For one thing, I don’t believe that the proponent of a theory is always the final arbiter of what that theory does or does not entail. For another, I believe that any plausible account of moral ontology probably has to include some implicit epistemic condition.

I am not alone in this belief. It seems to be pervasive in contemporary metaethics. I wrote a series of posts on this topic a few years back. In them, I looked at typical methodological approaches in metaethics. Oftentimes, proponents of a particular metaethical theory will assess that theory relative to a number of plausibility conditions, i.e. things that they think any good metaethical theory should account for. Included in those conditions there is usually something about how moral facts ‘join up’ with the reasoning capacities of moral agents. This typically requires some plausible account of how a moral agent comes to know what its relevant moral obligations are. A failure to account for this renders a theory less plausible. This is why there is so much discussion of debunking arguments in the literature. It is also why I wrote so much about those debunking arguments. For instance, in the debate between moral realists and moral anti-realists, some anti-realists argue that realism is implausible because it doesn’t explain how evolved beings like us could come to have knowledge of moral reality.

It could be that this approach to metaethics is fundamentally misconceived. But if it is not, then it seems like epistemic conditions must be folded into any plausible account of moral ontology. Thus, we should not be so eager to embrace Quinn’s statement that his theory ‘makes no claims at all’ about moral epistemology. It probably has to, if it is to be plausible.


3. The Epistemological Objection to Modified Divine Command Theories
Let’s move on to Adams’s theory. As I mentioned above, Adams seems to concede that his account of moral ontology includes an epistemic condition. For him, moral obligations do not exist unless they are commanded and communicated to a moral agent by God. Remember, the communication is necessary in order for the moral agent to be able to distinguish between what is supererogatory and what is obligatory. I’ve tried to illustrate this in the diagram below. You should be able to see from this how different Adams’s theory is from Quinn’s. Whereas Quinn leaves the agent’s awareness of the command out of his account of moral ontology, Adams incorporates it into his.




Morriston seizes upon this in presenting his version of the epistemological objection. It goes a little something like this:



  • (1) According to Adams, in order for X (or not-X) to be a moral obligation it must be commanded by God and communicated to the moral agent to whom it applies.
  • (2) In order for a command to X (or not-X) to be communicated to a moral agent it must be communicated via a sign that the agent is capable of identifying and understanding.
  • (3) A reasonable non-believer has no epistemic vices, but cannot identify and/or understand divine commands.
  • (4) Therefore, a reasonable non-believer cannot have moral obligations (under the terms of Adams’s theory).



We need to clarify certain aspects of this argument before we can evaluate it. First, we need to clarify the concept of a reasonable non-believer. A reasonable non-believer is someone who honestly searches for proof of God’s existence, but cannot find any evidence that brings them to believe. In doing this, the reasonable non-believer does not violate any epistemic duties. They are not bitter or biased or closed to potential sources of evidence. They simply cannot find any. The reasonableness of these non-believers is crucial to Morriston’s argument. We can safely assume that Adams’s theory does not require that commands be understood by the insane or the morally evil. It is only those who are epistemically open that are affected. Another point of clarification is that the conclusion of the argument can be taken in a number of different ways. I like to use it to argue that the modified DCT fails to provide a fully plausible account of moral ontology. Others like to use it as something akin to a reductio of the modified DCT. In other words, they say things like ‘but of course reasonable non-believers have knowledge of moral obligations; therefore, the DCT is absurd’. Maybe there is no practical difference between these two positions. Just a difference in style.

Moving on to the evaluation of the argument, there is really only one premise that is at issue. That is premise (3). A proponent of the DCT could target the first part of premise (3) and argue that there is no such thing as a reasonable non-believer. Since I like to think of myself as a reasonable non-believer, I’m not inclined to accept that line of argument. But Peoples thinks there may be something to it, though he doesn’t discuss it at any great length. That leaves the second part of premise (3) as the other potential target. The proponent of the DCT could argue that a reasonable non-believer does in fact have the ability to identify and understand the relevant divine commands. To make this argument credible, they would need to offer a fuller account of what it means for an obligation to be communicated to a moral agent. This means they need to go back into premise (2) and flesh out the standard of communication that is being implied by that premise.

Now, in his discussion of the argument, Morriston seems to have a very narrow conception of the possible forms of divine communication. He seems to think that (on Adams’s theory) God must communicate his commands in the form of a speech act. Peoples, rightly in my opinion, argues that no proponent of the DCT has such a narrow conception of divine communication. Instead, they all talk about multiple possible forms of divine communication (e.g. via moral intuition, general revelation, special revelation, and natural law). So to make the epistemological objection compelling, you must show that communication fails across these multiple possible forms.

And this is where Peoples thinks the argument falls down. Morriston argues that in order to have the requisite knowledge of the divine command, the moral agent must know the source of the command. That is to say, they must know that the command emanated from God. But of course this is exactly what a reasonable non-believer cannot know. Peoples thinks this is wrong. He says they only need to have knowledge of the content of the command. To underscore his point, he relies on Adams’s brief sketch of what it takes for God to communicate a command to an agent:

Adams’s Communicative Standard: “In my opinion, a satisfactory account of [this standard] will have three main points: (1) A divine command will always involve a sign, as we may call it, that is intentionally caused by God; (2) In causing the sign God must intend to issue a command, and what is commanded is what God intends to command thereby; (3) The sign must be such that the intended audience could understand it as conveying the intended command.” (Adams, Finite and Infinite Goods).

Peoples makes much of condition (3). He points out that this condition says nothing about the agent needing to understand the source of the command:

“Adams did not say that a sign needs to be such that a person can understand that it conveys a divine command, but only that he can understand it as conveying “the intended command”. He does not even need to know that it is a command….In slogan form: People need knowledge of the command, not knowledge about the command.” 
(Peoples 2011)

He then goes on to give an example of how someone might know the content of a command without knowing its source:

“Consider for example the possibility that God conveys the ‘sign’ to people regarding some act (let’s pick murder) via a proper function of the human conscience. Nobody needs to know what conscience is, how we got one, or that God uses it to ensure that we have some true beliefs in order for them to know, via conscience, that murder is wrong.” 
(Peoples 2011)

What he is imagining here is a case in which someone has a really strong innate feeling that murder is forbidden, without knowing how or why they came to have it. Even still, God has successfully communicated his command to them. This is why Peoples thinks that Morriston’s argument fails. He goes on to point out that in such a case a reasonable non-believer might have incomplete moral knowledge, or might fail to appreciate how bad the violation of that command is, but that this is irrelevant to whether they satisfy the epistemic condition in Adams’s argument.

I have some problems with this. To repeat something I said earlier, I don’t think we can merely take Adams’s word for it regarding the communicative standard implied by his theory. He might think that knowledge of content is all that is required; but that doesn’t mean he is right. Remember the importance of the supererogation/obligation distinction. In his original work, Adams seems pretty clear that a command from a being with the right kind of authority is needed in order for an agent to be able to distinguish an obligation from an act of supererogation. As best I can tell, this implies that the agent must have knowledge of the source of the command as well as knowledge of its content. It is not enough that the agent knows that killing is really bad, or that giving money to charity is really good. They must know whether these things are morally required of them. And under Adams’s theory, knowing that these things were commanded by the right kind of entity is critical to drawing the distinction between what is supererogatory and what is obligatory.

Admittedly, this is merely the sketch of an argument. But it seems to be truer to the communicative demands of Adams’s theory. If so, the epistemological objection still has some bite because reasonable non-believers will be incapable of knowing that a command (be it communicated via speech or conscience or whatever) emanates from the right kind of source. This is something I discussed at much greater length in my previous post on this topic.


Right, I'm exhausted with this topic now. That's it for this post.

Monday, July 20, 2015

The Philosophical Importance of Algorithms


IBM's Watson (Image from Clockready via Wikipedia)

In the future, every decision that mankind makes is going to be informed by a cognitive system like Watson…and our lives will be better for it. 
(Ginni Rometty commenting on IBM’s Watson)

I’ve written a few posts now about the social and ethical implications of algorithmic governance (algocracy). Today, I want to take a slightly more general perspective on the same topic. To be precise, I want to do two things. First, I want to discuss the process of algorithm-construction and the two translation problems that are inherent to this process. Second, I want to consider the philosophical importance of this process.

In writing about these two things, I’ll be drawing heavily from the work done by Rob Kitchin, and in particular from the ideas set out in his paper ‘Thinking critically about and researching algorithms’. Kitchin is currently in charge of The Programmable City research project at Maynooth University in Ireland. This project looks closely at the role of algorithms in the design and function of ‘smart’ cities. The paper in question explains why it is important to think about algorithms and how we might go about researching them. I’ll be ignoring the latter topic in this post, though I may come back to it at a later stage.


1. Algorithm-Construction and the Two Translation Problems
The term ‘algorithm’ can have an unnecessarily mystifying character. If you tell someone that a decision affecting them was made ‘by an algorithm’, or if, like me, you talk about the rise of ‘algocracy’, there is a danger that you present an overly alarmist and mysterious picture. The reality is that algorithms themselves are relatively benign and easy to understand (at least conceptually). It is really only the systems through which they are created and implemented that give rise to problems.

An algorithm can be defined in the following manner:

Algorithm: A set of specific, step-by-step instructions for taking an input and converting it into an output.

So defined, algorithms are things that we use every day to perform a variety of tasks. We don’t run these algorithms on computers; we run them on our brains. A simple example might be the sorting algorithm you use for stacking books onto the shelves in your home. The inputs in this case are the books (and more particularly the book titles and authors). The output is the ordered sequence of books that ends up on your shelves. The algorithm is the set of rules you use to end up with that sequence. If you’re like me, this algorithm has two simple steps: (i) first you group books according to genre or subject matter; and (ii) you then sequence books within those genres or subject areas in alphabetical order (following the author’s surname). You then stack the shelves according to the sequence.
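To make that concrete, here is a minimal sketch of the two-step shelving algorithm in Python. The sample books and field names are purely hypothetical illustrations; any grouping and ordering rule would do just as well.

```python
# A minimal sketch of the two-step shelving algorithm described above.
# The sample data and field names are hypothetical illustrations.

books = [
    {"title": "Atonement", "author": "McEwan", "genre": "Fiction"},
    {"title": "Finite and Infinite Goods", "author": "Adams", "genre": "Philosophy"},
    {"title": "Reasons and Persons", "author": "Parfit", "genre": "Philosophy"},
]

def shelve(books):
    # Step (i): group books by genre or subject matter.
    # Step (ii): within each genre, order alphabetically by author surname.
    return sorted(books, key=lambda b: (b["genre"], b["author"]))

for book in shelve(books):
    print(book["genre"], "|", book["author"], "|", book["title"])
```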

But that’s just what an algorithm is in the abstract. In the modern digital and information age, algorithms have a very particular character. They lie at the heart of the digital network created by the internet of things, and the associated revolutions in AI and robotics. Algorithms are used to collect and process information from surveillance equipment, to organise that information and use it to form recommendations and action plans, to implement those action plans, and to learn from this process.

Every day we are exposed to the ways in which websites use algorithms to perform searches, personalise advertising, match us with potential romantic partners, and recommend a variety of products and services. We are perhaps less exposed to the ways in which algorithms are (and can be) used to trade stocks, identify terrorist suspects, assist in medical diagnostics, match organ donors to potential donees, and facilitate public school admissions. The multiplication of such uses is what gives rise to the phenomenon of ‘algocracy’, i.e. rule by algorithms.

All these algorithms are instantiated in computer code. As such, the contemporary reality of algorithm construction gives rise to two distinct translation problems:


First Translation Problem: How do you convert a given task into a human-language series of defined steps?

Second Translation Problem: How do you convert that human-language series of defined steps into code?


We use algorithms in particular domains in order to perform particular tasks. To do this effectively we need to break those tasks down into a logical sequence of steps. That’s what gives rise to the first translation problem. But then to implement the algorithm on some computerised or automated system we need to translate the human-language series of defined steps into code. That’s what gives rise to the second translation problem. I call these ‘problems’ because in many cases there is no simple or obvious way in which to translate from one language to the next. Algorithm-designers need to exercise judgment, and those judgments can have important implications.

Kitchin uses a nice example to illustrate the sorts of issues that arise. He discusses an algorithm which he had a role in designing. The algorithm was supposed to calculate the number of ‘ghost estates’ in Ireland. Ghost estates are a phenomenon that arose in the aftermath of the Irish property bubble. When developers went bankrupt, a number of housing developments (‘estates’) were left unfinished and under-occupied. For example, a developer might have planned to build 50 houses in a particular estate, but could have run into trouble after only fully completing 25 units, and selling 10. That would result in a so-called ghost estate.

But this is where things get tricky for the algorithm designer. Given a national property database with details on the ownership and construction status of all housing developments, you could construct an algorithm that sorts through the database and calculates the number of ghost estates. But what rules should the algorithm use? Is less than 50% occupancy and completion required for a ghost estate? Or is less than 75% sufficient? Which coding language do you want to use to implement the algorithm? Do you want to add bells and whistles to the programme, e.g. by combining it with another set of algorithms to plot the locations of these ghost estates on a digital map? Answering these questions requires some discernment and judgment. Poorly thought-out answers can give rise to an array of problems.
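To see how these judgment calls get baked into code, here is a hedged sketch in Python. The database fields, the 50% thresholds, and the classification rule are all assumptions chosen for illustration, not the actual rules used in Kitchin’s project.

```python
# A hypothetical ghost-estate classifier. Field names and thresholds are
# illustrative assumptions; a real version would depend on the actual
# property database and on how 'ghost estate' is officially defined.

OCCUPANCY_THRESHOLD = 0.5   # judgment call: is less than 50% occupancy enough?
COMPLETION_THRESHOLD = 0.5  # judgment call: is less than 50% completion enough?

def is_ghost_estate(estate):
    """Classify an estate as a 'ghost estate' using simple threshold rules."""
    completion_rate = estate["units_completed"] / estate["units_planned"]
    occupancy_rate = estate["units_occupied"] / estate["units_planned"]
    return (completion_rate < COMPLETION_THRESHOLD
            or occupancy_rate < OCCUPANCY_THRESHOLD)

# The example from the text: 50 units planned, 25 completed, 10 sold/occupied.
example = {"units_planned": 50, "units_completed": 25, "units_occupied": 10}
print(is_ghost_estate(example))  # True under these (assumed) thresholds
```

Change either threshold and the national count of ghost estates changes with it, which is precisely why the designer’s discernment matters.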


2. The Philosophical Importance of Algorithms
Once we appreciate the increasing ubiquity of algorithms, and once we understand the two translation problems, the need to think critically about algorithms becomes much more apparent. If algorithms are going to be the lifeblood of modern technological infrastructures, if those infrastructures are going to shape and influence more and more aspects of our lives, and if the discernment and judgment of algorithm-designers is key to how they do this, then it is important that we make sure we understand how that discernment and judgment operates.

More generally than this, if algorithms are going to sit at the heart of contemporary life, it seems like they should be of interest to philosophers. Philosophy is divided into three main branches of inquiry: (i) epistemology (how do we know?); (ii) ontology (what exists?); and (iii) ethics/morality (what ought we do?). The growth of algorithmic governance would seem to have important repercussions for all three branches of inquiry. I’ll briefly illustrate some of those repercussions here, though it should be noted that what I am about to say is by no means exhaustive. (Note: Floridi discusses similar ideas under his philosophy of information.)

Looking first to epistemology, it is pretty clear that algorithms have an important impact on how we acquire knowledge and on what can be known. We witness this in our everyday lives. The internet, and the attendant growth in data acquisition, has resulted in the compilation of vast databases of information. This allows us to collect more potential sources of knowledge. But it is impossible for humans to process and sort through those databases without algorithmic assistance. Google’s PageRank algorithm and Facebook’s EdgeRank algorithm effectively determine a good proportion of the information with which we are presented on a day-to-day basis. In addition to this, algorithms are now pervasive in scientific inquiry and can be used to generate new forms of knowledge. A good example of this is the C-Path cancer prognosis algorithm. This is a machine-learning algorithm that was used to discover new ways of assessing the progression of certain forms of cancer. IBM hopes that its AI system Watson will provide similar assistance to medical practitioners. And if we believe Ginni Rometty (quoted at the top of this post), the use of such systems will effectively become the norm. Algorithms will shape what can be known and will generate new forms of knowledge.

Turning to ontology, it might be a little trickier to see how algorithms can actually change our understanding of what kinds of stuff exists in the world, but there are some possibilities. I certainly don’t believe that algorithms have an effect on the foundational questions of ontology (e.g. whether reality is purely physical or purely mental), though they may change how we think about those questions. But I do think that algorithms can have a pretty profound effect on social reality. In particular, I think that algorithms can reshape social structures and create new forms of social object. Two examples can be used to illustrate this. The first example draws from Rob Kitchin’s own work on the Programmable City. He argues that the growth of so-called ‘smart’ cities gives rise to a translation-transduction cycle. On the one hand, various facets of city life are translated into software so that data can be collected and analysed. On the other hand, this new information then transduces the social reality. That is to say, it reshapes and reorganises the social landscape. For example, traffic modelling software might collect and organise data from the real world, and planners will then use that data to reshape traffic flows around a city.

The second example of ontological impact is in the slightly more esoteric field of social ontology. As Searle points out in his work on this topic, many facets of social life have a subjectivist ontology. Objects and institutions are fashioned into existence out of our collective imagination. Thus, for instance, the state of being ‘married’ is a product of a subjectivist ontology. We collectively believe in and ascribe that status to particular individuals. The classic example of a subjectivist ontology in action is money. Modern fiat currencies have no intrinsic value: they only have value in virtue of the collective system of belief and trust. But those collective systems of belief and trust often work best when the underlying physical reality of our currency systems is hard to corrupt. As I noted before, the algorithmic systems used by cryptocurrencies like Bitcoin might provide the ideal basis for a system of collective belief and trust. Thus, algorithmic systems can be used to add to or alter our social ontology.

Finally, if we look to ethics and morality we see the most obvious philosophical impacts of algorithms. I have discussed examples on many previous occasions. Algorithmic systems are sometimes presented to people as being apolitical, technocratic and value-free. They are anything but. Because judgment and discernment must be exercised in translating tasks into algorithms, there is ample opportunity for values and biases to affect how they function. There are both positive and negative aspects to this. If well-designed, algorithms can be used to solve important moral problems in a fair and efficient manner. I haven’t studied the example in depth, but it seems like the matching algorithms used to facilitate kidney exchanges might be a good illustration of this. I have also noted, on a previous occasion, Tal Zarsky’s argument that well-designed algorithms could be used to eliminate implicit bias from social decision-making. Nevertheless, one must also be aware that implicit biases can feed into the design of algorithmic systems, and that once those systems are up and running, they may have unanticipated and unexpected outcomes. A good recent example of this is the controversy created by Google’s photo app, which used an image-recognition algorithm to label photographs of some African-American people as ‘gorillas’.

Anyway, that’s all for this post. Hopefully the challenges of algorithm construction and the philosophical importance of algorithmic systems are now a little clearer.


Wednesday, July 15, 2015

How should you title an academic article?




I have two guiding presumptions about the nature of academic publishing. The first is that academics want their work to be read. Academia is, for better or worse, a popularity contest. Academics want their work to be popular among other academics, and among policy-makers and the general public (depending on their goals and the nature of their research). ‘Popular’ doesn’t necessarily mean respected or admired. It is, of course, better to be popular and right, or popular and interesting, or popular and thought-provoking. But if you can’t be any of these things, then being debated and discussed is probably better than being ignored (within reason: if you are so controversial or stupid that you are constantly ridiculed, harassed or threatened, it is unlikely to be pleasant; anonymity might be better in that case).

In saying this, I don’t mean to downplay the intrinsic merits or rewards of writing and research. There is a lot to be said for the process of thinking and puzzling out an issue; of gaining private insight into some important concept or truth. But if you are only in it for these intrinsic rewards, then you don’t need to publish at all. If you are publishing your work, then popularity must matter at some level. This is true even if you only care about publishing in terms of the material rewards it brings. In the modern academy, career advancement depends, to a large extent, on how popular your work is. Universities love popularity metrics (e.g. reputational rankings). And the importance of all this is reflected in the fact that most academic publishers now provide you with a variety of popularity metrics whenever you publish your work with them. These include things like the number of downloads, shares on social networking sites, and citation rates. Academics often reference these things when looking for promotion or employment (I know I do).

My second presumption about the nature of academic publishing is that attention spans are incredibly short, and probably getting shorter all the time. This is certainly true for me. The internet is a rich cornucopia of information, and academic papers are published at an alarming rate. Deciding which papers to read is like trying to drink from a firehose. This means that if you want your work to be read, you really need to grab the potential reader’s attention. But how can you do this? I have a tendency to use my own experience as a guide — based on the assumption that there is nothing abnormal or non-average about me. A more data-driven approach would be useful but I’m quite lazy on that front. In any event, based on my own experience, two things determine whether or not I will read an article: the first is the article title; the second is the article abstract.

Now, I have a pretty rigid set of views about what an article abstract should look like. I think it should provide a very clear summary of the argument (or arguments) that will be defended in the article. The reader should be left in no doubt about the position(s) you will end up with at the end of the article. I also have a preferred template or test I use when writing an abstract. I wrote about this on a previous occasion. But despite my well-ordered approach to writing article abstracts, my approach to article titles is completely haphazard. I come up with something that feels or looks intuitively adequate, and then I think about it no more.

But if the goal is to be read, then this is a pretty odd approach to take. In many ways, the title is likely to be more important than the abstract. The title is the first thing the reader sees. It will determine whether or not they even look at the abstract. So I really should be thinking about article titles in a more systematic manner. This post is a first step in this direction. I want to use it to catalogue some of my previous article-titling strategies, and to offer some reflections or thoughts on these strategies. And I also want to use it as a springboard for debate and discussion. It would be great if people could share their own thoughts and reflections on how to come up with article titles in the comments section.

I’ll start the ball rolling by describing my own approaches. As I just said, I’m pretty haphazard on this front. Nevertheless, there are some patterns and rules to what I do. The main rule is that I don’t like overly ‘clever’ or ‘funny’ titles. When I first started reading academic journal articles, I was enamoured with what I took to be funny or clever titles. I won’t name or shame anybody but you can imagine the kind of thing. Articles with titles like: ‘Bitch Better Have My Money: On the wisdom of debt forgiveness’. Over time I grew tired and suspicious of these titles. Maybe this is irrational, but I think titles of this sort have a tendency to obscure. My own preferred titling-strategies settle into four categories:


The Question Title: A title which contains a provocative question of some sort. Some people hate question-titles. There is a long-standing trope in journalism that any headline in the form of a question can always be answered ‘no’. But this isn’t true and I think question-titles have great merit. Questions can raise intriguing issues that pique a reader’s curiosity, and I think they can convey the subtle implication that the approach taken in the article will be inquisitive and non-ideological in nature (even if concrete conclusions are reached). I have only used a question in two of my article titles in the past — “Robotic Rape and Robotic Child Sexual Abuse: Should they be criminalised?” and “Hyperagency and the Good Life: Does Extreme Enhancement Threaten Meaning?” — but I think I will do it more often in the future.

The Propositional Title: A title which contains (implicitly or explicitly) a clear statement of the main proposition(s) that will be defended in the article. I think this is a good approach to take, provided that the propositions being defended are interesting and capable of being stated succinctly. Many of my article titles are implicitly propositional, but I think only one or two have been successful on this front. My article on AI risk was titled “Why AI Doomsayers are like Sceptical Theists and Why it Matters”, which sets out pretty clearly what I will attempt to argue in the main body of the article. And my article on the death penalty was titled “Kramer’s Purgative Rationale for Capital Punishment: A Critique”, which just about manages to imply what will be argued, though it doesn’t explain exactly what the problem with Kramer’s rationale is. I would like to experiment with more explicit propositional titles in the future.

The Descriptive-Triplet or -Doublet Title: A title which mentions the two or three key concepts or topics that will be addressed in the article. Descriptive titles definitely have their merits. I like them because they can be effective ways of conveying to the reader what the article is about, and they can allow readers to easily identify whether the concepts or topics covered are relevant to their own areas of research. I also think that doublets and triplets can be succinct, memorable and pleasing to the ear. Nevertheless, this is definitely a format that I tend to overuse, and I often fall back on it when desperate. For example, my last two articles have adopted the descriptive-triplet format — “Human Enhancement, Social Solidarity and the Distribution of Responsibility” and “Common Knowledge, Pragmatic Enrichment and Thin Originalism”. These seem dull and uninspiring to me now. I’m not sure I would have any interest in reading an article with a similar-sounding title.

The Ridiculous Title: A title which attempts to be provocative, descriptive or propositional but which fails due to length or obscurity. This is really just a catch-all category for the article titles I have come up with which seem — to my eyes — to fail miserably to provide an interesting hook for a reader. My favourite example of this from my own work is the article I published earlier this year on brain-based lie detection. For some unknown reason I thought the following would be a good title: “The Comparative Advantages of Brain-Based Lie Detection: The P300 Concealed Information Test and Pre-Trial Bargaining”. I think the idea was to provide a title that covered the main concepts and ideas, and then gave a sense of what the argument would be (something about the ‘comparative advantages’ of brain-based lie detection tests, whatever they are). But I think it fails miserably because it is replete with jargon (what is a “P300 Concealed Information Test”?) and is overly long. If I were given the chance, I would definitely re-title it to something like “Stopping the Innocent from Pleading Guilty: How Brain-Based Lie Detection Might Help” — which would give a clearer sense of what is being argued in the piece and why it is important.

So those are my strategies and thoughts. Do you have any thoughts on this topic? Do you know of any good data-based studies of academic article titles? (Someone must have looked into this in a systematic way). If so, please share in the comments section.