Wednesday, August 19, 2015

The Shape of an Academic Career: Some Reflections on Thaler's Misbehaving




I have long been interested in behavioural science and behavioural economics — heck, I even wrote a master's thesis about it once. I have also long been interested in the nature and purpose of an academic career — which is not that surprising since that’s the career in which I find myself. It was for these two reasons that I found Richard Thaler’s recently published memoir Misbehaving: The Making of Behavioural Economics to be an enjoyable read. In it, Thaler skillfully blends together an academic memoir — complete with reflections on his friends and colleagues, and the twists and turns of his career — and a primer on behavioural economics itself. The end result is a unique and reader-friendly book.

But I don’t really want to review the book or assess the merits of behavioural economics here. Instead, I want to consider the model of the academic career that is presented in Thaler’s book. This is something that has been bothering me recently. Given as I am to philosophical musings, I do occasionally find myself waking up in the mornings and wondering what it’s all for. Why do I frantically read and annotate academic papers? Why do I try so desperately to publish an endless stream of peer-reviewed articles? Why do I clamour for attention on various social media sites? I used to think it was just because I have a set of intellectual passions, and I want to pursue them to the hilt. If that means spending the majority of my time reading, writing, and sharing my work, then so be it. As Carl Sagan once said, ‘when you’re in love, you want to tell the world’, and if you’re in love with ideas, that’s the form that your expression takes.

More recently, I’ve begun to question this view of my life. To this point, I have pursued my intellectual interests in a more-or-less haphazard fashion. If I’m interested in something, I’ll read about it. And if I’m really interested in it, I’ll write about it. I don’t worry about anything else. I don’t try to pursue any grand research agenda; I don’t try to defend any overarching worldview or ideology; I don’t try to influence public debate or policy. The result is an eclectic, disjointed, and arguably self-interested body of work. Should I be trying to do more? Should I be focused on some specific research agenda? Should I worry about the public impact of my work?

There seem to be at least two reasons for thinking I should. First, in terms of research agenda, it seems that one way to get ahead in academia (i.e. to win research funding, wider acclaim and promotion) is to be an expert on a narrow range of topics. The eclectic and haphazard approach of my previous work is out of kilter with this ideal. Being a jack of all trades but master of none is a surefire way to academic mediocrity. Second, pursuing intellectual interests for their own sake can be said to be both irresponsible and selfish. You ought to think about the public impact of your work (if only to save your job in the wake of the recent ‘impact’-fetishism in higher education). You ought to improve the world through what you do. Or so I have been told.

I have some problems with these claims. I don’t enjoy thinking about my work in purely instrumentalist terms. I am not convinced that eclecticism is such a bad thing, or that one should pursue ideological consistency as an end in itself. And while I would certainly like to make the world a better place, I would worry about my lack of competence in this pursuit. Ideas can make the world a much worse place too, as even a cursory glance at history reveals. That said, I do often feel the call of public spiritedness and grubby instrumentalist careerism.

Which brings me back to Thaler’s book… not that he’s a grubby careerist or anything (I don’t know the guy). It’s just that the book, perhaps inadvertently, presents a particular model of an academic career that I found interesting. It’s not the explicit focus of the narrative, but if you zoom out from what he is saying, you see that there are three main stages to his academic career. They weren’t pursued in a strict chronological fashion — there was some overlap and back-and-forth between them — but they are distinguishable nonetheless. And when you isolate them you see how it is possible to build a career from a foundational set of intellectual interests into something with greater public impact.

The three stages were:

Stage I - Pursuing one's Intellectual Curiosities: It would be difficult to be a (research-active) academic without having some modicum of intellectual curiosity. There must be something that piques your interest, that you would like to be able to understand better, or evaluate in more depth. Without a foundation in intellectual curiosity, it would be difficult to sustain the enthusiasm and hard work required to succeed. I choose to believe that anyway, and it certainly seems to be the case for Thaler. As an economics student, he was taught the standard rational utility maximising theory of human behaviour. But then he spotted all these examples of humans behaving in ways that contradicted this theory. He describes all this in Chapter 3 of the book where he talks about the ‘List’. This was something he compiled early in his career: a list of all the anomalies that were starting to bother him. A large part of his research was taken up with trying to confirm and explain these anomalies. And the curiosity didn’t stop with this early list either. Later in the book he gives some illustrations of how his curiosity was always being piqued by seemingly mundane phenomena, such as the formula the University of Chicago business school used for allocating offices in its new building, or the behaviour of contestants on popular game shows. I like to think this sort of ‘curiosity for the mundane’ is valuable, partly because I think curiosity is an end in itself and partly because mundane or trivial phenomena often provide insights into more serious phenomena. Either way, curiosity was the bedrock of Thaler’s career.

Stage II - Influencing one's Academic Peers: In academia there are few enough objective standards of success (outside of mathematics and the hard sciences anyway, and even there influencing one’s peers is important). The only true measure of the academic value of one’s research is its acceptance by and influence on one’s academic peers. To some extent, the mere publication of one’s original research in well-respected journals is a way to achieve this influence, but it is often not enough. After all, very few people read such articles. You often need to take a more concerted approach. We see evidence of this in Thaler’s life too. His behavioural research presented a challenge to the received wisdom in his field. If he was going to get ahead and have any impact on the world of ideas, he needed to engage his peers: convince them that there was indeed something wrong with the traditional theory and influence future debates about economic theory. Part V of his book is dedicated to how he did this. Three examples stuck out for me. The first was relatively obvious. It concerned a conference he participated in in 1985. The conference was a face-off between the traditional economic theorists and the more radical behaviourists. This conference format forced engagement with more sceptical peers. The second was a regular column he managed to secure in a leading economics journal (the Journal of Economic Perspectives). The column was entitled ‘Anomalies’ and in it he presented examples of anomalies challenging the mainstream theory. The articles were intended for the economics profession as a whole, not just for research specialists, and most often involved findings from other researchers (i.e. not just from Thaler himself). The column gave him a regular platform from which he could present his views. And third, there was the summer school for graduate students that he helped to create back in 1992. This provided intensive training for future academic economists in the theories and methods of the behaviouralist school. This helped to ensure a lasting influence for his ideas. Building such outlets for academic influence looks like a wise thing to do.

Stage III - Pursuing Broader Societal Impact: The term ‘academic’ is sometimes used in a pejorative sense. People refer to debates as being ‘strictly academic’ when they mean to say ‘of little relevance or importance’. Academics often struggle with this negative view of their work. Some embrace it and defend to the hilt the view that their research need have no broader societal impact; others try to use their work to change public policy and practice. This is something Thaler eventually tried to do with his work (once the solid foundation in basic research had been established). There are several examples of this dotted throughout the book. The most important is probably the book he wrote with his colleague Cass Sunstein called Nudge. This book tried to show how behavioural research could be used to improve outcomes in a number of areas, from tax collection to retirement saving to bathroom hygiene. The book was published with a popular press and found its way into the hands of current British Prime Minister David Cameron (at the time he was leader of the opposition). Impressed by the book, Cameron established the Behavioural Insights Team to help improve the administration of the British government. Thaler was also involved in setting up and advising this team and he still works with them to this day. Now, I’m sure you could challenge some of the work they have done, but what’s interesting to me here is how Thaler managed to successfully leverage his research work into real-world impact.


Just to be clear, I’m not suggesting that all successful academic careers will have these three stages, or that, if they do, they will look just as they did in Thaler’s case. All I’m suggesting is that there is a useful model in these three stages, one that I might think about using for my own career. Thus, I want to make sure that I always maintain a firm foundation in intellectual curiosity, and use that to generate a valuable set of insights/arguments. That’s effectively what I have spent most of my time doing so far and I will continue to do it for as long as I can. But I don’t want to just leave it at that. From this foundation I want to think about ways in which to influence both my academic peers and the broader society. I don’t think my work lends itself to the same kind of practical impact as does Thaler’s, but I think that’s okay. Societal impact can be generated in other ways, e.g. through public education and inspiration, and I suspect that might be more my thing.

Wednesday, August 12, 2015

The Art of Academic Reading: Strategies and Tactics




If you’re like me, then reading will be an important part of your life. Indeed, it might just be the most important part of your life. I’m not an empirical researcher. I don’t have a lab in which to perform experiments. I don’t interview people or conduct surveys. I don’t go out in the ‘field’ and collect data. Heck, I rarely even leave the privacy of my own home. My primary form of research consists in sitting in quiet isolation, reading a bunch of stuff, thinking about it for a while, and then hopefully stumbling upon an interesting idea or argument. Reading is critical to what I do. It is the ‘field’ in which I collect my data; and the ‘laboratory’ in which I conduct my experiments.

Given its importance, it would probably behoove me to have some method or theory of reading. Methods and theories seem pretty important in other aspects of my life. When I write, or run, or cook, or clean, I don’t do so haphazardly. There is some set of defined steps, some method to my madness. I have a sense, however vague, of what I’m trying to achieve with these activities and I put in place a plan that I think will best achieve these ends. You would think that it would be the same with reading, but oddly I have found this not to be the case. Many people I know, including many academics, approach reading in a fairly haphazard and intuitive manner. I do not exempt myself from this. It is only in recent times that I have really reflected on the methods and theories behind my own daily reading. As per usual, the prompt for these reflections was the need to teach students the art of academic reading. The purpose of this post is to share some of these initial reflections.

Before I get to them, I hope you’ll indulge me for a moment or two as I consider further the need for such reflections (if you’re really only interested in my theories and methods you can skip to the next section). I think it is important to teach students something about the art of reading, but in my (limited and narrow) experience this is not often done, or if it is done it is not done well or systematically. I know that there are famous guides to reading, both general and discipline specific, but I don’t know how often those are taught as opposed to being proffered as further reading for interested students. And in my own discipline of law, I find that the efforts to teach the art of reading are woefully inadequate. The typical assumption seems to be that students already know how to read: our job should simply be to test whether they understood what they read. Indeed, I have encountered some really strange attitudes toward reading in my life as both a student and a lecturer. Two quick anecdotes about this.

When I was a student I distinctly remember, in my first ever tutorial, being told by my tutor that it should take me 12 minutes to read a 15-page case. At the time, I thought she was being serious, and I was disheartened when it took me much longer to get through it, but I later realised that this was a bizarre piece of advice. Cases are one of the bits of ‘raw data’ with which a lawyer must contend. They contain reasoned legal arguments that the lawyer can accept or dispute. Getting to grips with these bits of data is an essential part of a law student’s life. There’s no way this can be done properly for a 15-page case in a mere 12 minutes, certainly not if you are a first year law student. The tutor’s mistake came from assuming that reading a legal case was much the same thing as reading one of John Grisham’s legal novels. But these activities are not the same. One is a highly engaged intellectual process; the other is passive enjoyment.

Similarly, when I was starting out as a lecturer, I remember one of my colleagues (who shall remain nameless) expressing bafflement at the notion that we should teach students how to read. She believed that this was something they would already know how to do, and that there were far more important things to be teaching them, such as the content of legal rules. Now, I admit there is some value to teaching law students the content of legal rules, but once again I was struck by how bizarre her attitude toward the teaching of reading was. Since reading is a practice that is intrinsic to virtually every aspect of the law (practical or academic), one would think that there could be nothing more important than teaching students how to engage in this practice well. Maybe my colleague didn’t really mean it — maybe if she had reflected on it for a moment she would have conceded that it is something upon which students need greater guidance — but her initial reaction is, I believe, indicative of the unreflective and intuitive approach most people have toward the art of reading.

Anyway, with all that in mind, I want to present my own thoughts on the theories and methods of reading. This is very much my first systematic attempt to get these reflections down in written form, and I hate to be overly prescriptive in what I say. There is much to be said for the notion that this reflective attitude toward reading is something that people can learn themselves over time. Consequently, I offer these thoughts merely as one way (among many) to think about and practice the art of reading. I would be happy to hear of other approaches in the comments section.


1. Why do I read?
The comedian Bill Hicks used to do a bit about reading a book in a waffle house in Fyffe, Alabama. He had just done a show, and he was hungry, so he ordered some waffles. While eating, he decided to occupy his mind by reading a book. The waitress approached him and asked ‘Hey, what you reading for?’. Hicks was surprised. He thought this was the weirdest question he had ever been asked. People might ask ‘what are you reading?’, but never ‘what are you reading for?’. Hicks thought about it for a bit and responded ‘I guess I read so I don’t end up being a waffle waitress’.

Stand-up routines are never quite as funny when reduced to prose, so you’d need to watch the bit to get the full effect (there’s more to it than I’m letting on as well - see for yourself). Clearly, Hicks was engaging in a bit of ‘Southern-bashing’, chastising and poking fun at the backward and anti-intellectual attitudes among denizens of the South. This is, no doubt, an unfair cultural generalisation. And, as it happens, I think the waitress’ question is a good one. It is always worth considering the purpose behind our reading. When I consider this myself, I find that the purposes are threefold:

Pleasure: I often read purely for pleasure, i.e. for the subjective enjoyment I get from following a story or narrative or argument. This is mainly true for fiction reading; but I also find certain forms of non-fiction reading to be highly pleasurable.

Understanding/Insight: I frequently read for understanding and insight, i.e. to gain a deeper appreciation for why something happens the way it does, to gain practical knowledge, to appreciate something from a new perspective, or simply to learn something cool. This is mainly true for non-fiction, but, of course, well-written fiction can often help you to gain insight or understanding.

Fuel for the imagination: I also read so I can provide fuel for my own thinking and critical reflection. This might be slightly more esoteric so allow me to explain. As I mentioned above, I am not an empirical researcher. My academic work consists in combining and recombining ideas, arguments and concepts that others have presented or written about in the past. In essence, I spot patterns and connections between different bodies of ideas. The only real novelty in my work comes from the combinatorial process. To do this, of course, I need to have a storehouse full of ideas, arguments and concepts in my brain and I need to have situational prompts or habits that allow me to see connections between these ideas, arguments and concepts. Reading is essential to this because it helps me to fill up the storehouse. This is the sense in which it provides fuel for my imagination.

The three purposes are not mutually exclusive. The same text can be a source of pleasure, understanding and imaginative fuel. That said, I think the three purposes are separable, to at least some extent. In other words, I think it is possible to read purely for pleasure, purely for understanding, or purely to provide fuel for the imagination. By this I mean that it is possible to approach the task with only one of these purposes locked in the focus of the conscious mind (your mind may subconsciously and automatically combine these purposes).

Knowing why you are doing something is a good first step to figuring out how to do it better. This is where reading methods come in. I won’t focus on reading for pleasure in the remainder of this post since my concern is really with ‘academic’ reading, which falls more squarely within the purview of the other two purposes. If you are reading for understanding/insight and to provide fuel for the imagination, you need to do something to ensure that (a) you are fully intellectually engaged by the process (i.e. that you are comprehending and evaluating the ideas being presented), and (b) you have some reinforcement mechanism (i.e. some way to remember the ideas, arguments and concepts, and to fuse them into your mental frameworks). I’ll talk a bit about how you can do this. In doing so, I’m going to adopt a military metaphor and distinguish between reading strategies and reading tactics.


2. What are my reading strategies?
In military parlance, a strategy is a general campaign plan, whereas a tactic is a specific method or step used to implement that campaign plan. That’s a first pass at the distinction anyway. A lot has been written about it and some people flesh out the distinction in more rigorous and nuanced ways. I don’t want to go into too much detail here since I’m merely using it as a rough analogy for understanding how I approach the task of reading. For me, the term ‘reading strategy’ denotes a general style of reading, with a particular purpose or set of purposes in mind; whereas a ‘reading tactic’ is a specific step taken to achieve those purposes. You may wonder why we need a separate category of reading strategies when we have already identified a set of reading purposes, but I think the category is needed because there are distinctive reading styles that are differentiated by the amount of time and intellectual effort they involve.

I employ two general reading strategies. They are:

Broad Brush: I use this strategy when I want to read a lot of material in a relatively short period of time, and I don’t want to critically engage with the minute details of the arguments, ideas or concepts presented in the text. Instead, I just want to get the general gist of those arguments, ideas and concepts. I mainly use this strategy when reading popular non-fiction books, for example popular science, history or philosophy books. I use these texts to expand my general knowledge, and as gateways into new fields of inquiry. I approach them in a relatively open-minded fashion, hoping to gain new insights that may prove useful in the future; I’m not really concerned with critiquing them. Hence the broad brush style seems most appropriate for these texts.

Deep Dive: I use this strategy when I really want to critically engage with the minute details of the arguments, ideas or concepts presented in the text. This is a much more labour intensive style of reading. It takes a long time, and involves copious amounts of notetaking and reflective interludes (I’ll say more about these ‘tactics’ below). I mainly use this strategy when reading academic articles or monographs within my current field of research. This makes sense since these texts are the primary source material for my own imaginative combinatorics and critical evaluations.

Although I describe these as two separate strategies, it is important to realise that they are not truly distinct. They represent the two ends of a spectrum. One can slide along this spectrum depending on the degree of effort and time expended in the reading process. I should also clarify that I don’t adopt these strategies on an exclusive basis. I will often flit back and forth between them whilst reading the same text. Thus, even though I said I ‘mainly’ use the broad brush strategy whilst reading trade non-fiction, I don’t do so exclusively. Sometimes I’ll find that one of the ideas or arguments presented in such a book warrants a deeper dive, and so I’ll slow down and start to engage with the text in a more intensive fashion. The same goes for academic articles and monographs. Sometimes I’ll speed up and start reading these in a broad brush style.

I think it is important to adopt both strategies. You might think that the labour intensive style of the deep dive is optimal and that really we should employ this style for all texts we read. But I don’t think that is true. One reason for this is that I don’t think people fully appreciate how much time it takes (in my opinion) to do a proper deep dive. This was the mistake of my tutor when she suggested that we read a 15-page case in 12 minutes. In reality, something like this should take a student a couple of hours. I know it takes me at least two hours to do a deep dive on a typical 10,000 word academic article. Oftentimes it takes longer, depending on the difficulty of the piece and my level of interest. But if I constantly did deep dives like this I would severely limit the amount I could read. This is why I think it is important to supplement deep dive reading with broad brush reading. That way you get a nice balance between depth of analysis and breadth of knowledge (hence the names).

That said, I have no idea what the optimal mix of these strategies is.


3. What are my reading tactics?
Tactics are the nitty gritty, step-by-step details of the reading process. The enumeration of such details might be something that is only of interest to uber-reading-nerds, but I’ll risk joining their ranks by providing some detail about what I do. The tactics I employ depend on the reading strategy I’m following, so I’ll need to talk about both (i) broad brush tactics and (ii) deep dive tactics.

There are, however, some shared tactics and it makes sense to start with them. For example, no matter what the strategy, I always try to cultivate a consistent and diverse reading habit. By consistent I mean I try to read every day, often at set times. A typical reading routine for me is to read for 30-60 mins first thing in the morning (either whilst having breakfast or shortly thereafter); to read in the afternoon (usually between 3 and 5); and to read late at night (just before going to bed — I usually don’t read in bed because I tend to fall asleep pretty quickly). By diverse I mean both that I try to read a wide range of materials and that I jump back and forth between the different strategies. I think it is important to read a wide range of material (from many disciplines) because this helps you to identify novel connections and combinations of ideas. And I have already given my reasons for thinking that diverse strategies are important. On a normal day, I will do broad brush reading in the mornings and late at night. I will do deep dive reading in the afternoon. Even though I strive for consistency and diversity, I often fail at both things. It is not always possible to read at the same times every day, and sometimes I get stuck in routines where I’ll read the same kind of material over and over again. I don’t beat myself up about this. In the very long run I think I manage to maintain consistency and diversity.

In terms of broad brush reading tactics, there are really only three or four things I do on a regular basis. Remember, the purpose of broad brush reading is to get the general gist of an argument, idea or concept. It is about breadth of coverage, not depth. So the tactics cannot be too labour intensive. Still, you need to have some way to understand and reinforce the breadth of the material you are reading. The simplest way to do this is to pause and reflect on what you are reading. I do this a lot. If you ever watched me reading something, you’d see me frequently gazing into the middle distance and you might assume I was goofing off. But I’m usually thinking about what I have just read (usually…). I also dog-ear important pages in the book for future reference (if I’m reading on Kindle, I’ll bookmark or highlight, but if I am honest, I find Kindle pretty much useless for any sort of intellectually engaged reading; I use it almost exclusively for pleasure reading). Some people are infuriated by my habit of dog-earing books, arguing that it ‘ruins’ them. I find this odd. I don’t think the value of books lies in their resale value (which is negligible anyway); the value lies in what I get out of them. If dog-earing helps me to get more out of them, so be it. That said, dog-earing by itself is not hugely effective. You need to revisit and reconsider the key passages. This requires discipline and I often fail to be disciplined. That’s why, if I really want to reinforce something, I’ll write short end-of-chapter summaries. Sometimes (but sadly not always) chapters end on pages with plenty of blank space. If I’m so minded, I will use this space to summarise, in my own words, the key arguments and ideas. The image below shows an example of this from a book I read about David Hume’s argument from miracles (I eventually did a deep dive on this book, though that wasn’t my original intention).
Another thing I will do (though I’ve only recently experimented with it) is assemble my own book-index. So, inside the front cover, I will write down page numbers and brief descriptions of the key ideas on these pages. This is a handy guide for future reference.

Fogelin's Defence of Hume on Miracles - Chapter Summary


Although I use all of these tactics at different times, I find that by far the most useful broad brush tactic is to listen to podcasts or watch videos in which the author of the book I am reading lectures on or is interviewed about its main ideas. This is something that has only really been made possible in the past few years but it is now exceptionally easy to do: authors are encouraged to promote their work by doing talks and interviews, and a huge volume of this promotional effort is now archived online. To give an example, I am currently reading the book Why the West Rules - For Now by Ian Morris. It is a long and detailed study of the patterns of social development across the East and West over the past 14,000 years. It is full of interesting and provocative insights. I highly recommend it. To supplement my reading of this book, I have watched or listened to five or six different talks given by Morris. I listen to these repeatedly, whilst driving or cooking or when performing other manual tasks. Doing so helps to guide my reading of the book, and to reinforce its key ideas.

Turning to deep dive tactics, these are obviously more time-consuming and labour intensive. Remember, the main goal of deep dive reading is to understand the minutiae of the arguments, ideas and concepts contained in the text, and to spot interesting patterns or connections. To do this, I need to engage in extensive annotation of the material. The precise method of annotation depends on whether I am reading the material (usually an article or monograph) in hard copy or digitally:

Hard Copy Annotation: I will underline key passages; summarise the main steps in the argument in the margins; write down critical questions or objections when appropriate (though I don’t do this too often — I tend to save the critical probing for when I eventually write about the relevant idea, if I ever do). I will also diagram the arguments in the article, or draw some other flow chart or picture that helps me to understand what has been written. I am quite a visual thinker and I enjoy representing complex ideas in more than one dimension. I have tried to give some examples of these annotations in the photographs below.
Hard Copy Annotation - Summarising key points in the margin

Hard Copy Annotation - Diagramming key concepts



Digital Copy Annotation: I use a program called Papers for the Mac. This helps me to store, read and annotate digital copies of articles and monographs. I do this by reading in full screen, highlighting key passages, and using the note-taking function to write a rolling summary of the article. I find digital annotation less flexible and less engaging than hard copy annotation. But it has some compensating benefits: it is much faster to type summaries; and there is no need to print and physically store copies of the annotated papers. I’m doing this more often than I used to, but I still like to do a lot of hard copy reading and annotation. The screenshot below gives a flavour of how digital annotation works on Papers. I’m sure there are similar or better programs out there. I just happen to like this one.


Digital Copy Annotation on Papers - Summarising key arguments with notetaking function


I like to read things once and to do so in the most intellectually engaged manner that I can. I’ll then rely on my own notes and summaries if I want to revisit the piece. I don’t like the method of reading through something once to get the general gist and then going back over it to take notes. That seems like a waste of time to me: I have enough trouble motivating myself to read something once, never mind doing it multiple times.

The main tactic I employ for reinforcing deep dive reading is writing. If I want to think further about something, I will either write a blogpost about it or use it as the basis for an academic article. I can think of no more effective reinforcement method. Writing is a type of thinking. Writing blogpost summaries of an article I have just read really forces me to make sure that I understand what it is saying, that I am being charitable to its author, and that I critically engage with its contents. This was one of the main reasons I started blogging in the first place. I didn’t do so because I wanted to be read (though that is nice); I did so because I wanted to forge a deeper understanding of the material I was reading.

Anyway, that’s all I have to say (for now) about the art of academic reading. I have tried to summarise my strategies and tactics in the diagram below. As I said at the outset of this post, I would love to hear from readers about their own reading strategies and tactics. What do you do differently? What do you find most effective?



Thursday, August 6, 2015

Does God guarantee meaning in life? A Novel Argument for Atheism




Meaning is important. People want to live meaningful lives. They want to make a ‘difference’. They want it all to ‘matter’. Some people think that this is only possible if God exists. They say that if God does not exist, then we are doomed to live finite lives on a finite planet in a finite universe. Everything will eventually collapse, crumble and die. It will all be for naught. But if God does exist, there is hope. He will save us; He can guarantee our eternal lives in the most perfect state of being; He can imbue the universe with purpose and value.

But is this traditional picture of the relationship between God and meaning right? I have written numerous posts challenging it over the years. But I am always keen to find fresh perspectives. That’s exactly what Megill and Linford’s recent paper ‘God, the Meaning of Life, and a New Argument for Atheism’ provides. They make an interesting, two-part case. The first part argues that God’s existence would indeed guarantee meaning in life. The second part argues that even though God’s existence would guarantee meaning, it is highly unlikely that God himself is the source of that meaning.

As I say, this is an interesting juxtaposition of arguments. On the one hand God is said to be sufficient for meaning; on the other hand he is not thought to be necessary for it. I want to look at both sides of this equation over the next two posts. Today, I look exclusively at the first part of the argument. As we shall see, Megill and Linford think that this argument has an interesting consequence: it allows us to formulate a new argument for atheism.


1. Why God’s Existence Should Guarantee Meaning
Let’s get something straight first: ‘meaning’ is a tricky concept. It denotes a property of human lives that is thought to be valuable and worth having. It is distinct from the property of well-being, though it may be related to it (i.e. well-being may be necessary or sufficient for meaning, according to some theories). It is also likely to be distinct from similar properties like significance or purpose or worthwhileness, though oftentimes the term ‘meaning’ is used interchangeably with these other terms. Another important point is that meaning is usually understood to come in degrees. It is not a binary, all-or-nothing phenomenon. It is not the case that you either have a meaningful life or you don’t. Rather, you can have degrees of meaning in your life, though this is consistent with the existence of some threshold of meaningfulness that is needed to make your life worth living.

There are many theories of meaning. I have covered some of them in previous blog posts. One of the most popular is that God somehow provides or imbues our lives with meaning. Megill and Linford’s first argument agrees with this. They say that God can indeed ensure that our lives are meaningful. The argument is pretty straightforward. If we assume that God is omnibenevolent (or maximally benevolent); and if we assume that meaning is something that makes human life better than it would otherwise have been; then it looks like God would only desire to create lives with meaning. Of course, desire is one thing, practical realities are another. To ensure that our lives have meaning God would have to have the ability and power to actualise meaningful lives. Fortunately, those powers are also part and parcel of the traditional concept of God. He is, after all, also said to be omnipotent and omniscient.

That gives us the following argument:


  • (1) If God exists, he is omnibenevolent, omnipotent and omniscient (or maximally good, powerful and knowledgeable, or whatever variant on ‘perfect being’ theology you happen to prefer)
  • (2) An omnibenevolent God would not create meaningless lives.
  • (3) An omniscient God would know whether or not lives had meaning.
  • (4) An omnipotent God could actualise a world in which our lives had meaning.
  • (5) Therefore, if God exists, our lives must have meaning.


We’ll run through a few critiques of this argument below, but a couple of points are worth noting before that. First, in relation to premise (2), it might seem intuitively obvious that a perfectly good being would, if possible, create lives with meaning. But this intuitive obviousness can be underscored by another argument that draws explicit links between meaning and the problem of evil. The problem of evil claims that God’s existence is incompatible with the existence of certain types of evil. One of the most problematic types of evil is gratuitous suffering. This is a type of suffering that seems to be pointless, or not contributive to the greater good. One of Megill and Linford’s main arguments — one that they return to over and over again in their article — is that a life with any degree of suffering, and which also lacks meaning, would consist of gratuitous suffering. If the life lacked meaning then the suffering within it would not be contributing to any larger purpose or good. Consequently, if it consists of any suffering whatsoever, it follows that this suffering would be gratuitous. But given that people do in fact suffer in life, and given that gratuitous suffering is incompatible with God’s existence, it follows that if God exists our lives must have meaning. As the authors themselves put it:

If our lives lack meaning, there would be no greater meaning for our suffering either, and so it would be gratuitous. But then, given that we do suffer, and that God’s existence and gratuitous suffering are not compossible, if God exists, our lives must have meaning. 
(Megill and Linford 2015)

The other point worth noting about this argument is that it is expansive in scope. It is not simply claiming that if God exists some lives must have meaning; it is claiming that if God exists, all lives have meaning. This becomes important below when we consider how this argument provides the basis for a new argument for atheism.


2. Objections and Replies
Let’s now consider five objections to the argument. This follows the discussion in Megill and Linford’s article, but I’m going to number both the objections and replies so that I can plug them into an argument map at the end of this section. As I run through these objections and replies, you will start to see how important the ‘gratuitous suffering’ argument is to their case.

The first objection claims that God need not guarantee that our lives are meaningful; rather he can simply create the conditions in which our lives have meaning and let us exercise our own free will in guaranteeing whether or not they actually have meaning. This is similar to the move made by many theists in the debate about the problem of evil. They claim that God need not guarantee an absence of suffering and evil in the world so long as he provides us with conditions in which we can exercise the great good of free will:


  • (6) Objection: God merely has to create the conditions in which meaning is possible; he need not guarantee that our lives have meaning.


Megill and Linford’s response here is to appeal to the gratuitous suffering problem. They point out that if some lives lack meaning, and if there is suffering in these lives, then that suffering is gratuitous. This would be incompatible with the existence of God as traditionally conceived. Thus, if there is going to be suffering in our lives, God really does have to guarantee that life has meaning. There is a conditional built into this reply: it is only if our lives involve suffering that God must guarantee meaning. If there were no suffering, this would not be a problem. However, this is pretty cold comfort to the theist since in the actual world — i.e. the one we actually live in — our lives do involve suffering. Summing up:


  • (7) If our lives involve any suffering (as they actually do) then God must guarantee meaning in order to ensure that our lives do not involve gratuitous suffering.


(For what it’s worth, I think a theist might be able to craft a response to this along the lines of Anderson’s defence of sceptical theism. I haven’t thought it out in full detail but you can read my critique of Anderson’s defence here.)

The second objection focuses on the distinction between well-being and meaning. As mentioned above, many philosophers think that meaning is distinct from subjective happiness or contentment. Maybe God could exploit this distinction? Maybe he could compensate us for a lack of meaning by providing us with an overabundance of well-being?


  • (8) Objection: God could compensate us for the lack of meaning by providing us with an abundance of well-being.


Megill and Linford think that there are a number of problems with this objection. First, it is not clear that it is conceptually coherent. If a life is devoid of meaning then arguably one cannot be truly happy. Second, it is hard to see why God would actualise an inferior world. If it is possible to ensure that lives have both meaning and happiness, then surely God would actualise those lives over lives with merely superficial happiness. Finally, we have once more the problem of gratuitous suffering: a superficially happy life with no meaning, and a mere tincture of suffering, would involve gratuitous suffering and that would be incompatible with God’s existence.


  • (9) There are three problems with this objection: a happy but meaningless life may be conceptually incoherent; God would not actualise an inferior world; a superficially happy but meaningless life would have to involve no suffering.


The third objection is a little bit more serious. It is similar to the classic objections to the problem of evil. It argues that God simply does not have it in his power to actualise a world that is devoid of meaningless lives (just as some theists maintain that God does not have it in his power to actualise a world that is devoid of suffering). So God is not to blame for the fact that some lives lack meaning.


  • (10) Objection: It is impossible for God to create a world in which all lives have meaning.


Megill and Linford don’t say a whole lot in response to this. They say that it is difficult to see why this is impossible, and I suppose they have a point here. When responding to the problem of evil, theists will typically point to some reason why God has to allow some suffering in the world (such as free will). So I guess you could say that the burden of proof is on the theist in this respect. It is up to them to show why God is justified in creating a few meaningless lives. There is also the danger that any justification they offer would end up being paradoxical. I’ll discuss this in more detail in a moment. The other point Megill and Linford make is to appeal, once again, to the gratuitous suffering argument. But I won’t repeat that anymore.


  • (11) No reason is offered for thinking that it is impossible for God to create such a world. The burden of proof is on the theist.


The fourth objection is a variation of the previous one. It is the sceptical theist response. This will be familiar to anyone who engages with the literature on the evidential problem of evil. The idea is that our minds are cognitively limited. We do not fully understand the relationships between different conditions of value and ultimate meaning. God does. Thus, it could be the case, for all we know, that God has some justification for allowing a few meaningless lives. The difference between this objection and the previous one is that it attempts to rationalise theistic ignorance. So there is some attempt here, however minimal, to discharge the burden of proof.


  • (12) Objection: It could be the case, for all we know, that God has some justification for creating lives that are devoid of meaning.


There are many possible responses to this. One is to highlight the epistemic costs associated with the sceptical theist position. I’ve written a whole series of posts about those costs. I have also published two academic articles about them. Unique to this particular dialectic, however, there is the complaint that any purported justification would be incoherent. I hinted at this above. Now is the time to spell out the argument in full. The idea is that any purported justification for the existence of meaningless lives would, presumably, be to the effect that those lives are necessary for some greater good. But if those lives are necessary for some greater good, it seems to follow that they have some ultimate purpose/value/meaning. Therefore, if God has a justification for them they must be meaningful, which undermines the original objection.


  • (13) There are two problems with this: any purported justification for meaninglessness would imbue the life with meaning; and sceptical theism has other associated epistemic costs.


Megill and Linford discuss one final objection. I’m not going to get into this objection in any real detail though because I think it is an instance of philosophical overkill (i.e. identifying and responding to objections that aren’t really all that threatening just for the sake of being comprehensive). The gist of the objection is that there might be a particular theory of meaning that justifies some meaningless lives. But this looks very similar to the sceptical theism objection — which has already been dealt with — and Megill and Linford only really discuss it so that they can highlight what I take to be an obvious feature of their argument: it makes no appeal to any particular theory of meaning; if it works, then it works for all accounts of meaning (whatever it is that meaning turns out to be).



3. Conclusion: A New Argument for Atheism
That brings us to the end of this post. To briefly sum up, if Megill and Linford are correct, God’s existence would entail that all lives are meaningful. One interesting implication of all this is that the argument just presented can obviously be flipped around into an argument against the existence of God. As follows:


  • (14) If God exists, then all lives have meaning.
  • (15) There is or has been at least one human life that lacked meaning.
  • (16) Therefore, God does not exist.
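For what it’s worth, the logical form here is just modus tollens, so any controversy attaches to the premises rather than the inference. A minimal formal sketch (my own rendering, with hypothetical propositional variables G and M standing in for the two claims):

```lean
-- G : God exists;  M : all lives have meaning. These abbreviations are
-- my own, introduced only to display the validity of the inference.
example (G M : Prop)
    (h14 : G → M)  -- (14) If God exists, then all lives have meaning.
    (h15 : ¬M)     -- (15) Not all lives have meaning.
    : ¬G :=        -- (16) Therefore, God does not exist.
  fun hG => h15 (h14 hG)
```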


Megill and Linford claim that this is a novel argument. It is not simply a rehash of the problem of evil because it is not just about suffering and pain. After all, meaning is distinct from well-being and happiness. I’m not so sure that it is so ‘novel’. I think the problem of evil already encompasses a broader set of disvalues than mere suffering and pain. I wrote a series of posts about this on a previous occasion.

Anyway, let’s quickly analyse the argument. The first premise is just the conclusion to the preceding argument and so should cause no controversy. The second premise is the tricky one. An atheist would need to point to at least one life that lacked meaning. It might be quite difficult to prove this since a theist will, no doubt, always appeal to the possibility of some ultimate meaning. Thus, even if you could point to lots of individual human lives that seem (for all we know) to be devoid of meaning, it is possible for the theist to argue that they all fit into God’s mysterious plan. To make this argument stronger, you would need to cut off this possibility (i.e. insist on a purely secular theory of meaning). That’s what the second part of Megill and Linford’s article tries to do. I’ll discuss that in the next post.

Sunday, August 2, 2015

Did my brain make me do it? Neuroscience and Free Will (2)




(Part One)

Discoveries in neuroscience, and the science of behaviour more generally, pose a challenge to the existence of free will. But this all depends on what is meant by ‘free will’. The term means different things to different people. Philosophers focus on two conditions that seem to be necessary for free will: (i) the alternativism condition, according to which having free will requires the ability to do otherwise; and (ii) the sourcehood condition, according to which having free will requires that you (your ‘self’) be the source of your actions. A scientific and deterministic worldview is often said to threaten the first condition. Does it also threaten the second?

That is what Christian List and Peter Menzies’ article “My brain made me do it: The exclusion argument against free will and what’s wrong with it” tries to figure out. As you might guess from the title, the authors think that the scientific worldview (in particular, the advances in neuroscience) does not necessarily threaten the sourcehood condition. I discussed their main argument in the previous post. To briefly recap, they critiqued an argument from physicalism against free will. According to this argument, the mental states which constitute the self do not cause our behaviour because they are epiphenomenal: they supervene on the physical brain states that do all the causal work. List and Menzies disputed this by appealing to a difference-making account of causation. This allowed for the possibility of mental states causing behaviour (being the ‘difference makers’) even if they were supervenient upon underlying physical states.

If that seems at all confusing, I recommend reading the previous post. I will be taking a lot of the argumentative ground covered in that post for granted here. That’s because the remainder of this post switches the focus from physicalism (a philosophical doctrine) to findings from contemporary neuroscience. List and Menzies argue that many modern-day neuroscientists are sceptical about free will, but that this neuroscepticism exhibits the same flaw they identified in the exclusion argument. I’m not so sure about this. I’m going to try to explain why.


1. List and Menzies’ Interpretation of the Neurosceptical Argument
To start things off, I need to explain how List and Menzies understand the neurosceptical position. A paradigmatic statement of neuroscepticism can be found in Sam Harris’s book Free Will. I quoted this in the previous post, but it is worth repeating here:

’Did I consciously choose coffee over tea? No. The choice was made for me by events in my brain that I, as the conscious witness of my thoughts and actions, could not inspect or influence’ 
(Harris 2012, 7)

List and Menzies use Harris’s work as their main scratching post in their paper, highlighting another section of the book where he refers to free will as an ‘illusion’ because ‘thoughts and intentions emerge from background causes of which we are unaware and over which we exert no conscious control’ (Harris 2012, 12).

Using quotes of this sort as their source, the authors argue that Harris and other neurosceptics rely on the following (oftentimes implicit) argument (numbering continues on from part one):



  • (9) If an agent’s choices and actions are wholly caused by neural states and processes that are inaccessible to his or her consciousness, then these choices and actions are not free.

  • (10) Human choices and actions are wholly caused by neural states and processes that are inaccessible to the agent’s consciousness.

  • (11) Therefore, human choices and actions are not free.



Let’s consider how one might defend the two main premises of this argument.

The first premise states a condition for free will. It holds that causation by consciously accessible mental states is essential to free will. This is very similar to the sourcehood condition that I mentioned above. It is also very similar to the principle used to motivate the exclusion argument that was discussed in part one. If you recall, the opening premise of that argument stated that an agent’s mental states had to cause their behaviour in order for it to be free. The only real difference is that this new argument demands that the mental states be consciously accessible. I’m guessing this means that the agent’s conscious mental states must be causally responsible for their behaviour. I think this is a plausible sufficient condition for free will (or, at a minimum, for responsible behaviour), but I’m not sure if it is necessary. Neil Levy’s recent book argues that it is, but I have not read it yet. List and Menzies think it is plausible, citing some survey evidence from Eddy Nahmias suggesting that most ordinary people think that premise (9) is correct.

Premise (10) is where the discoveries in neuroscience come into play. Neurosceptics typically appeal to a widely known set of evidence suggesting that our behaviour is caused by neural events that are largely beneath or outside our conscious awareness. I’ll mention three such sources of evidence here. This is for illustrative purposes only; it is not intended to be exhaustive. First, there are Benjamin Libet’s famous studies on intention and behaviour. By getting people to perform simple actions and recording associated brainwaves, Libet’s studies found that the conscious intention to act post-dated the neural causation of the action by nearly half a second. These studies have been scrutinised and challenged over the years (I always enjoyed Dennett’s discussion of them). Second, there are the more recent studies by the likes of Haggard and Haynes, which seem to confirm and extend Libet’s results. These studies suggest that neural causes precede conscious awareness by even longer periods of time, perhaps by up to 10 seconds. Third, and finally, there is the work of Daniel Wegner, particularly the work found in his book The Illusion of Conscious Will, which brings together a diverse set of studies, all pointing to the same conclusion: that the conscious will does not direct or control our behaviour. Rather, our consciousness confabulates a mental cause of our behaviour after the fact. This evidence all confirms Harris’s view that we are mere ‘conscious witness[es]’ to the true, underlying, neural causes of our actions.

If this is all correct, then the neurosceptical position is confirmed.


2. List and Menzies' Critique of Neuroscepticism
But, obviously, List and Menzies do not see it that way. They argue that the neurosceptics make the same mistake as the physicalists. This is unsurprising since most neurosceptics are resolute physicalists, but it is worth going through the mistake to see exactly how it applies to the neurosceptical position. The main flaw comes with the motivating principle stated in premise (9). This premise appears to claim that the existence of a sufficient neural cause rules out the existence of a mental cause. But this is wrong. Just as the physicalists mistakenly assumed that sufficient lower-level physical causes ruled out higher-level mental causes, so too do the neurosceptics mistakenly assume that sufficient lower-level neurological causes rule out higher-level mental ones.

To be more precise, the mistake lies in the assumption that a sufficient neurological cause rules out a higher-level difference-making cause. It could well be that for every single action there is a sufficient neurological cause, but also a difference-making mental cause. Remember the example from the previous post. Suppose you have a flask of boiling water and the flask cracks. What is the cause of the cracking? You could attribute it to a particular arrangement of the (expanding) molecules of water, or to the act of boiling. The particular arrangement of molecules is a sufficient cause of the cracking; but the act of boiling is the difference-maker. This is because boiling could have led to a different arrangement of water molecules that was also sufficient for cracking. It is the true difference-maker because its presence or absence makes a difference to the outcome across an appropriate set of possible worlds. The particular arrangement of water molecules does not.
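To make the difference-making test concrete, here is a toy computational sketch of the flask example. The possible-worlds encoding and the helper function are my own illustration, not machinery from List and Menzies’ paper; they simply check the two counterfactual conditionals across a small set of worlds.

```python
# A toy possible-worlds model of difference-making causation, loosely
# based on the boiling-flask example. Each "world" records which
# candidate causes occur and whether the effect occurs.
worlds = [
    {"boiling": True,  "arrangement_A": True,  "cracks": True},   # the actual world
    {"boiling": True,  "arrangement_A": False, "cracks": True},   # boiling, different molecular arrangement
    {"boiling": False, "arrangement_A": False, "cracks": False},  # no boiling, no cracking
]

def difference_maker(cause, effect, worlds):
    """C is a difference-maker for E iff, across the relevant worlds,
    E occurs whenever C does (positive conditional) and E fails
    whenever C fails (negative conditional)."""
    positive = all(w[effect] for w in worlds if w[cause])
    negative = all(not w[effect] for w in worlds if not w[cause])
    return positive and negative

print(difference_maker("boiling", "cracks", worlds))        # True
print(difference_maker("arrangement_A", "cracks", worlds))  # False
```

On this toy model, boiling passes both conditional tests while the particular molecular arrangement fails the negative one, mirroring the verdict in the text.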

The key point is that the same could be true of the relationship between sufficient neural causes and supervenient mental states. Neuroscientists might discover that a particular pattern of neuronal firing is sufficient for the act of raising one’s hand. But it is possible that the same act could be caused by a slightly different pattern of neuronal firing. The only thing shared by the two distinct patterns of neuronal firing might be a supervenient mental state (e.g. the intention to raise one’s hand). This mental state would then be the true difference-maker.

The upshot of this is that premise (9) would need to be reformulated if the neurosceptical position were to be persuasive. List and Menzies suggest the following reformulation, one that respects the difference-making account of causation (note I have amended this from their original discussion):


  • (9*) If an agent’s choices and actions have a difference-making cause at the neuronal level, and they do not have any other difference-making cause at the mental level occurring at the same time, then the agent’s actions and choices are not free.


With this reformulated premise in place, the debate switches to premise (10), or rather to a suitably reformulated version of that premise. This one would claim that neuroscientific evidence points to difference-making causes at the neuronal level, not co-occurrent with difference-making mental causes. But List and Menzies reject this premise. In doing so they make two points, one conceptual and one empirical.

The conceptual point focuses on what it takes for something to be a difference-making cause. It requires the satisfaction of two counterfactual conditionals. First, there is the positive conditional ‘if C were to occur, E would occur’; second, there is the negative conditional ‘if C were not to occur, E would not occur’. List and Menzies’ argument is that mental causes will tend to satisfy these two conditional tests, whereas neuronal causes will not. As they put it themselves:


[W]hen we understand causation as difference-making, we are likely to conclude that the cause of an agent’s action is not the agent’s brain state, but his or her mental state. Only the supervenient mental state, but not the subvenient brain state, may satisfy the two conditionals for difference-making. 
(List and Menzies 2014)


They then illustrate this conceptual point in more detail by drawing out a map of possible worlds and the different possible causes that one can identify across these possible worlds. This is supposed to show how, under a difference-making account of causation, higher level mental causes can potentially exclude lower-level neural causes. They call this their ‘downward exclusion’ result. I find this slightly redundant, however, as the map is their own construction and merely serves to repeat their main conceptual point, which is that mental states are more likely to be the difference-makers.

This brings us to their empirical point. They accept that whether neural or mental causes are the difference-makers is an empirical question. And they also accept that a suitably constructed psychological study could tell us which is the case. But they say nothing more, suggesting that they think no such study currently exists (or, at least, the current set of studies are not decisive one way or the other).


3. Criticisms and Concluding Thoughts
What are we to make of all this? As mentioned the last day, I think List and Menzies are broadly correct in their critique of the exclusion argument from physicalism. But I think they are much less persuasive in their dismissal of neuroscepticism. It is not that I disagree with them entirely, or that I am a resolute neurosceptic, it is just that their interpretation of the neurosceptical position seems remarkably uncharitable and their engagement with the empirical evidence insufficient.

Their lack of charity stems from their original formulation of premise (9). They interpret the neurosceptic as holding that the existence of neural causes excludes the existence of mental causes. Perhaps there are some neurosceptics who rely on this principle. But as best I can tell, the neurosceptical position advanced on foot of the studies by Libet, Haynes and Haggard (whom List and Menzies explicitly reference) and on foot of Wegner’s work (which they do not reference) has nothing to do with the alleged sufficiency of neural causation. It has everything to do with the timing of conscious mental states and the timing of unconscious neural events. The claim in these studies is always that the mental states come after the fact: people confabulate and reinterpret their behaviour as having a mental cause when it really doesn’t. In other words, I think the neurosceptics already embrace something akin to premise (9*). They think that neural causes (or external causes) are the difference-makers, not mental states, because the neural causes precede and initiate actions, whereas the mental states do not.

This is also why I think their engagement with the empirical evidence is insufficient. At the end of the article they appeal to the possibility of psychological experimentation helping us to work out whether neural causes or mental causes are the difference-makers. But it seems to me that the experimental evidence from the likes of Libet, Haynes and Wegner already helps us in this regard (maybe not directly, but certainly indirectly). For example, my interpretation of the evidence is that Libet-style experiments do not undermine the existence of difference-making mental causes because conscious mental states always seem to be required for the performance of actions in those experiments (the experimental set-up is such that the subject is primed well in advance to consciously will an action). My interpretation of the Wegner-like evidence is slightly different. You would have to read Wegner’s work to get a fuller picture (and I confess it has been several years since I read it myself). Nevertheless, it does seem to me like Wegner identifies many cases in which conscious mental states are not the difference-makers. For instance, patients with hemiplegia often perform actions with one side of their bodies that they subsequently deny or confabulate a reason for performing. In these cases, the hemiplegia seems to be the difference-maker, not the conscious mental state. Admittedly, these kinds of cases are exceptional as they involve some sort of pathology. The question is how far similar causal sequences creep into our everyday lives. I am not sure.

Anyway, those are some of my quick reflections on the piece. Obviously, a much more systematic review of the empirical evidence, with List and Menzies’ causal principle in place, is needed. More experimental research would help too.

Thursday, July 30, 2015

Did my brain make me do it? Neuroscience and Free Will (1)




Consider the following passage from Ian McEwan’s novel Atonement. It concerns one of the novel’s characters (Briony) as she philosophically reflects on the mystery of human action:

She raised one hand and flexed its fingers and wondered, as she had sometimes done before, how this thing, this machine for gripping, this fleshy spider on the end of her arm, came to be hers, entirely at her command. Or did it have some little life of its own? She bent her finger and straightened it. The mystery was in the instant before it moved, the dividing moment between not moving and moving, when her intention took effect. It was like a wave breaking. If she could only find herself at the crest, she thought, she might find the secret of herself, that part of her that was really in charge.

Is Briony’s quest forlorn? Will she ever find herself at the crest of the wave? The contemporary scientific understanding of human action seems to cast this into some doubt. A variety of studies in the neuroscience of action paint an increasingly mechanistic and subconscious picture of human behaviour. According to these studies, our behaviour is not the product of our intentions or desires or anything like that. It is the product of our neural networks and systems, a complex soup of electrochemical interactions, oftentimes operating beneath our conscious awareness. In other words, our brains control our actions; our selves (in the philosophically important sense of the word ‘self’) do not. This discovery — that our brains ‘make us do it’ and that ‘we’ don’t — is thought to have a number of significant social implications, particularly for our practices of blame and punishment.

Or so a popular line of argument goes. Is this line of argument any good? Christian List and Peter Menzies’s article, ‘My brain made me do it: The exclusion argument against free will and what’s wrong with it’, claims that it is not. In this two-part series, I want to closely examine their arguments. Although I sympathise with parts of their critique, I think their attempt to apply this critique to the recent debates about neuroscience and responsibility is somewhat misleading. I’ll explain why I think this in part two. For the remainder of this part, I’ll focus on their primary argument.


1. The Challenge from Physicalism and Neuroscience
What does it take to be free? Two conditions are said to be important. The first is the alternativism condition, according to which we must be capable of doing otherwise in order for actions to be free. The second is the sourcehood condition, according to which we must be the source of our action in order for it to be the product of our free will. Both conditions are threatened by popular philosophical theses. The thesis of determinism threatens the alternativism condition, and the thesis of physicalism threatens the sourcehood condition.

We could talk about the impact of determinism on the alternativism condition, but we won’t. Instead, we will focus on the impact of physicalism on the sourcehood condition. In particular, we will focus on what List and Menzies call the ‘exclusion argument’ against free will. The main substance of their article is directed towards this argument, so we need to understand it if we are to understand the article. The argument works a little something like this (note: the numbering of the premises does not follow the numbering in List and Menzies’s article — this might make cross-comparison a little awkward):


  • (1) Someone’s action is free only if it is caused by the agent, particularly by the agent’s mental states, as distinct from the physical states of the agent’s brain and body (call this the ‘Causal Source Thesis’)
  • (2) Physicalism rules out any agential or mental causation, as distinct from causation by physical states of the agent’s brain and body (call this the ‘Purported Implication of Physicalism’)
  • (3) Therefore, there can be no free actions in a physicalist world (call this the ‘Source-Incompatibilist Conclusion’)



The argument is a little underwhelming at first glance. Although we might be inclined to accept premise (1), premise (2) is going to be unconvincing to many physicalists. They will accept that the mental and the physical are one and the same thing (that mental states are constituted by particular patterns of brain states), but they will deny the implication that this rules out agential causation. They will just say that, provided the actions are caused by the right kinds of brain states (i.e. the ones that constitute the right kinds of mental states), there is agential causation and hence the sourcehood condition is satisfied. It does not matter that there is no ‘distinct’ class of mental causation.

This is where the exclusion argument comes into play. The exclusion argument derives from the work of Jaegwon Kim, a famous proponent of physicalism. Kim argues that physicalism entails mental supervenience (i.e. the mental supervenes upon the physical), and that mental supervenience entails epiphenomenalism (i.e. that the mental has no real causal role in our actions). This means that there is no mental causation on physicalism, which means that premise (2) is true.

As I mentioned above, List and Menzies direct most of their critique against this exclusion argument. They identify two variations upon the argument, and argue that both rely on a mistaken understanding of agential causation. Once the correct account of agential causation is substituted in, the argument becomes less plausible. There is, consequently, no reason to suspect that physicalism rules out mental causation of the appropriate kind. List and Menzies also try to argue that something very much akin to the exclusion argument underlies much of the current ‘my brain made me do it’ rhetoric in the neuroscience community. Consider Sam Harris’s statement, from his 2012 book Free Will:

‘Did I consciously choose coffee over tea? No. The choice was made for me by events in my brain that I, as the conscious witness of my thoughts and actions, could not inspect or influence’ 
(Harris 2012, 7)

There is something exclusion-argument-esque about this, for sure. But, although I’m inclined to agree with List and Menzies in their critique of the physicalist challenge to sourcehood, I’m less inclined to agree with them about the neuroscientific challenge. I’ll get to that in the next post.


2. Two Versions of the Exclusion Argument
Before we do anything else, we need to gain a deeper understanding of the exclusion argument. List and Menzies maintain that this argument comes in two major forms. The first, simpler form relies on a straightforward physicalist causal closure principle (i.e. on a principle claiming that the physical world is causally closed: physical causes are sufficient for all physical effects). This will be familiar to anyone who has debated the merits of Cartesian dualism vis-a-vis physicalism. The second, more complex form relies on a slightly more general claim about the nature of causation and causal sufficiency.

The first version of the argument works like this:


  • (4) An agent’s action is free only if it is caused (in a relevant sense of causation simpliciter) by the agent’s mental states.
  • (5) Any effect that has a cause has a sufficient physical cause (i.e. a causally sufficient physical condition) occurring at the same time.
  • (6) An agent’s mental states are not identical to any physical states, but rather supervene on underlying physical states.
  • (7) If an effect has a sufficient cause C, it does not have any cause C* (simpliciter) distinct from C, occurring at the same time (except in cases of overdetermination).
  • (8) Therefore, there are no free actions.


The second version of the argument simply changes premise (5) to the following:


  • (5*) Causation implies causal sufficiency.


The conclusion then follows in the same manner, provided you also accept this lemma:

Lemma: If C* is causally sufficient for some effect E, and C* supervenes on C, then C is causally sufficient for E.

This lemma is easily proved because the supervenience relationship is a necessary one. In other words, if C* supervenes on C, then whenever C is present, so too is C*. It follows then that if C* is sufficient for E, then C is also sufficient for E. If you are confused, see my previous post on the nature of the supervenience relationship.

List and Menzies are at pains to point out that most of the premises of both versions of the argument are plausible. I won’t explore the matter in quite the same detail as they do, but I will give a quick run-down of the salient points.

I’ll start with premise (4). This premise looks to be a pretty uncontroversial statement of the sourcehood condition: in order to freely will an action you (your mental agency) must be the source of that action. This premise should be acceptable to most people, irrespective of their philosophical worldview.

Premises (5) and (5*) are slightly more controversial, but still highly plausible. Premise (5) simply states a standard physicalist account of causal closure. It is also quite weak in its claims. It states only that if an event has a cause, then physical causes are sufficient to produce that event. This is consistent with the existence of some non-physical events with no causes. It should, consequently, be acceptable to virtually all physicalists. Premise (5*) is even more relaxed in its claims. It doesn’t appeal to physicalism at all. It states that if an event C causes an event E, then C is causally sufficient for E. This is potentially compatible with all versions of causal determinism. The premise could also be refined so as to incorporate a probabilistic version of causation. Still, despite its more relaxed nature, there is something worth disputing. Everything depends on how you understand the concepts of causation and causal sufficiency. List and Menzies think that an incorrect understanding of both concepts permeates the exclusion argument. We will return to this problem below.

Premise (6) requires some commitment to non-reductive physicalism. That is, to the view that mental states depend on (supervene on) physical states but are not identical or reducible to them. This, of course, means that reductive physicalists and non-physicalists have a route out of the argument. That’s to be expected. But it is worth noting that non-reductive physicalism has tended to be the dominant position in the philosophy of mind for the past century or so. It is also the view that seems most at home with a scientifically oriented worldview, which is the sort of worldview shared by List and Menzies, and the neurosceptics.

That leaves us with premise (7). This is the most problematic one, according to List and Menzies, because it assumes an incorrect theory of causation.


3. A Difference-Making Account of Causation
Let’s try to unpack their critique in more detail. List and Menzies distinguish two main accounts of causation:

Production-Causation: This is a metaphysical account of causation according to which causes produce effects via some metaphysical source. As List and Menzies describe it ‘[c]ausation here involves a causal ‘oomph’, i.e. the production of an outcome through some causal force or power’ (List and Menzies 2014).

Difference-Making Causation: This is a probabilistic or counterfactual theory of causation. It says that to be the cause of an effect is to make some sort of difference to the occurrence of that effect across possible worlds. More precisely, it holds that C causes E if, and only if, two conditionals are satisfied:
The Positive Conditional: If C were to occur, then E would occur.
The Negative Conditional: If C were not to occur, then E would not occur.

List and Menzies argue that the difference-making account is much more consistent with the scientific worldview. The kinds of experimental evidence of causation that scientists discover usually involve playing around with the conditionals in the manner envisaged by the difference-making account (e.g. the randomised placebo-controlled trial in medicine). Furthermore, the production account seems to require a metaphysical ‘leap of faith’.

In addition to this, they argue that the difference-making account is the most natural way to understand agential causation. In other words, to say that an agent mentally causes an event is to say that the agent (and the relevant mental states) made a difference to that event. When the relevant mental state is present, so too is the effect, and when it is not, neither is the effect.

The crucial thing about the difference-making account of causation is that it casts premise (7) into doubt. This is because the difference-making account allows for cases in which certain microphysical states might be the production-causes of an event, while higher-level, supervenient events might be the difference-making causes of that event. Here’s an example. Suppose you have a flask of boiling water that breaks because of the pressure inside. The movements of the particles (or some subset of particles) within the flask might be causally sufficient for the break. These microstates would then be the production-causes of the event. But it is the boiling of the water (which supervenes on various microstates) that is the difference-maker. It satisfies the positive and negative conditionals. As List and Menzies point out:

If the boiling had occurred, but had been realized by a slightly different microstate, the flask would still have broken, and if the boiling had not occurred, the flask would have remained intact…Although it is true that if the microstate in the flask had been exactly as it was, the flask would be broken, it is not true that if the microstate had been slightly different, the flask would have remained intact. The boiling could have been realized in many different ways, through different configurations of molecular motion, and would still have led the flask to break. 
(List and Menzies 2014)

In other words, the boiling is supervenient upon the underlying microstates, but it is multiply realisable by those microstates. This means that it (not the microstates) is the true difference-maker. The same thing could then hold true for mental causation. Mental states could be multiply realisable. Different physical states of the brain could give rise to the same mental event. Where those different physical states give rise to the same event, we can say that the supervenient mental state is the true difference-maker. The result is that the exclusion argument fails: if we adopt a difference-making account of causation, there is no reason to think that physicalism rules out the appropriate style of mental causation.
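The difference-making test lends itself to a concrete sketch. The following is my own toy model, not anything from List and Menzies's paper: the list of possible worlds and the labels ('boiling', 'microstate_1') are invented purely for illustration.

```python
# Toy possible-worlds check of the two difference-making conditionals.
# Each world records which conditions obtain and whether the flask breaks.
worlds = [
    {"conditions": {"boiling", "microstate_1"}, "breaks": True},   # the actual world
    {"conditions": {"boiling", "microstate_2"}, "breaks": True},   # boiling realised differently
    {"conditions": set(),                       "breaks": False},  # no boiling at all
]

def is_difference_maker(cause, worlds):
    """C is a difference-maker for the effect iff:
    positive conditional: the effect occurs in every world where C occurs;
    negative conditional: it fails to occur in every world where C does not."""
    positive = all(w["breaks"] for w in worlds if cause in w["conditions"])
    negative = all(not w["breaks"] for w in worlds if cause not in w["conditions"])
    return positive and negative

# The multiply-realisable boiling passes both conditionals; the specific
# microstate fails the negative conditional (the second world still breaks).
print(is_difference_maker("boiling", worlds))       # True
print(is_difference_maker("microstate_1", worlds))  # False
```

The same schema captures the mental-causation case: substitute a mental state for the boiling, and its various neural realisers for the microstates.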

I’m broadly in agreement with this line of argument, though I would note that much depends here on how fine-grained or coarse-grained we are in our understanding of what constitutes a common or distinct event or mental state. Daniel Dennett’s paper ‘Real Patterns’ is quite good on this topic, for those of you who are interested.

Right, that’s it for this post. To briefly recap, the exclusion argument claims that physicalism rules out free will because, on physicalism, we are not the sources of our actions. But, as we have just seen, this argument assumes an implausible theory of mental causation. If we adopt a difference-making account, then there is no reason why supervenient mental states cannot count as the causes of our actions. How does this affect the debate about neuroscience and free will? We’ll look into that in part two.

Monday, July 27, 2015

The Psychology of Revenge: Biology, Evolution and Culture


The Murder of Agamemnon - A Revenge Killing?


“Revenge is a dish best served cold…” 
(Ancient Klingon Proverb)

When I was younger I longed for revenge. I remember school-companions doing unspeakably cruel things to me — stealing my lunch, laughing at my misfortune and so forth (hey, it all seemed cruel at the time). I would carefully plot my revenge. The revenge almost always consisted of performing some similarly unspeakably cruel act towards them. Occasionally, my thoughts turned to violence. Sometimes I even lashed out in response.

I’m less inclined towards revenge these days. Indeed, I am almost comically non-confrontational in all aspects of my life. But I still feel the pangs. When wronged, I’ll briefly get a bit hot under the collar and my thoughts will turn to violence once more. I’ll also empathise with the characters in the innumerable revenge narratives that permeate popular culture, willing them on and feeling a faint twinge of pleasure when they succeed. I don’t think I ever act on the impulses anymore, but I have come close. And I’m sure everyone has had similar feelings.

But why is this? Why do we so frequently seek revenge? And how can we stop ourselves from acting on the impulse? I want to look at some potential answers to those questions today. In particular, I want to cover three related topics. First, I want to consider the psychology and neurobiology of revenge, focusing on why revenge can oftentimes feel pleasurable. Second, I want to consider the supposed ‘rationality’ of revenge, i.e. why the instinct for revenge is sometimes a good thing, and why the instinct may have evolved. And third, I want to examine the various methods that can be used to minimise the amount of vengeance being sought in society at any given time.

In doing all this, I’ll be drawing heavily from the discussion in Steven Pinker’s book The Better Angels of our Nature, and from the various studies cited therein.


1. The Mechanics of Revenge
One thing that is noticeable about revenge is how common it is. Literary classics of the distant and recent past often extol its virtues in poetic terms; and it is a frequent motive for state and non-state violence (consider the use of reprisals in international conflicts). In addition to this, Pinker, following work by McCullough and Daly and Wilson, suggests that blood feuds — cases in which one tribe/gang kills the members of a rival tribe/gang in retaliation for a similar attack on themselves — are endorsed by around 95% of the world’s cultures.

The commonality of revenge suggests that there is something deep within the architecture of the typical human brain that facilitates it. This seems to be borne out by a variety of studies. For one thing, it is easy enough to provoke people into seeking revenge in simple psychological experiments. Once more citing the work of McCullough, Pinker mentions studies done on college students (as pretty much all psychological experiments are…) in which the students are first given an insulting evaluation written by a fellow student, and then given the opportunity to punish the evaluator in a variety of ways (electric shocks, blasts with an air horn). It is very easy to induce students to engage in such revenge attacks.

So which brain systems undergird this thirst for revenge? Pinker mentions two. The first is the so-called Rage Circuit. This is a pathway linking the midbrain to the hypothalamus and amygdala. The rage circuit works by receiving pain signals from other parts of the nervous system and then responding, rapidly, with aggressive behavioural patterns. If activated, it provokes an animal to lash out at the nearest available victim. Jaak Panksepp performed experiments on the rage circuits of cats. The experiments involved activating the rage circuit with an electrical current. This provoked an instantaneous reaction from the cat. It would leap towards Panksepp with its claws and fangs bared, while hissing and spitting. It is likely that the thirst for revenge starts with the rage circuit: when we are hurt, we have an instant urge to lash out.

But it doesn’t end there. It is known that the stimulation of the rage circuit is unpleasant and animals will often work to switch it off. But the desire for revenge can linger. The reason for this seems to be that other brain systems support the quest for revenge. In particular, there is the so-called ‘Seeking’ system, named by Panksepp. This is a network within the brain that facilitates reward and pleasure-seeking behaviour and incorporates the mesolimbic and mesocortical dopamine systems. You have probably come across some description of them before. The original experimental work on them involved rats placed in Skinner boxes. Every time the rats pressed a lever in the box they would stimulate their dopamine systems. It was found that rats would do so until they dropped dead from exhaustion. For a long time, this was thought to provide the neurobiological basis for addiction, although nowadays scientists realise that addiction is a more complex phenomenon.

Anyway, the important point here is that revenge seems to activate the seeking system. People appear to crave revenge, hoping that it will prove satisfying and rewarding. Studies done by Dominique de Quervain and his colleagues scanned the brains of men who had been wronged in a simple trust game (they entrusted another with some money and that other kept it for himself). The men were given the opportunity to punish the wrongdoer at some cost to themselves. It was found that part of the striatum (a key component of the brain’s seeking system) lit up as they pondered the opportunity, and that the more it lit up, the more likely the men were to punish the others. This seems to indicate that reward seeking is part of the motivation for revenge.


2. The Rationality of Revenge
The commonality of revenge, and the fact that people seem to crave it, poses another question: why have we evolved (or been enculturated) to pursue revenge? After all, there is something of a paradox underlying our lust for revenge. It is a costly endeavour, and no matter how much pain we inflict on the wrongdoer, we can never really correct for the historical wrongdoing that provokes our revenge. And yet revenge persists.

Pinker favours a ‘deterrence’ explanation for revenge. We seek revenge, and derive pleasure from it, because it is an effective means of deterring would-be wrongdoers. Now, on a previous occasion, I discussed a whole range of psychological evidence suggesting that people’s punishment-related behaviours did not, in fact, follow the logic of deterrence. Au contraire, those studies suggested that people were natural-born retributivists: they sought revenge because they felt it was important for people to get their ‘just deserts’, and not because it would deter other wrongdoers. But the contradiction between these experimental findings and Pinker’s preferred explanation is more apparent than real. The studies discussed in that earlier post focused on the proximate psychological causes of revenge, i.e. on what best explained individual judgments and patterns of behaviour. Pinker’s explanation focuses on the ultimate societal causes of revenge, i.e. on what best explains the persistence of revenge in spite of its costly nature. His claim is that deterrence is the best ultimate explanation for this persistence. That is perfectly consistent with the claim that most individuals follow a retributivist (non-deterrentist) logic.

What evidence can be adduced in favour of the deterrence explanation? Pinker discusses two main pieces. Both come from studies of iterated prisoner’s dilemmas (IPDs) (note: I am not going to explain what the PD or IPD is here because I have discussed it on previous occasions - the important point is that PDs are thought to provide a good model for many social dilemmas). The first piece of evidence is largely theoretical, and focuses on computer-based simulations of IPDs. These computer-based simulations seem to confirm the long-term effectiveness of vengeance in achieving deterrence. The second is largely experimental, and focuses on how real people behave in lab-based IPDs. These also seem to confirm both the willingness to seek revenge and its effectiveness. (You may dispute my calling the computer-based simulations ‘theoretical’ as opposed to ‘experimental’ evidence. I guess they are a type of experiment, but they are experimental tests of highly formalised strategies, not tests of the behaviour of real people.)

The computer-based simulations of IPDs are fascinating, and have generated a rich literature over the years. As you probably know, the standard PD involves two players, each faced with two choices: cooperate or defect. Collectively, the best strategy is to cooperate; but, individually, the best strategy is to defect (it dominates all other choices). But this is only true if the PD is a one-off. If the players repeatedly interact in PD-style games, over multiple rounds and with different opponents, then other strategies can prevail. This is the key insight from the computer-based simulations. One of the earliest, and most enduring, findings from those simulations was that a simple programme called TIT FOR TAT could beat out most competitors in an IPD tournament. The TIT FOR TAT programme embodied the logic of deterrence-based revenge. It involved cooperation on the first round of the tournament, and then a change in subsequent rounds, depending on what the opponent did in the previous round. Thus, for example, if the opponent defected in the first round, TIT FOR TAT would defect in the second round; if the opponent cooperated in the second round, TIT FOR TAT would switch back to cooperation in the third round; and so on. The idea is that this models deterrence-based revenge because it rewards and punishes opponents with a view to changing outcomes in future rounds.
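For the curious, the basic mechanics are easy to reproduce. Here is a minimal sketch using the conventional payoff values (3 for mutual cooperation, 1 for mutual defection, 5 and 0 for the lone defector and the sucker respectively); the function names are my own, not anything from the tournament literature.

```python
# Standard prisoner's dilemma payoffs: (my move, their move) -> my score.
PAYOFFS = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opponent_history):
    """Cooperate on the first round; thereafter copy the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    """Run an iterated PD and return the total score of each player."""
    hist_a, hist_b = [], []  # each list records the *opponent's* past moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_a), strategy_b(hist_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

# Two TIT FOR TATs lock into mutual cooperation (600 points each over 200
# rounds), while a persistent defector gets punished from round two onwards.
print(play(tit_for_tat, tit_for_tat))    # (600, 600)
print(play(tit_for_tat, always_defect))  # (199, 204)
```

Note that ALWAYS DEFECT narrowly beats TIT FOR TAT in this head-to-head pairing; TIT FOR TAT's tournament success comes from the high mutual-cooperation scores it racks up against other nice strategies across the whole population of opponents.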

The success of TIT FOR TAT in IPDs is attributed to the fact that it is nice, clear, retaliatory and forgiving. But TIT FOR TAT is not an unbridled success. One difficulty is that it can easily degenerate into an endless cycle of defection (sometimes called a ‘death spiral’), particularly if one TIT FOR TAT is playing against another TIT FOR TAT and they happen to first interact on a round when they are both playing ‘defect’. Alternative strategies can be more effective in the right environments. For instance, GENEROUS TIT FOR TAT, which randomly forgives some defections instead of retaliating, or TIT FOR TWO TATS, which avoids immediate retaliation by waiting to see whether its opponent defects in two successive rounds, or CONTRITE TIT FOR TAT, which tries to correct for its own mistakes, can be more effective.

I could go on about the details and variations, but that would be unnecessary. The important point is that all these strategies incorporate some degree of revenge (and, importantly, forgiveness), and can help to sustain long-term cooperation. This supports the deterrence explanation. I should probably note at this point that after Pinker published his book there was an interesting paper published by Press and Dyson on IPDs. The paper proved that extortionate strategies (called ‘Zero Determinant’ strategies), i.e. ones that weren’t simply vengeful and forgiving, were optimal in some IPDs. There has been much hype about this result, and you can read explanations of it here, but it doesn’t completely undermine the long-term effectiveness of TIT FOR TAT and its variations.

So much for the theoretical bit of evidence, what about the work done on actual human beings? Since the late 1990s, a whole series of studies have been published showing that costly punishment can help to sustain cooperation in repeated PD-style interactions (researchers refer to the phenomenon as 'altruistic punishment'). The most famous study in this vein comes from Fehr and Gachter. The study involved a Public Goods game wherein people were given the opportunity to contribute to a common investment fund (which would benefit them all), or to free ride on the good will of others who invested. If experimental subjects were allowed to punish free riders, free-riding was eliminated over repeated plays of the game. Furthermore, other experiments have found that people are more likely to punish when they think others are watching. This demonstrates a willingness to seek a reputation for revenge in a social setting. This again seems to confirm the deterrence explanation because a reputation for revenge is important for deterrence.
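The incentive structure driving these results can be illustrated with some toy arithmetic, broadly in the style of the Fehr and Gachter design. The exact numbers (an endowment of 20, a multiplier of 1.6, four players, and punishment points that cost the punisher 1 while deducting 3 from the target) are my illustrative assumptions.

```python
ENDOWMENT = 20

def payoffs(contributions, multiplier=1.6):
    """Each player keeps what they don't contribute, plus an equal
    share of the multiplied common pot."""
    share = sum(contributions) * multiplier / len(contributions)
    return [ENDOWMENT - c + share for c in contributions]

print(payoffs([20, 20, 20, 20]))  # full cooperation: 32.0 each
print(payoffs([20, 20, 20, 0]))   # one free rider: 24.0, 24.0, 24.0, 44.0

# Without punishment, free-riding dominates (44 beats 32). But suppose each of
# the three cooperators assigns two punishment points (cost 2 to the punisher,
# deducting 6 from the free rider):
free_rider_net = 44 - 3 * 6  # 26: now worse than the 32 earned by contributing
punisher_net = 24 - 2        # 22: punishment is costly, hence 'altruistic'
print(free_rider_net, punisher_net)
```

The free rider still outearns the punishers within the punished round (26 versus 22), but free-riding no longer beats the 32 a player earns by contributing in a group of cooperators, which is why contributions climb once punishment is available.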

The upshot is that deterrence — and the pursuit of mutually beneficial cooperation — look like reasonable explanations for the long-term persistence of revenge.


3. The Modulation of Revenge
Granting that revenge is common, and occasionally rational, there remains a challenge: how can we ensure that there is not too much of it? It is clear that too much revenge can be destructive. This is obvious to anyone who has lived through seemingly endless cycles of blood-feuding (the real-world equivalent of the TIT FOR TAT ‘death spirals’). It might be trite and simplistic to put it this way, but such cycles seem to be part of the reason for the persistence of sectarian violence in Northern Ireland. Or, at least, it seemed that way to me as a child growing up in the Republic of Ireland.

Is it possible to prevent such destructive cycles of revenge? Would it be possible to create a world in which there was no need to seek revenge, i.e. in which revenge lost its rationality? In his analysis, Pinker identifies five factors which seem to modulate and reduce the need for revenge. I won’t discuss them in too much detail here. Instead, I will simply give short descriptions and links to relevant supporting evidence:

A. The Presence of Leviathan: The Leviathan is, of course, Hobbes’s famous term for the state. The Leviathan effectively functions as a means for outsourcing violence (in particular revenge). We all have Leviathans in our lives. When I was a school-child, I did not necessarily need to lash out at the cruel behaviour of my companions, I could sometimes outsource my revenge to a teacher who could punish the bullies on my behalf. This outsourcing of revenge can have two major benefits. First, the Leviathan can function as a more effective deterrent if it can create the belief that it is ‘all-seeing’ and ‘all-knowing’ (or close enough) and capable of retaliating even if the wrongdoer crushes their victim. Second, the Leviathan may be less prone to the distorting biases that fuel cycles of revenge. It is well-known that victims often overestimate the degree of harm they have suffered, and consequently can punish wrongdoers in excess. Shergill et al performed an experiment in which people placed their finger under a bar that applied a precise amount of force. They were then asked to press down on the finger of another experimental subject with the same amount of force. It was found that they used approximately eighteen times more force than they originally received, highlighting the gap between perceived harm and reality. Pinker refers to this as part of the ‘moralization gap’ and highlights further evidence in support of it. Leviathan, as a third party, may avoid the excesses of this gap.

B. Civic-Mindedness and Perceptions of Governmental Legitimacy: The mere presence of Leviathan is not enough in itself to eliminate destructive cycles of revenge. It is clear that the people who are subjected to the authority of Leviathan must have some degree of civic-mindedness, i.e. must be committed to the institutions underpinning Leviathan and perceive them to be legitimate. Herrmann, Thoni and Gachter performed a cross-cultural study of Public Goods games which highlighted this. They found, somewhat surprisingly, that in some cultures players actually punished people who contributed generously to the public investment fund. This is odd since generous contributors of this sort actually benefitted the group as a whole. When they dug into the data a little deeper, Herrmann, Thoni and Gachter found that a major predictor of this willingness to spitefully punish generous contributors was the degree of civic-mindedness in the respective cultures. In other words, players from cultures in which the commitment to the rule of law was weak (e.g. countries where people didn’t pay taxes, cheated on social welfare payments etc.) were more likely to engage in spiteful punishment.

C. Expanding the Circle of Empathy: This is an obvious one. It is well known that we are more likely to forgive people who fall within our natural circle of empathy (kin, friends, etc.) for their transgressions. This modulates our desire for revenge. Thus, expanding the circle of empathy can help prevent destructive cycles of revenge. The question, of course, is how to do this. Various cultural practices and rituals can create ‘fictive kinships’, which are often effective means of expanding the circle. Religions have been good at this, and often explicitly invoke kinship metaphors (e.g. ‘brothers and sisters in Christ’). But there is a dark side to this too: such practices can create an excessive in-group/out-group mentality, which can in turn fuel revenge and associated forms of violence.

D. Shared Goals: A simple way to overcome excessive in-group/out-group mentalities is to generate common interests, i.e. to make the success of one group dependent on the success of another. A famous experiment to this effect was performed on a group of boys at the Robbers Cave summer camp back in the 1950s. The boys were arbitrarily divided into two separate groups at the start of camp. This generated intense loyalty within the groups and intense rivalry between them, with acts of provocation and retaliation following soon after. But the experimenters found that they could reduce this rivalry by bringing the groups together and forcing them to work together for mutual benefit, e.g. by having them restore the camp’s water supply. The value of such mutual interdependencies is often cited as a major reason why countries that trade with one another are less likely to go to war.

E. Creating a Perception of Harmlessness: A final way to reduce destructive cycles of revenge is to cultivate a reputation for non-violence, that is, to signal to the other side that you are not going to continue with a destructive conflict. Apologies and reconciliation events are central to this, but apologies are often deemed ‘cheap talk’: they are easy to make and easy to break. There is some suggestion that physiological responses like blushing are a way in which evolutionary forces made apologies into costly, and hence more credible, signals. There is also evidence from the study of international and civil conflicts that apologies and reconciliation events are more likely to work when they are costly, involve some symbolic (but incomplete) justice, and involve participants with some shared history. The work of Long and Brecke is the key source here.

I have illustrated these five modulators in the diagram below.





4. Conclusion
To briefly sum up: revenge seems to be common, occasionally rational, and capable of being reduced. Its commonality is illustrated by its near-universal endorsement, and the ease with which it can be provoked. It seems to be undergirded by two major brain systems: the Rage circuit, which facilitates rapid violent responses to perceived harm, and the Seeking circuit, which facilitates reward-seeking behaviours. The rationality of revenge is illustrated by its utility as a deterrence mechanism in iterated versions of the prisoner’s dilemma. And the possibility of reducing destructive revenge is illustrated by the five factors listed above.
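That deterrence point can be made concrete with a small simulation. The following is a standard textbook setup (my own sketch, not drawn from any of the sources discussed above): a tit-for-tat player, who retaliates once and then forgives, sustains cooperation with its own kind and quickly punishes an unconditional defector, so the credible threat of revenge pays off in the long run.

```python
# Iterated prisoner's dilemma with the standard payoff values:
# T(emptation)=5, R(eward)=3, P(unishment)=1, S(ucker)=0.
PAYOFFS = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
           ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's last move."""
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []   # each records the *opponent's* past moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_a), strategy_b(hist_b)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (30, 30)
print(play(always_defect, tit_for_tat))  # one exploit, then punishment: (14, 9)
```

Over ten rounds, two tit-for-tat players earn 30 points each, while the unconditional defector earns only 14 against tit-for-tat: one round of exploitation followed by nine rounds of mutual punishment. The willingness to retaliate is precisely what makes cooperation stable.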