Friday, December 16, 2022

102 - Fictional Dualism and Social Robots



How should we conceive of social robots? Some sceptics think they are little more than tools and should be treated as such. Others are more bullish on their potential to attain full moral status. Is there some middle ground? In this episode, I talk to Paula Sweeney about this possibility. Paula defends a position she calls 'fictional dualism' about social robots. This allows us to relate to social robots in creative, human-like ways, without necessarily ascribing moral status or rights to them. Paula is a philosopher based at the University of Aberdeen, Scotland. She has a background in the philosophy of language (which we talk about a bit) but has recently turned her attention to the applied ethics of technology. She is currently writing a book about social robots.

You can download the episode here, or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services.




Tuesday, November 29, 2022

Debating Meritocracy: Arguments For and Against




Note: This article is, essentially, a set of expanded notes from a class I taught about debating meritocracy.

In 1958, Michael Young — now better known as the father of the execrable Toby Young — published The Rise of the Meritocracy. Misunderstood in its own time, the book is a dystopian critique of a meritocratic society. It is set in the future. The year 2034 to be precise (still the future as I write). It is a retrospective history, told from that future, of how meritocracy took root in the UK and how it became a new class system, replacing the old one based on accident of birth. The gist of the critique seems to be that we might think meritocracy is justified and better than the old system (and in many ways it is) but there is a danger that it will end up creating a new, unequal social order.

I’ll be honest. I’ve never read Michael Young’s book. I only know of its contents second-hand. But I recently came across it, again, when reading Adrian Wooldridge’s book The Aristocracy of Talent. Wooldridge’s book is a full-throated defence of meritocracy. It is primarily a historical overview of how meritocratic systems came into popularity, but it also deals with contemporary critiques of meritocracy — particularly those from left-leaning authors like Michael Sandel — and concludes with an extended argument in favour of it.

As with all good books, Wooldridge’s provokes reflection. I don’t know where I stand on meritocracy. I can see its advantages, certainly when compared with historical practices of nepotism and patrimony (though, to be clear, neither of these practices is entirely ‘historical’). But I can also see some of its dangers, including the one highlighted by Young’s dystopia.

In the remainder of this article, I want to review some of the arguments for and against meritocracy. My goal is not to provide a definitive evaluation of those arguments. It is, rather, to clarify the terms of the debate. This should be of interest to anyone who wants to know what the typical arguments are. The analysis is inspired by my reading of Wooldridge, but I do not intend to offer an extended critique or commentary on Wooldridge’s book.

I start, as ever, by clarifying some of the concepts at stake in the debate.


1. Meritocracy and a Toy Model of Society

One of the easiest ways to think about equality and social justice is to create a toy model of society. The diagram below provides one such toy model. I’ve used it on previous occasions.

At the bottom, you have the members of society. They are people defined by a range of characteristics. These could include their talents and abilities (raw intelligence, virtues, physical prowess, emotional intelligence etc) as well as other social and biological traits (race, ethnicity, religious beliefs and so on). It is probably impossible to list the full set of defining characteristics. You can slice and dice people into lots of different categories, but you get the basic idea.

At the top of the diagram there are social outcomes. These are loosely defined to include jobs, educational status, income level, well-being, health and so on. Basically, any outcome variable in which you happen to be interested can be classified as a social outcome. Like personal characteristics, outcomes are not neat and discrete. Many outcomes overlap and intersect. Similarly, outcomes vary across a person’s lifetime. If you look at my income bracket right now, it’s a lot different from what it was when I was in my twenties.

In the middle of the diagram there are gatekeepers. These are people or social institutions that control or influence the access to social outcomes. They could include educational institutions, doctors, judges, job interviewers and so on.



In an ideal social order, the system for allocating people to different social outcomes would be fully morally justified and non-arbitrary. Everyone would have an equal opportunity to pursue their preferred social outcomes, and they would not be denied access to those outcomes for irrelevant reasons. The problem, of course, is that people disagree about what counts as a morally justified system of social allocation. For example, many people believed, historically, that it was entirely appropriate to allocate on the basis of race and gender. Nowadays, we think this is inappropriate. Some people think that in order to correct for historically biased forms of social allocation we need to engage in reverse discrimination or affirmative action. This, somewhat paradoxically, means that we should pay attention to characteristics such as gender and race, at least temporarily, in order to achieve a more just system.

I am not going to be able to do justice to the complexity of these debates in this article. Suffice to say, there are many desiderata to balance when figuring out the ideal system of social allocation. It’s quite likely that it is impossible to balance them all to everyone’s satisfaction.

What I will say, for the purposes of evaluating meritocracy, is that we can distinguish between three general systems of allocation. As follows:


Meritocracy: Allocating people to social outcomes on the basis of merit (how well-suited they are to succeed in that outcome). Markers of merit could include intelligence, creativity, physical prowess and so on.
Nepotism/Patrimony: Allocating people to social outcomes on the basis of family, connections or accidents of birth. Think of Donald Trump and how he gave his family members and friends cushy positions in his companies and in his presidential administration.
Representationalism: Allocating people to social outcomes on the basis that we need to achieve proportional representation of certain social groups in those outcome classes (e.g. x% women, y% ethnic minorities and so on).

 

I do not claim that these three systems are exhaustive of the possibilities. You could allocate to social outcomes in other ways, e.g. random allocation (lottos). I also would not claim that these systems are mutually exclusive. Oftentimes particular social institutions will blend elements of each. For example, admissions to elite US universities often involve a mix of nepotism/patrimony (legacy admissions), meritocracy and representationalism.

Nepotism is probably the most common system of social allocation historically, and it remains a feature of most societies to this day. Even in societies that openly embrace or claim commitment to meritocracy, one can find pockets of nepotism. Representationalism is an odd one. I am not sure that anyone else uses the term or openly embraces it; nevertheless, I think many people nowadays advocate for a form of representationalism. Debates about quotas for female politicians or affirmative action policies in higher education, for example, often seem to presume or favour representationalism.

In any event, in what follows, I will be considering arguments for and against meritocracy that work, in part, by comparing it to these other two systems of social allocation.


2. Arguments for Meritocracy

There are four main arguments in favour of meritocracy. Most of these arguments are consequentialist in nature, i.e. they defend meritocracy on the basis that it produces or incentivises better outcomes for individuals and societies as a whole. It is, however, possible to defend meritocracy on intrinsic grounds and I will consider one such possibility below.

The first argument in favour of meritocracy is the ‘better societies’ argument:


A1 - Better Societies - More meritocratic societies score better on measures of economic growth, innovation and social well-being; less meritocratic societies tend to be more stagnant and have higher rates of emigration.

 

In other words, given certain measures of societal success — GDP, GNP, Human Development Index and so on — societies that are more meritocratic score better than less meritocratic ones. If we grant that these measures are, indeed, positive and something we would like to increase, we have reason to favour meritocracy. For what it is worth, Wooldridge, in his defence of meritocracy, makes much of this argument:


…a glance around the world suggests that meritocracy is the golden ticket to prosperity. Singapore, perhaps the world’s poster child of meritocracy, has transformed from an underdeveloped swamp into one of the world’s most prosperous countries…Scandinavian countries retain their positions at the top of the international league tables…in large part because they are committed to education, good government and…competition. …countries that have resisted meritocracy have either stagnated or hit their growth limits. Greece, a byword for nepotism and ‘clientelism’…has struggled for decades. Italy, the homeland of nepotismo…has been stagnating since the mid-1990s. The handful of countries that have succeeded in combining anti-meritocratic cultures with high standards of living are petro-states that are dependent on an accident of geography… 
(Wooldridge 2021, 368)

 

There is some merit (!) to this argument. If you look up countries such as Singapore or Sweden and see how they do on these measures of societal success, you will find that they do better than countries like Italy and Greece (check out the comparative charts from Our World in Data for more on this). That said, we have to be a little bit cautious when it comes to identifying ‘more’ and ‘less’ meritocratic societies. As the use of language here suggests, it is rare, certainly among European and developed nations, to find a society that is completely committed to nepotism and has no meritocratic elements. Most developed countries have educational systems with standardised merit-based exams and while not all have competitive entry to university, many do and have more or less elite universities that allocate places based on merit. It’s really a question of the balance between meritocratic and other forms of allocation. Furthermore, even in countries that claim to be committed to meritocratic social allocation — and Singapore probably is the best example of this — it is impossible to sustain the commitment across all social outcomes. Singapore, for instance, is primarily meritocratic in its education system and in its allocation of civil service jobs. While private industry may choose to adopt merit-based allocation (and, perhaps, companies that do this do better than those that don’t) it’s probably not feasible to cut out all forms of nepotism or representationalism in those sectors of society.

If you wanted to criticise this argument you might say that the measures of success identified by its supporters are misleading or misguided. For example, a lot of people would criticise the use of GDP as a measure of social success (Ireland’s GDP per capita is very high but that doesn’t reflect the wealth of the people in Ireland; it’s largely because US companies report earnings in Ireland as a way to avoid paying tax). The only problem with this argument, from my perspective, is that the positive comparison for ‘more’ meritocratic societies tends to hold up no matter which measure of success you use, e.g. human development index. Also, while these measures of societal success might overlook or ignore some important things, it is hard to argue that a society that does much worse on those measures is a better place to live. Nowhere is ideal, but these measures do tell us something about relative well-being across societies.

The second argument for meritocracy is the ‘better incentives’ argument:


A2 - Better Incentives - Meritocratic societies provide rewards to people for developing and honing their talents. This leads to better social outcomes because talents produce social goods (e.g. new companies, new jobs, new insights, new creative culture)

 

This is obviously closely related to the first argument. The idea is that meritocratic societies send a signal to their members: if you work hard at honing certain talents and abilities (intelligence, knowledge, physical skill etc), you will be rewarded (better jobs, more money etc). This, in turn, produces better outcomes for societies. I think this argument makes sense, at least in its abstract form, but the devil is in the detail. Is it possible to hone talents in the way that meritocrats presume, or are we just rewarding those that got lucky in the genetic lottery (or through some other means)? What talents are we incentivising and do they really produce social goods? I’ll consider a potential criticism of this second argument in the next section when I look at the ‘wrong measure’ counterargument.

The third argument for meritocracy is the ‘respecting dignity’ argument:


A3 - Respecting Dignity - Meritocracies allow people to develop and hone their talents in the manner that they desire, and to reward them for doing so. This allows them to develop into full human beings. They are not treated as victims of circumstance or representatives of abstract social classes.

 

Unlike the first two arguments, this one is not consequentialist in nature. It is based on the idea that meritocratic systems are intrinsically better, irrespective of their broader social outcomes, because they treat people as individuals and respect them in their full humanity. People are not prisoners of the past or of circumstance. They have the opportunity to develop their full powers of agency. You can think of this as a quasi-Kantian argument: meritocratic societies respect people as ends in themselves, not for some other reason (though, of course, this would need to be balanced against the consequentialist arguments that do not do this). Again, this is an argument that Wooldridge emphasises in his defence of meritocracy:


By encouraging people to discover and develop their talents, [meritocracy] encourages them to discover and develop what makes them human. By rewarding people on the basis of those talents, it treats them with the respect they deserve, as self-governing individuals who are capable of dreaming their dreams and willing their fates while also enriching society as a whole.
(Wooldridge 2021, 373)

 

This is an interesting argument. I think there is a core of good sense to it. Certainly, nepotistic or representationalist societies are in tension with ideals of individualism and autonomy. They do not treat people as masters of their own fate. In such societies, people are not valued for who they are. People are, instead, valued because of where they came from or who they represent. That said, I think it would be a mistake to presume that meritocratic societies are more respectful of individuals. Meritocratic societies can be very unpleasant places to live, given the high anxiety and competitiveness often associated with them. I’ll discuss this in more detail in a moment.

The fourth argument in favour of meritocracy is the ‘best alternative’ argument.


A4 - Best Alternative - Meritocratic social allocation is better than any historic or proposed alternative system of social allocation. Nepotism is often corrupt and stagnant; representationalism would increase the power of the state and perpetuate identitarian thinking; neither system treats people with dignity or respects their individuality

 

This argument has been implicit in much of what has been said already, but it is worth making it explicit. The idea is that, whatever its flaws may be (and we will consider some below), meritocracy is better than alternative systems. Think of this as the Churchillian defence of meritocracy (after Churchill’s alleged defence of democracy against other systems of government). To me, this might be the most persuasive argument, at least when it comes to certain forms of social allocation (i.e. something like healthcare should not be allocated on merit but I don’t think any defender of meritocracy believes that, at least not openly and directly). I have thought about it a lot when it comes to allocating positions to students at university. The country in which I live — Ireland — has a competitive, points-based system for allocating students to university degree programmes. To get into the more competitive (and presumably attractive) universities and degree programmes (like medicine) students have to score highly on a national second-level exam (the Leaving Cert). The system is often criticised, for reasons similar to the ones that I will discuss below, but it’s never been obvious to me what a better alternative would be. Each proposed alternative tends to make the system more complex and opaque, and to insert more potential forms of bias into it. Perhaps a ‘mixed’ system of allocation is best — some positions on merit; some in line with representationalist/reverse discrimination concerns — but I’m not sure what the balance should be or whether introducing some element of the latter just adds confusion, potential for longer-term abuse/misuse, and does not serve students particularly well. I don’t have a fully worked out view to offer here, but, as I say, this Churchillian defence gives me some pause.


3. Arguments Against Meritocracy

What about arguments against meritocracy? I will consider three here. Each of these has been developed from conversations/debates with students in my classes about the topic. I’m sure it is possible to identify other criticisms, and I would be happy to hear about them from readers, but these are the ones that keep coming up in my classes.

The first objection is something I will call the ‘wrong measures’ objection:


CA1 - Wrong Measures: Classic meritocratic tests (e.g. IQ or other standardised aptitude tests) do not measure the full set of talents or merits relevant to all the forms of social allocation in which we are interested. They may also be inaccurate and generate false negatives/false positives.

 

In other words, the kinds of testing paradigms commonly deployed in aid of meritocracy are too narrow and only consider a limited range of talents. They do not ensure sufficient cognitive or talent-based diversity in social institutions, which is bad because, if you follow the arguments of Scott Page and others, cognitive diversity is a good thing, particularly if we want our institutions to be more successful in solving problems. As a result, it could be the case that the tests reward people we would rather exclude and exclude people we would rather reward.

I think there is some value to this criticism because I am reasonably convinced that some degree of cognitive diversity is important. But this doesn’t mean that meritocracy is the problem; rather, the problem lies in our means of implementing it. Changing the tests so that we have a broader view of the talents that count could patch up the system, at least to some extent. We would still be focused on merits, and not slipping into some other form of social allocation, but we would have a more pluralistic conception of merit. Defenders of IQ tests and other standardised tests may come back on this and argue that their preferred tests are exceptionally well-evidenced and validated, and that there is some general factor of intelligence that seems to correlate with a large number of positive social outcomes. I am not going to get embroiled in the IQ wars here, but from the limited materials I have read and listened to on the topic, I am inclined to agree that there is some there there. That said, it is pretty clear that IQ is not the only thing that matters. We can have high-IQ psychopaths, but I am pretty sure we don’t want psychopaths in some decision-making roles. Also, even if such tests are accurate and well-validated, the problem I tend to have is that most competitive examination systems that I am familiar with are nothing like IQ or similar tests. They tend to be the more typical academic, educational tests (based on a standard set of problem questions, comprehension questions, essay questions and so on). On previous occasions, I have explained why the grading associated with at least some of these forms of testing can be quite arbitrary and unfair. Whatever the results mean, they are probably not always a good signal of underlying raw intelligence. Also, these kinds of tests, and the grades associated with them, are much more susceptible to gaming and bias. Which brings me to the next objection.

The second objection is what I will call the ‘biased measures’ objection:


CA2 - Biased Measures: Classic meritocratic tests are biased in favour of existing social elites either because (a) they can pay for coaching or training to excel on the test and/or (b) the tests are designed to suit their cognitive style (e.g. abstraction over concreteness).

 

This objection is importantly distinct from the preceding objection. It is not that the measures are wrong or not indicative of the kinds of talents we wish to reward, it is that even if they are broad-minded and accurate, they are the kinds of measures that wealthy elites can do better on, either because they can invest more money in their children’s education, paying for private tuition and test preparation, and/or because the tests suit their cognitive style.

I mention the latter possibility because I am reminded of Alexander Luria’s famous experiments suggesting that rural peasants in Russia did less well on certain kinds of test because they were less adept at abstract thinking, whereas members of more industrialised and modernised communities found abstract thinking easier (see Luria, Cognitive Development: Its Cultural and Social Foundations). I am not claiming that Luria’s specific studies are relevant to contemporary debates about meritocratic testing. I am mentioning them simply because they illustrate — quite vividly — a key point: that cognitive styles and abilities can be subtly shaped and influenced by one’s developmental niche, and unless a testing paradigm is very carefully designed to eliminate this form of bias, it may tend to perpetuate the success of those drawn from a particular niche (e.g. the tests may presume certain ‘shared’ knowledge that is not really shared at all).

That said, I think the other point, about parental investment in education, and the perpetuation of a new wealthy elite, is the more critical one. This is the issue that weighs most heavily on the minds of my students when I discuss meritocracy with them. It is also the objection that has cropped up in most recent criticisms of Singapore’s experience with meritocracy. Findings there suggest that those that initially did well in the meritocratic system can afford to pay more for their children’s schooling and thereby run the risk of entrenching a new wealth and merit-based elite. This experience is similar to that observed around the world. Simon Kuper’s book Chums — which is about how wealthy public school boys came to run modern Britain — comments on this too. Kuper notes that while at one point in time aristocrats and upper-middle class children could succeed based purely on connections and historical wealth, by the 1980s (when he attended Oxford along with Boris Johnson, David Cameron, Michael Gove et al), even the wealthy had to do well in academic tests. And they did. Their elite schools invested heavily in prepping them for success on those tests.

This entrenchment of a new elite was, of course, Michael Young’s big concern about meritocracy in his 1958 book. The counter-response to it could be that, again, we just need to change the form of test and rely on tests that cannot be prepped or gamed. Some aptitude tests bill themselves as such. For instance, Irish medical schools use the HPAT (Health Professions Admission Test) in addition to the traditional end-of-school Leaving Certificate to allocate places at university. The test is based on an Australian proprietary platform which is, allegedly, ungameable because you cannot study or prep for it. Nevertheless, you can find preparatory materials for it, and there are plenty of people willing to sell training and/or tuition to help you prepare for it. It seems unlikely that the test is ungameable. Similar experiences with the LSAT and MCAT in the US suggest the opposite. This is not surprising. All tests tend to rely on common styles of question, and those who are motivated to do so can pay for at least some minimal advantage in taking tests with those common question formats. Those minimal advantages can accumulate over time.

It’s not clear what the solution to this problem is or ought to be. On the one hand, a defender of meritocracy could tough it out and say that as long as the tests provide the right measures (i.e. identify the relevant range of talents and abilities), who cares if they are gameable or biased towards elites? As long as we are rewarding merit directly, that’s all that matters. And, who knows, perhaps some people from less privileged backgrounds may still be able to break through the system. Investment in education might confer some advantage, but not enough to completely swamp other factors (raw intelligence, hard work/ambition, luck). Contrariwise, a defender of meritocracy could advocate for constantly tweaking or changing the test format to eliminate the potential for unfair advantage linked to wealth. This strategy might face diminishing returns, however. Whatever tweaks you make would still need to be consistent with the aims of the test (to identify the relevant talents), and a constant arms race between testers and takers may run up many additional costs for little gain.

It could be, however, that this objection gets to one of the tragedies of human social life: new systems for allocating social goods based on merit can be disruptive when they are initially introduced, shaking up the old social order and threatening established norms, but after a generation or two things settle down into a familiar pattern. If you read Wooldridge’s book you cannot help but come away with the impression that meritocracy really was a disruptive social innovation. But perhaps now its capacity for continued disruption has been largely eroded, at least in countries where it is well-established.

The third, and final, objection is the ‘competitiveness and cruelty’ objection:


CA3 - Competitiveness: Meritocratic societies create perpetual competition for credentials. You have to run faster and faster to stay in the same place. This can lead to a very unpleasant and anxious existence, with harsh results for those that cannot or do not keep pace.

 

This is an objection that concerns me a lot these days. Like most academics of my age, I am often struck by the scale of mental health problems I see among my students. I’m sure there are many causal factors behind this, and perhaps the problem is exaggerated, or my perception of it is distorted (I only tend to hear from students in distress). Nevertheless, it has struck me as odd and out of line with what I used to experience when I was a student (older colleagues also agree that the scale of the problem has gotten worse). What is of particular interest to me is how many students I encounter expressing anxiety around their exams and degree results. Many feel their lives will be over and their career aspirations ruined if they do not get a 2:1/B average in their degree. Many also feel pressure to pursue additional qualifications to make themselves stand out from the crowd. Doing an undergraduate degree is no longer enough. You have to do at least one postgraduate degree and consider other forms of microcredential or short-course qualifications. I’m not sure that this constant credential seeking is positive or conducive to human flourishing.

But perhaps this is the inevitable consequence of any meritocratic system. The whole purpose of the system is to encourage people to develop their talents. Very few gatekeepers are going to conduct an exhaustive inquiry into people’s actual merits. They are going to rely on credentials to tell them who is worth considering for the opportunity. But if everyone pursues the same credentials, and if social opportunities are scarce in some way, the competitive advantage of those credentials is reduced and people have to pursue other credentials to stand out from the crowd. An arms race mentality kicks in. While some pressure and anxiety might help us to achieve great things, constant pressure and anxiety is debilitating. There is a danger that, over time, this is the kind of social culture embedded by meritocracy. Everybody is racing to a standstill and nobody is particularly happy about it.

I would also repeat the obvious point, made above, that relying on meritocracy to resolve all forms of social allocation would be cruel and inhuman. For instance, allocating access to healthcare treatment on the basis of educational attainment would be cruel. I would also argue that any biasing or weighting of votes based on merit (as was once proposed by John Stuart Mill) would be cruel and undignified. We might be able to live with the benefits and costs of meritocracy in some areas, but not in all.





4. Conclusion

As I said, my goal was not to provide a definitive evaluation of meritocracy here. Rather, my goal was to clarify the concept and outline a framework for debating its benefits and costs. I hope I have provided that in the preceding. I am happy to hear from people as to how the framework could be modified or developed. Are there other arguments for and against that should be added to the mix?

Monday, November 28, 2022

101 - Pistols, Pills, Pork and Ploughs: How Technology Changes Morality



It's clear that human social morality has gone through significant changes in the past. But why? What caused these changes? In this episode, I chat to Jeroen Hopster from the University of Utrecht about this topic. We focus, in particular, on a recent paper that Jeroen co-authored with a number of colleagues about four historical episodes of moral change and what we can learn from them. That paper, from which I take the title of this podcast, was called 'Pistols, Pills, Pork and Ploughs' and, as you might imagine, looks at how specific technologies (pistols, pills, pork and ploughs) have played a key role in catalysing moral change.

You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).







Thursday, November 24, 2022

Did the Resurrection Happen? A Sceptical Perspective


The Three Women Discover the Empty Tomb 


I have always wanted to write something about the arguments for and against the resurrection of Jesus. As the central event in the Christian Gospel tradition, and the hingepoint for much Christian theology, the philosophical-historical question of whether the resurrection actually took place is of huge significance.

Like many sceptically-minded people, I have a hard time believing it happened. Indeed, I find it strange that some people think that a strong case can be made for its occurrence based on the scant historical record. You often hear apologists suggesting that there is overwhelming evidence for it, and that it is only those predisposed to disbelieve, or with some bias or ulterior motive, that deny this evidence. But when I look at the evidence I think: that can't be right. I won't say that there is no evidence for the resurrection; there is some. But overwhelming? Undeniable? Surely not.

Anyway, as I say, I have been wanting to write about the topic for a while, but I have been stopped in my tracks by its complexity. There are nearly two thousand years of scholarship dedicated to debating the resurrection. Modern debates for and against its occurrence often fuse together many disciplines: general history and historical method, critical biblical history (requiring knowledge of ancient languages, particularly Greek), forensic science, probability theory, the psychology of belief, the sociology of religion and, of course, philosophy. In addition, the apologetical energy dedicated to defending it has generated a rich seam of argument and counter-argument. It would be impossible to do justice to this dialectical complexity in a (relatively) short article.

So my aims are more modest. My goal, in the remainder of this article, is to provide a general framework for arguing about the resurrection from a sceptical perspective. I hope this will explain why I, and many others, find it hard to believe in its historical occurrence and why I am generally unpersuaded by apologetical defences of it. Even though I am wearing my biases on my sleeve, I hope that I can be fair-minded in what I have to say.

I won't claim any particular originality in my analysis. Much of what I have to say can be found in the writings of others. For those that are interested, my two favourite pieces on this topic are Dale Allison's (long) essay 'Resurrecting Jesus', in his book of the same title (and which he has since updated into an even longer book-length treatment of the topic -- which I have not read), and James Fodor's critical analysis of William Lane Craig's case for the resurrection, in his book Unreasonable Faith. Allison is a respected New Testament scholar who, despite his Christian leanings, offers a fascinating and detailed review of the evidence for and against belief in the resurrection. Fodor is, as far as I know, a precocious PhD student in biology who has, nevertheless, written an excellent, and in my view under-appreciated, book-length critique of Craig's philosophical project. Honourable mentions would also go out to Bart Ehrman's many excellent books (in particular Jesus Before the Gospels), Gerd Lüdemann, Michael Alter, Kris Komarnitsky and, of course, David Hume (who, despite the many criticisms of his essay 'On Miracles', remains an important source for sceptical analyses of miracle claims). I'll reference these authors, to some extent, in what follows.


1. The Argumentative Framework

Apologetical arguments for the resurrection take a fairly standard form. They start with a statement of certain 'minimal facts', i.e. facts that can be, uncontroversially, extracted from the historical documents available to us (the various biblical texts and perhaps some extra-biblical texts, though the value of these is disputable). They then argue that either the best explanation of those minimal facts is that Jesus was resurrected, bodily, from the dead or, alternatively, that the probability of the resurrection hypothesis, given the available evidence, is relatively high and certainly higher than any alternative naturalistic explanation.

The minimal facts approach is interesting. I could be wrong but I believe it was first instituted or made popular by Gary Habermas in his 1980 book on the resurrection. It has since been coopted by most apologists. The idea behind it is to concede, to the sceptic, that there may be certain events reported in the New Testament whose historical reality is debatable. Whether this concession is genuine or not is unclear. My sense is that many proponents of the minimal facts approach lean toward the view that the New Testament is largely historically reliable, but the concession is still important. The claim is that, notwithstanding the disputes about historical reliability, there are certain minimal facts that most reasonable people, even sceptical ones, can agree upon based on the biblical texts. 

The list of these minimal facts varies. Habermas has developed lists based on surveys of biblical scholars. I will focus on three minimal facts for the purpose of my analysis. They are:


F1 - Jesus was buried in a tomb (probably that of Joseph of Arimathea) following his crucifixion and this tomb was found empty three(ish) days later by some of Jesus's followers.
F2 - Several of Jesus's followers, in the days and weeks following his crucifixion, had visions of him in which he appeared to have been raised from the dead.
F3 - The early Christians came to believe that Jesus had been resurrected, bodily, from the dead and based much of their religious faith on this proposition.

 

As I will point out below, with the exception of F3, whether these statements are, indeed, historical facts, is open to some dispute. Nevertheless, these are the kinds of facts upon which the case for the resurrection is built.

With this statement of the minimal facts in place, we can set out a basic argumentative framework for debating the resurrection.


  • (1) There are certain minimal historical facts that need to be explained, namely: F1 (empty tomb), F2 (appearances) and F3 (early faith traditions)
  • (2) The best explanation of these minimal facts is the resurrection hypothesis, i.e. the claim that God raised Jesus, bodily, from the dead:

    • Sub-argument
    • (2.1) The best explanation is the one that either (a) raises the probability of the explanatory hypothesis above some threshold (e.g. 0.5) or above that of rival explanations; or (b) scores highest on some list of explanatory virtues, e.g. fit, scope, plausibility, power.
    • (2.2) The probability of the resurrection hypothesis given the available evidence is higher than the probability of any rival naturalistic explanation (Pr(R|E) > Pr(N|E), where E is the available evidence, R is the resurrection hypothesis and N is any naturalistic hypothesis) and/or it scores higher on lists of explanatory virtues.

  • (3) It is reasonable to believe or accept the historicity of the best available explanation.
  • (4) Therefore, it is reasonable to believe or accept the historicity of the resurrection.

Other statements of the argumentative framework are possible (e.g. William Lane Craig offers a slightly different formulation in his work) but I think this one captures the basic moves that are made in the debate.

 What should we make of it? Let's take it step by step.


2. What facts need to be explained?

We will start by considering the minimal facts that need to be explained. Is it really true that we have facts about what happened to Jesus after he died? Maybe, but the obvious point to make here is that we don't have direct access to the facts themselves. Instead, what we have are reports or references or allusions to these events in certain texts. The most important texts are the Pauline Epistles (particularly 1 Corinthians 15), the four canonical Gospels (Mark, Matthew, Luke and John), and some passages in the Acts of the Apostles. Of these, the Pauline Epistles give relatively little information about what happened after Jesus's death. The key passage from 1 Corinthians 15 can be quoted in full (from the New American Standard Bible):


For I handed down to you as of first importance what I also received, that Christ died for our sins according to the Scriptures, and that He was buried, and that He was raised on the third day according to the Scriptures, and that He appeared to Cephas, then to the twelve. After that He appeared to more than five hundred brothers and sisters at one time, most of whom remain until now, but some have fallen asleep; then He appeared to James, then to all the apostles; and last of all, as to one untimely born, He appeared to me also.

 

This is just a recitation of a creed. As you can see, it contains no mention of the crucifixion, the empty tomb, the appearance to the women, the appearance on the road to Emmaus and so on. It only mentions appearances to certain individuals and groups. It is the Gospel narratives that give most of the details about the discovery of the empty tomb and the post-resurrection appearances. 

This means that it is wrong to suggest that the resurrection argument proceeds on the basis of a simple inference to best explanation of given historical facts. Each of the alleged minimal facts is, itself, an inference to best explanation of the available textual sources. So, for example, we are only entitled to say that the discovery of an empty tomb is a historical 'fact' on the basis that assuming its facticity is, itself, the best explanation of the texts we have claiming that the tomb was empty. There is, in other words, a double inference to best explanation underlying the resurrection argument: (i) the first inference from the texts to the minimal facts that best explain the texts and (ii) the second inference from those minimal facts to the resurrection hypothesis.

I do not wish to overstate this point. With the exception of recent history -- where we have direct photographic, visual or audio access to historical events -- we usually have to infer historical facts through this method. We have certain sources - texts and artefacts of various forms -- and we infer historical events from these sources. Still, it is always important to ask whether the sources in question are likely to be giving an accurate representation of the historical events. Were they written in a dispassionate, truth-seeking manner? Do they have the character of reliable historical accounts? Are there multiple forms of corroboration? Do they each point to the same basic underlying events?

The New Testament texts present several difficulties in this regard. I won't rehearse all the standard arguments about their dating and the supposed sources that they themselves are based upon. Suffice to say, the Pauline Epistles likely date to quite shortly after Jesus's death (maybe 5 years), but are written by someone that did not meet Jesus before his death and only converted to Christianity after he had his own visionary experience. The Gospels date from later in the first century, possibly as early as 60 AD (for Mark) and as late as 120 AD (for John), though most scholars favour dates between roughly 70 and 100 AD. It is widely believed that the Gospels are based on an oral tradition, and a book of sayings of Jesus (Q). Assuming the existence of these sources is, again, an inference to best explanation of the texts. It is also commonly accepted that Mark is likely to be the first of the Gospels and that Matthew and Luke used Mark as source material due to the similarities across the texts. None of the Gospels were written by eyewitnesses to the historical events. This is uncontroversial. But they may be based on the testimony of eyewitnesses, as handed down through oral tradition.

You can make what you will of that. I don't think it is overly important, nor sufficient to cast doubt on the alleged minimal facts, but, as I will point out later, the layers of translation and reinterpretation between the historical events and the texts do lend credence to certain naturalistic explanations for why the texts take the form that they do.

More important than dating, in my opinion, is the character of the New Testament texts. They are not dispassionate historical treatises. They are evangelical texts, written by people trying to convert, exhort and reassure religious believers. They have certain theological agendas. Matthew, for instance, is commonly believed to be writing in order to argue that Jesus was the promised Jewish Messiah. Luke, by contrast, was writing with a more universalist message to Gentile communities. On top of this, the Gospel writers do some weird things with their stories that make it hard to sort fact from fiction or legendary embellishment. I'll give an example of this in a moment, but it is something that has been documented at length by others. Randel Helms, for instance, in his book Gospel Fictions argues that the New Testament writers frequently took older biblical stories and retold them with Jesus as the main character. Helms may overstate his case but some of his examples are striking. Similarly, Michael Alter, in his long book The Resurrection: A Critical Inquiry, gives hundreds of examples of odd features of the resurrection narratives, including tensions in dates, sequences of events and literary motifs. I find Alter's book hard to follow -- it is broken down into many sub-sections that make for a disjointed reading -- but it is extraordinarily detailed and one comes away from it with the impression that none of the Gospel narratives could be (nor could have been intended to be) accurate representations of historical events. Many other works of 'Higher Criticism' illustrate the same point. This does not mean that there is no truth underlying the texts. Maybe there is eyewitness testimony supporting some events; maybe some of the events really took place. But it takes a lot of work to figure this out and, even then, all we have is reasonable inference, not truth.

We see this if we turn our attention to the three minimal facts mentioned earlier, starting with the empty tomb. If Jesus's tomb was discovered empty, then this would be a significant fact. As I will point out later on, adding F1 to the list of facts that need to be explained makes the sceptic's task a little bit harder than it might otherwise be (though not overwhelmingly so). Is there any reason to doubt that the tomb was discovered empty? I cannot rehearse all the apologetical back and forth on this topic. There is, however, at least some reason to doubt it. The obvious one is that the empty tomb is not mentioned in the earliest sources. Paul never talks about it and does not include it as part of the tradition that he hands down to his followers. The Gospel narratives do include it, but there are different degrees of elaboration across those narratives. Mark has a spare empty tomb narrative, telling us that the women found the tomb empty, encountered a young man who told them Jesus had been raised, and then ran away and told no one. Matthew, Luke and John add a lot more detail, suggesting to many a degree of legendary embellishment and an attempt to respond to counter-apologetical arguments. The net result is that while I'm willing to concede the empty tomb as a 'minimal' fact, I would not be overly confident in it.

What about the post-resurrection appearances (F2)? Here, it seems like we are on firmer footing. All the texts recount post-crucifixion appearances. Paul mentions the appearances to Peter (Cephas), the disciples, James (the brother of Jesus), the 500 and himself. The Gospels recount appearances to the women at the tomb, the disciples, travellers on the road to Emmaus, other apostles and, even, the eventual ascension into heaven. Furthermore, since belief in these appearances and the idea of the resurrection became central to early Christianity, it seems plausible to suppose that there must be some underlying truth to them, i.e. that at least some of Jesus's followers had visions of him after his death.

But must we explain each and every appearance listed in the New Testament? Probably not. Dale Allison goes through the appearances in detail in Resurrecting Jesus and highlights some problems they pose for those wishing to unearth the historical truth. I won't recount everything he says but a few examples are telling. First, take the appearance to Paul. This seems to be the one case in which we have direct eyewitness testimony, i.e. somebody that actually had a vision of the risen Jesus is telling us, in his own words, about that vision. This is not, it would seem, a story we are receiving second hand. But was Paul's vision a vision of the resurrected Jesus? The details given to us are sketchy. Paul mentions that he was 'untimely' or 'abnormally' born (translations vary). This suggests that his vision occurred out of sequence with the other post-resurrection appearances. What's more, he talks only about hearing Jesus's voice and (maybe) seeing a bright light. He did not see the post-resurrection body, nor would he have been able to recognise it if he did since he never met Jesus in his lifetime. On the whole, there is little to distinguish Paul's experience from the many other experiences of Jesus, recounted by religious believers up to the present day. On top of this, Paul references appearances that are not mentioned elsewhere in the Gospel texts. For instance, the appearance to James and the 500. If Jesus's brother really did see the post-resurrection body, and if 500 people saw it at the same time, one would expect this to be mentioned elsewhere in the New Testament. The silence on this matter is not dispositive but it is suggestive.

What about the appearances mentioned in the Gospels? As Allison points out, there are a number of weird, narrative features to these appearances. I'll just give one example. The appearance, on the Sea of Galilee, to Simon-Peter, Thomas and others. This is recounted in the Gospel of John (21:1-17). You probably know the story. The disciples have been fishing all night, they catch nothing, then they see Jesus on the shore, he tells them to try again, and they catch a huge number of fish. They then share a meal with the resurrected Jesus.

I say you probably know the story, but you might know it as one of the alleged miracles of the living (not resurrected!) Jesus in the Gospel of Luke (5:1-11). The stories are practically identical but Luke claims that the events took place before the crucifixion; John places them after the crucifixion. Clearly something weird is going on here. The Gospel authors have taken the same story and placed it in different points of the narrative. It's hard to know what the 'original' location of the story was. Allison thinks a strong case can be made for thinking that Luke is transposing the story and that John has it in its original location. But either way, when you analyse the example, you realise that the post-resurrection appearances in the Gospels aren't entirely reliable. Maybe Jesus appeared to his followers in Galilee, but, equally, maybe this is just a legendary embellishment designed to make a point about Jesus's teachings and miracles.

The upshot of all this, for me, is that while some post-crucifixion visions, most likely those to the disciples, probably have some grain of truth to them and may warrant explanation, many of the specific appearances recounted in the Gospels probably do not.

Finally, what about the third fact: that Christians came to believe in the resurrection? As noted, this seems undeniable. Indeed, it's not clear why this is a 'fact' that needs to be explained. However, the apologetical argument is typically that belief in the resurrection is unusual given the religious-social context of the early Christians. No one believed, prior to Jesus, that the Messiah was going to be executed. This was contrary to prophecy. So the fact that the early Christians continued to believe that Jesus was the Messiah, that he had been bodily resurrected, and were willing to die for this belief, demands some explanation. The suggestion, once again, is that the best explanation of this is that Jesus really was resurrected. Whether that's plausible, or not, is something I will return to in a bit.


3. The Role of Background Beliefs

If we agree on the minimal facts to be explained, we can proceed to evaluate the different explanatory hypotheses. We will consider two general classes of explanation below: naturalistic explanations (of which there are many -- all sharing the idea that you do not have to suppose that Jesus really did rise from the dead to explain the historical facts) and supernaturalistic explanations (of which the resurrection hypothesis is but one). Our goal is to work out the probability of the resurrection hypothesis, given the available evidence (the minimal facts) vis-a-vis alternative explanations.

Before we try to work out that probability, we need to say something about the role of background beliefs in this evaluative process. It is a trite and obvious point, familiar to anyone with a dash of Bayesianism in them, that background beliefs matter a lot. If you lean in favour of naturalism, and think that supernatural forces are very unlikely to play a role in our universe, then the resurrection hypothesis faces an uphill battle. You will need very convincing evidence to swing you away from naturalistic explanations toward the resurrection hypothesis. I certainly lean toward naturalism and so this is my bias. Why do I lean that way? I cannot state the full case now (and, to be fair, I don't think I have a full and overwhelming case for that worldview), but there are many reasons, most of them boiling down to the fact that I think naturalistic explanations of phenomena generally work a lot better (provide more insight, explanatory scope, predictive power and control).

What if you lean towards supernaturalism? You might suppose that the situation reverses itself. You are more open to the idea of non-natural forces affecting our universe, perhaps including the idea of miracle-working forces. Consequently, the resurrection hypothesis faces less intrinsic opposition. But this may not be quite right. Even if you do lean toward supernaturalism, or indeed theism, it might still be the case that background beliefs weigh heavily against the resurrection hypothesis. This is where Hume's famous argument against miracles, for all the criticism it has faced, has to be factored in.

I have written several articles about Hume's argument in the past. Most of them attempt to vindicate Hume's insight from his many critics, including the mathematically sophisticated ones like John Earman. I won't repeat everything I said in those articles here. Follow the link for the details. The important point is that Hume's argument is not dependent on a prior commitment to naturalism or supernaturalism. It presupposes a degree of scepticism about specific miracle claims, but this seems eminently reasonable. There are many miracle claims; few of us accept them all. Furthermore, the weight of individual and collective experience suggests that, even if miracles do occur, these are very rare events. So we have reason to start from a position of, at least moderate, scepticism. In addition to this, Hume argues that we know that eyewitnesses (and, with the exception of Paul, we don't have direct eyewitness testimony for the resurrection) often make mistakes. They misperceive or misinterpret events, they mislead and fabricate evidence, sometimes inadvertently and unconsciously, sometimes deliberately. Given these two background beliefs, the evidence for a miracle will need to be pretty strong to overcome our doubts. To be precise, Hume argues that the probability that witnesses are misleading or lying to us would need to be lower (more miraculous) than the probability that the miracle occurred. This is a heavy evidential burden to discharge.
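Hume's maxim has a natural Bayesian gloss. The sketch below uses purely illustrative numbers of my own (nothing here comes from Hume or the apologetics literature): even testimony that misleads only 1% of the time cannot, by itself, make a hypothesis with a very low prior probability credible.

```python
# Odds form of Bayes' theorem applied to testimony for a rare event:
# posterior odds = prior odds x (Pr(testimony | event) / Pr(testimony | no event))
# All numbers are hypothetical, chosen only to illustrate the structure
# of Hume's point.

def posterior_prob(prior, p_testimony_given_event, p_testimony_given_no_event):
    """Posterior probability of the event, given the testimony."""
    prior_odds = prior / (1 - prior)
    bayes_factor = p_testimony_given_event / p_testimony_given_no_event
    posterior_odds = prior_odds * bayes_factor
    return posterior_odds / (1 + posterior_odds)

# Suppose events of this kind have a prior of 1 in 100 million, and the
# testimony is quite reliable: right 99% of the time, misleading 1% of
# the time (a Bayes factor of 99).
p = posterior_prob(prior=1e-8,
                   p_testimony_given_event=0.99,
                   p_testimony_given_no_event=0.01)
print(p)  # roughly 1e-6: the event remains overwhelmingly improbable
```

On these made-up numbers, the testimony raises the probability of the event roughly a hundredfold, yet it remains around one in a million. Only if the probability of the testimony being false dipped below the prior probability of the miracle itself would belief become reasonable, which is exactly Hume's point.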

Related to this Humean point, there is an important error that people often make when evaluating evidence that we need to avoid. This is the so-called 'prosecutor's fallacy', although that name is slightly misleading. In brief, we need to remember that the probability of a hypothesis, given some body of evidence (Pr(H|E)), is very different from the probability of some body of evidence given a hypothesis (Pr(E|H)). Here's a simple example. The probability that a card drawn from a deck is the Ace of Spades, given that it is black, is 1/26. But the probability that a card is black, given that it is the Ace of Spades, is 1.
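The card example can be checked by brute-force counting over a standard 52-card deck. A small sketch of my own, using Python's exact fractions:

```python
# Sanity check of the card example by exhaustive enumeration:
# Pr(Ace of Spades | black) vs Pr(black | Ace of Spades).
from fractions import Fraction

ranks = ['A', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K']
suits = ['spades', 'clubs', 'hearts', 'diamonds']  # spades and clubs are black
deck = [(r, s) for r in ranks for s in suits]

black = [c for c in deck if c[1] in ('spades', 'clubs')]
ace_of_spades = [c for c in deck if c == ('A', 'spades')]

# Conditional probability by counting: Pr(A|B) = |A and B| / |B|
pr_ace_given_black = Fraction(len([c for c in black if c in ace_of_spades]), len(black))
pr_black_given_ace = Fraction(len([c for c in ace_of_spades if c in black]), len(ace_of_spades))

print(pr_ace_given_black)  # 1/26
print(pr_black_given_ace)  # 1
```

The two conditional probabilities differ by a factor of 26, even though they involve exactly the same pair of events, conditioned in opposite orders.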

Why does this matter? Because if you presuppose your hypothesis, and if you have crafted the hypothesis to fit the available data, then it is often possible to make the hypothesis confer a high likelihood on the available evidence (i.e. for Pr(E|H) to be reasonably impressive). This can mislead you into thinking that the hypothesis has a high posterior probability when it doesn't (that depends on the background beliefs and other alternative hypotheses). In the case of the resurrection hypothesis, it is the posterior probability we are interested in (Pr(R|E)) not the likelihood (Pr(E|R)). We want to know how probable the resurrection is, given the minimal facts; not how likely the minimal facts are, given a resurrection.
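To make the distinction concrete, here is a minimal sketch with illustrative numbers of my own: in both scenarios the hypothesis confers the same high likelihood on the evidence (Pr(E|H) = 0.9), but the posterior Pr(H|E) collapses once the prior is low.

```python
# Bayes' theorem: Pr(H|E) = Pr(E|H) * Pr(H) / Pr(E), where
# Pr(E) = Pr(E|H) * Pr(H) + Pr(E|~H) * Pr(~H).

def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

# Same likelihood (0.9) in both cases; only the prior differs.
post_even = posterior(0.5, 0.9, 0.1)         # an even-handed prior
post_sceptical = posterior(0.001, 0.9, 0.1)  # a sceptical prior

print(post_even)       # roughly 0.9
print(post_sceptical)  # roughly 0.009
```

A high likelihood, on its own, tells you very little: the posterior depends just as much on the prior and on how well rival hypotheses explain the same evidence.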

Conflating these two probabilities is related to another problem, implied in the work of many Christian apologists, who presuppose several Christian-specific beliefs in order to make the resurrection hypothesis seem more likely than it actually is. In other words, they tend to assume that the prior probability of the resurrection is quite high because: God exists, Jesus was God's son, and there was a specific plan for salvation that involved Jesus's death and resurrection. Given those assumptions, then the observed minimal facts are quite likely. But that's reasoning in reverse. What we want to know is whether we should believe in the resurrection, given the historical data, not whether the historical facts make sense, given a bunch of prior Christian theological assumptions. To me, there is something deeply unsatisfying and unpersuasive in using prior Christian belief to support the probability of the resurrection hypothesis. I think we should be going in the opposite direction: using the evidence to determine whether to accept Christian-specific beliefs. After all, the resurrection is the lynchpin of the Christian belief system: it's what is supposed to convince us that Jesus was the messiah and died for our sins. To use prior commitment to Christianity to support your belief in the resurrection seems to get things backwards.

I'll discuss the problem of assuming Christian beliefs in order to make sense of the minimal facts in more detail below. Before that, I want to consider whether I, as a sceptic/naturalist, could be accused of making a similar mistake. Apologists might be quick to point out that I am using my naturalistic predilections or sceptical leanings to increase the probability of a naturalistic explanation. There is some truth to this but I think the problem, in my case, is less severe than it is in the Christian case. I am not presupposing a particular explanatory hypothesis (like God wishing to raise Jesus from the dead); I'm open to several. Furthermore, as noted above, and as will be emphasised again in what follows, I think that even if you lean towards supernaturalism, there are reasons to think that the prior probability of the resurrection is quite low. This means approaching the minimal facts with the scales tipped against the resurrection is a reasonable starting point.

And, to be absolutely clear, I am not saying that Christians are wrong to believe as they do. Obviously I think they are, but their beliefs are their beliefs. If they come to the resurrection debate with a prior commitment to Christianity, I cannot dissuade them of this. My only point is that they should not then use this prior commitment to assume that the resurrection hypothesis is more probable than the evidence actually suggests.


4. Naturalistic Explanations

Let's now consider naturalistic explanations for the minimal facts. At this point, I need to re-invoke something I said at the outset. This article does not purport to provide a comprehensive examination of the different possible explanations of each fact. It will not follow every thrust and parry of the apologetic and counter-apologetic debate. All it will aim to do is to provide a general framework for thinking about the most plausible naturalistic explanation for the historical record, and some reasons to doubt the viability of the resurrection hypothesis. The sources noted at the start of this article, particularly Allison's essay and Fodor's book, are, in my opinion, the best places to go for fuller details. I'll mention some other sources in what follows.

Also, before I discuss specific naturalistic explanations for the minimal facts, I want to make two general points about how to think about naturalistic explanations. First, naturalistic explanations are rarely unitary. In other words, they do not stipulate a single explanatory mechanism that accounts for all minimal facts; instead, they tend to involve a number of different potential explanatory mechanisms. In what follows, I will talk about one-off historical events and well-known psychological and sociological mechanisms, and I will argue that the combination of these events and mechanisms can plausibly account for the minimal facts. The appeal to multiple mechanisms can, however, make naturalistic explanations seem quite complex and some will criticise them as a result for lacking simplicity (à la Occam's razor). This is a mistake. The fact that naturalistic explanations require multiple mechanisms and the resurrection hypothesis just requires one (God's desire and omnipotence) does not make the latter a 'simpler' hypothesis. James Fodor discusses this point in his book so I will quote him in full:


Occam's razor does not say that 'simpler explanations are more likely to be true'. Rather, it states that explanations which require fewer new (that is previously unestablished) assumptions are more likely to be preferred...the length of time it takes to [state] a hypothesis, or the number of internal parts that it has, is not relevant when judging simplicity or plausibility. 
(Fodor 2018, p 241)

 

This strikes me as being right. Furthermore, as I will point out in the next section, the apparent simplicity of the resurrection hypothesis is misleading. Its simple form often masks a bunch of hidden assumptions that make it much less simple and less plausible as an explanation.

The other introductory point I want to make is about the distinction between evidence and explanatory hypotheses. An explanatory hypothesis is what you posit to explain the available evidence. Sometimes we don't have other corroborating evidence for an explanatory hypothesis; it is simply a posit that accounts for the evidence under consideration. This might seem like a trivial point but it is important when it comes to assessing the credibility of naturalistic explanations. Apologists sometimes claim that sceptics have 'no evidence' to support their explanations (e.g. that Jesus's body was stolen from the tomb). Sometimes that accusation is fair. Sceptics are sometimes just positing a hypothesis that accounts for the data; they don't have other evidence for that hypothesis. However, that, in itself, is not a shortcoming. After all, apologists rarely have other corroborating evidence for their preferred explanation. What matters, when it comes to evaluating hypotheses for which there is no corroborating evidence, is whether the hypothesis is plausible given background beliefs and other factors (fit, scope etc). In addition to this, it is worth noting that sometimes the accusation is not fair. Sometimes we do have corroborating evidence for the hypothesis. For instance, several of the psychological factors discussed below are very well-evidenced, e.g. the evidence for the prevalence of grief hallucinations (or visions). It is true that we don't know, for sure, whether the disciples had grief hallucinations, but we have enough examples of such hallucinations from other sources to suggest that such hallucinations are very common and provide a plausible explanation for their experiences.

But I am getting ahead of myself. What about naturalistic explanations for F1 (the fact that the tomb was found empty)? Sceptics have two options here. They can explain away, i.e. argue that the empty tomb story is likely to be a historical fiction or a result of legendary embellishment. Or they can posit some specific set of unique, one-off historical events that could account for the tomb being found empty.

In terms of explaining away, the typical sceptical argument will be to point out that the empty tomb story does not appear in the earliest sources (Paul) and when it is told in later sources it bears the hallmark of legendary and theological embellishment. There are a variety of ways that apologists will rebut these claims, pointing in particular to the common core of the empty tomb story, the claim that women, not men, discovered it (unusual in Judaic culture), and the fact that if it was not empty this could have been easily disproved by the Jewish authorities. There are counter-rebuttals to these claims. Dale Allison, who is a supporter of the empty tomb, suggests that the two best arguments on the sceptical side are (a) the fact that 'missing body' stories are common across different religious and mythical traditions and while some of these stories may be influenced by Christianity not all of them are; and (b) the fact that the early apostles and Gospel writers were clearly willing to believe in and write about events that are very unlikely to have been historical (e.g. the legendary embellishment in the Gospel narratives such as the doubting Thomas story). This sounds reasonable to me, although I don't think either argument is decisive. If you are interested in reading a longer argument for why the Gospel narratives might be fictitious, at least when it comes to the empty tomb, I would suggest reading Jeff Lowder's article on the empty tomb. If you are interested in reading an alternative burial account, I would suggest reading Kris Komarnitsky's book Doubting Jesus' Resurrection, in which he argues that Jesus is more likely to have been buried in a mass grave (as many 'criminals' were at the time).

In terms of alternative explanations for the empty tomb, the two leading ones are either (a) grave robbery (which was not uncommon at the time) or (b) quick reburial, i.e. Jesus was interred in a tomb (probably Joseph of Arimathea's) but was quickly reburied by the tomb-owner and hence the tomb was discovered empty a few days later. James Fodor argues for the latter hypothesis in his book and I think he makes a decent case for it. The gist of his argument is that Joseph may have agreed to bury the body in his family tomb in order to comply with Jewish law requiring bodies to be taken down before nightfall (Deuteronomy 21:22), or he may have agreed to it because he was secretly sympathetic to Jesus and his cause. But once the Sabbath was over, he reburied the body without informing the disciples. Fodor responds to a lot of criticisms of this argument and also considers the robbery hypothesis. There are other theories that could account for the empty tomb, e.g. that Jesus wasn't really dead (swoon hypothesis) and somehow managed to escape, or that one of his followers hid the body for theological or political reasons. These hypotheses might sound far-fetched and will usually be laughed off the table by apologists. I agree that they seem like a bit of a stretch, but, then again, are they really less intrinsically plausible than the claim that Jesus was resurrected from the dead? The empty tomb is, for me, the hardest thing for the sceptic to explain, but I think there are a sufficient number of plausible naturalistic hypotheses to account for it.

What about the fact that Jesus's followers experienced visions of the risen Jesus? Fortunately, here the sceptic is on much firmer ground. We have a wealth of psychological evidence suggesting that many people experience visions of the dead. Grief hallucinations, in which you see or experience someone recently deceased, are common. Various surveys have been done over the years suggesting that between 10% and 40% of people experience such visions. What's more, the trend is confirmed across cultures and history. In fact, it's a good bet that either you yourself or someone you know has had such a vision. I won't review the wealth of evidence on this myself. Allison and Fodor are both excellent on this topic, providing dozens of references to papers and books. I highly recommend reading what they have to say. The naturalistic hypothesis drawn from this literature is simple enough: since we know that such hallucinations/visions are common without being veridical (i.e. the people seen have not truly been raised from the dead), it is plausible to suppose that something similar happened in the case of Jesus's disciples. They had visions but those visions were not veridical. A theological narrative was then constructed around these experiences to make sense of them and these narratives founded the faith.

Apologists will sometimes resist analogies to common grief hallucinations on the grounds that there was something exceptional or unusual about the visions of Jesus. For example, they will argue that Jesus was not experienced as a ghostly apparition or an ethereal presence but as a physical body raised from the dead. Or they will argue that while one or two visions might be explainable by natural means, the sheer number of visions of Jesus, including the alleged appearance to the 500, makes it much more difficult to explain in naturalistic terms.

There is a lot to be said in response to these claims. On the first point, it is worth noting that there is nothing particularly unusual about the visions of Jesus. Many grief hallucinations do not involve ghostly apparitions or ethereal presences. The deceased will often seem quite real and solid to the people experiencing them, even if they do not share all the attributes of physical bodies (neither did Jesus, if the Gospel accounts are to be believed). Allison gives several examples of this in his book. One particularly compelling example comes from members of his own family who had visions and other experiences of his deceased father. He recounts these visions in some detail and then shows how one could, from them, construct a narrative very similar to that found in 1 Corinthians 15:


If I were looking for reasons to believe in my father's survival of bodily death, I suppose I could construct a little list like Paul's and regard it as evidential: "Clifford [Allison's father] passed away in the hospital, after which he communicated to Kris [Allison's wife]; then he appeared to Andrew [Allison's son] and spoke with him; then he gave guidance to John [Allison's brother], after which his presence made itself felt to Bill and Virginia and Jane [other relatives]; and last of all he appeared to Emily [Allison's daughter]; five of them are still alive, although two have died." 
(Allison 2005, 277)

 

And, of course, Allison's example is just one among thousands (possibly millions) of similar experiences that could be fashioned into similar narratives.

On the second point, there is nothing unusual about multiple people having visions or experiences of the same dead person (as is shown by Allison's narrative), nor in the phenomenon of mass visions or hallucinations, akin to the appearance to the 500. Fodor is particularly strong on this point in his book, giving dozens of examples of mass hallucinations across different religious and cultural traditions. The best attested and studied are probably the visions of Mary, which are common in Catholicism (biographical side note: I grew up in Ireland at a time when thousands of people thought they saw moving statues of Mary; they used to gather in large groups to watch the statues moving; this was before the internet and Netflix). The famous Miracle of the Sun, which occurred in Fatima, Portugal in 1917 and involved thousands of people seeing the Sun dance across the sky, is perhaps the best-known example. But visions of Mary are far from the only example. We see mass hallucinations across all traditions.

Of course, an apologist could just bite the bullet on these examples and accept them as veridical. In other words, they could say:


"Yes, people do experience the dead across multiple cultures; and mass visions of miraculous phenomena are commonplace, but that's just because miracles are more common than we think and people do rise from the dead, in some fashion. Indeed, the commonality of these things raises the background probability of Jesus rising from the dead and thus improves the case for the resurrection."

 

Gary Habermas does actually make something like this argument when he uses the literature on near-death experiences to argue that supernatural explanations have a higher background probability than sceptics might suppose.

But as an apologetical strategy, this strikes me as problematic for at least two reasons. First, if veridical experiences of the dead are really common, then there is nothing particularly special or unique about the disciples' experiences of Jesus. This is contrary to what most Christians seem to believe. They seem to think that there is something special about Jesus being raised from the dead and that its historical uniqueness is critical to accepting Christianity as a wider belief system. If it is not particularly unusual, then it's hard to see what all the fuss is about. They could have had those experiences and they could have constructed a faith around it, committed to Jesus's status as the Messiah, but there would be no reason to think their interpretation of the experience was accurate. After all, one could construct similar narratives around all the other similar experiences of deceased persons. This is not to deny that the reality of an afterlife would have significant repercussions for one's worldview. It would. But it wouldn't necessarily support a Christian-specific worldview. Second, if you do accept these experiences as veridical, then you run into the problem of conflicting traditions and narratives. Other religious believers claim to have well attested miracles that support their belief systems. They deny Christianity and deny that Jesus was the Messiah. Why favour Christianity over them? Why accept Jesus as the Messiah when there are other, widely endorsed and well attested, belief systems that reject this notion?

What about F3 - the fact that early Christians believed in the resurrection and built their faith around this? The apologetical claim is typically twofold: that this belief was unusual in the relevant cultural context (nobody thought the messiah would be killed and resurrected) and that it's hard to understand why people would commit themselves so whole-heartedly to something that was false or easily provable to be false. Here, again, I think the naturalist is on sturdy ground and can point to a number of plausible naturalistic mechanisms that account for this fact. These mechanisms are psychological and sociological in nature.

The first, and most widely discussed, psychological mechanism at play is likely to be 'cognitive dissonance reduction'. This is a species of motivated reasoning, which is probably the most common and widely-evidenced psychological bias among humans. As a general rule, humans like to make the world fit their preconceptions and beliefs. They seek out evidence that confirms their prior beliefs and, when they encounter evidence that contradicts or undermines their beliefs, they will explain it away or reinterpret it in a way that fits those beliefs. The latter is what we call cognitive dissonance reduction: an effort to reduce the dissonance (disconnect/disharmony) between evidence and prior beliefs. You could argue that this entire article is a form of motivated reasoning -- it's me trying to make sense of contradictory evidence in light of my prior beliefs -- and this would not be an unfair criticism (though see my earlier comments about background beliefs and the role I think they play in this debate). Motivated reasoning of this sort can lead people to some extremely bizarre and self-serving places. Cognitive dissonance reduction has been widely documented among religious groups, particularly those with millenarian or apocalyptic ideologies. In fact, I believe the first use of the term 'cognitive dissonance' came from the book When Prophecy Fails by Leon Festinger, Henry Riecken and Stanley Schachter. That book was based on a small-scale study conducted by the authors of a UFO religion called 'The Seekers' that was based in Chicago in the 1950s. The Seekers based their belief system on a prophecy by a woman called Dorothy Martin who claimed to be receiving messages from a superior alien race. She predicted that the world as we know it would be destroyed on the 21st of December 1954. When that date came and went, and everything was still standing, one might suppose this would be devastating to The Seekers' worldview. Nothing could be more dissonant than a failed prophecy. But, of course, many of the believers found a way to reinterpret the prophecy and the events to fit their worldview. As a result, far from dealing them a knockout blow, some members of the group became even more committed to their faith (although, to be fair, some did end up leaving the group).

The Seekers are just one example of this phenomenon. There is a long history of failed prophecies and an equally long history of religious believers who have found a way to square those failed prophecies with their faiths. The repeated failed apocalyptic prophecies of the Jehovah's Witnesses, the Seventh-day Adventists and the Millerites (from whom the Adventists emerged) are some of the best known examples of this. But there are many others. The sect that grew up around Sabbatai Sevi is a famous one because of its clear parallels with Christianity. Given how common it is, and how it is grounded in the best-evidenced psychological bias, it is not too much of a stretch to suppose that cognitive dissonance reduction explains the origins of Christianity. If we grant that Jesus had a loyal group of followers that believed him to be the Messiah, that this loyal group of followers experienced extreme dissonance when Jesus was executed (contrary to what they thought should happen to the Jewish messiah), and then experienced grief hallucinations of him, it is not surprising that they would reinterpret events to fit their prior beliefs. Maybe he wasn't really dead? Maybe he was 'raised' from the dead, thereby fulfilling the prophecy? Maybe the messiah wasn't quite what we were previously led to believe?

The best book-length treatment of this topic is, to my mind, Kris Komarnitsky's book Doubting Jesus' Resurrection, which provides an extensive discussion of cognitive dissonance reduction and how it can explain the early faith of the disciples. He provides a good two-page summary of his theory late in the book. I'll quote a little bit from it (note: as mentioned previously, Komarnitsky does not accept the historicity of the empty tomb, but that is, I think, irrelevant to the larger point of his theory):


[After his execution, Jesus' followers] returned home to Galilee...some of his followers found it impossible to accept that Jesus was not the Messiah as they had hoped. To resolve this conflict between their beliefs and the harsh reality of Jesus' death, some of them rationalized as a group that Jesus died for our sins, that God raised him up bodily to heaven, and that he would be back very soon as the Messiah should... it became a highly charged religious environment of excitement and anticipation of Jesus' imminent return. Anticipating the yet to be realized return of Jesus and experiencing the normal feelings associated with the absence of a recently deceased loved one, Peter had a hallucination of Jesus that he interpreted as a visitation by Jesus... Following Peter's vision, a handful of others had individual hallucinations of Jesus. Still others heard Jesus speak to them, felt his presence, and shared ecstatic group experiences... 
(Komarnitsky 2014, pp 140-141)

 

And so on. You don't have to agree with every detail of this, or the precise sequencing of events (e.g. it is plausible to me that the hallucinations came first and then the rationalizations), to agree that something like this, particularly given how common and widely-documented similar stories are, could explain the origins of Christianity. The basic process of dissonance reduction would then be amplified by various groupthink mechanisms, whereby groups reinforce and refine a narrative or belief system by sharing and policing its key elements. The narrative could be embellished with legend or fiction through a process of false memory implantation (another widely documented psychological quirk of humans). None of the people involved would believe they were sharing fictions. It would seem very real to them, but the experiences and memories then written down would not be veridical. This can all happen very quickly, as is clear from sociological studies such as Festinger's study of the Seekers.

What then of the apologetical claim that there was something unusual about believing in a resurrected Messiah? Maybe there was, but this is not an explanatory puzzle. Most belief systems have quirks and innovations that are particular to those that found and share them. Humans are imaginative motivated reasoners. We find ways to reinterpret and make sense of what we experience that can seem quite novel and unexpected. I wouldn't have expected people to believe that a pizza parlour in Washington DC was a hotbed of human trafficking and child sexual abuse, but in the midst of a heated election campaign between Hillary Clinton and Donald Trump, a lot of people did. So much so that some were motivated to carry out armed attacks on the pizza parlour. I also wouldn't have expected leading Hollywood actors to fall for a worldview grounded in the idea that superior space aliens seeded the planet earth, but, apparently, many have. All these belief systems show quirks and innovations. We see this within Christianity and its multiple denominations too. Mormons, for instance, think it makes sense to suppose that Jesus revealed himself to the Native Americans after revealing himself in the Middle East. Most Christians reject this bit of religious innovation (just as most Jews rejected the innovations of the early Christians). To suppose that innovative belief systems that don't quite fit prior commitments are impossible or hard to explain is naive and ahistorical.

And what of the claim that it would be odd for people to die believing in a fiction? This doesn't really need a response. As an aside, it's not actually clear that the early Christians did die for their beliefs. But even if we accept that they did, this is not surprising. For one thing, people are willing to die (or, at least, sacrifice a great deal) for all manner of bizarre beliefs, be it Soviet communism, QAnon conspiracism, or antivaxxerism. Some people follow faddish diets like eating grass, or consuming nothing but air, into the grave. For another thing, there is no reason to suppose that any of the early Christians were insincere in their beliefs or thought that Jesus was not resurrected. They may very well have believed these things. But their believing so, no matter how fervent the belief was, doesn't make the resurrection more probable; just as being a fervent Nazi doesn't mean that white supremacism is true.

In short, the three minimal facts can be readily explained by a variety of naturalistic mechanisms. The empty tomb is perhaps the hardest fact to explain in naturalistic terms, simply because the explanation will have to appeal to a unique set of historical events that is unrecoverable from the historical record that we have. These events are explanatory posits, typically not corroborated by other sources, but there is nothing particularly implausible about them. The other two facts can be easily explained by common psychological and sociological mechanisms of belief formation.


5. The Resurrection Hypothesis

Finally, let's consider the alternative explanatory hypothesis favoured by Christians, namely: God raised Jesus from the dead. Is this plausible? As noted above, it might seem plausible if you already assume a Christian worldview, but that gets things backward. The resurrection is used to prop up many of the key tenets of Christianity; it doesn't make sense to turn around and use those tenets to prop up belief in the resurrection.

Let's be more precise about this problem. To accept the resurrection hypothesis as a plausible explanation of the minimal facts, you have to (a) accept that God exists and (b) accept that God had some reason or desire to raise Jesus from the dead [and also, as James Fodor points out, (c) that Jesus had some reason or desire to be seen by his followers]. Obviously, the existence of God is controversial and open to doubt, but even if you accept (a), it does not automatically follow that God would have a reason to raise Jesus from the dead. Abstract forms of theism, such as those embraced by some philosophers, do not easily allow us to infer what God's intentions or desires might be. Maybe there is a creator of the universe and maybe the creator is all powerful. It does not follow that he would want to create a son (who is also him, in some sense) that would become incarnate in Palestine about 2000 years ago and would then be executed in order to atone for the sins of all the other humans that the very same God created. There are other faith traditions that also believe in God but do not accept these specifics and don't think they follow from belief in God. For example, as mentioned earlier, Michael Alter, who wrote a sceptical book about the resurrection, is a believing and practicing Jew. There are, presumably, over a billion Muslims in a similar camp. The point here is that the bare hypothesis of theism does not entail or allow one to infer any of the specifics of Christian belief. Indeed, at least when I think about what an all-powerful being might be interested in, the whole thing seems bizarre and implausible. Why create humanity with the propensity to sin in the first place? Why punish them when they act on this propensity? Why would you think that sacrificing your own son atones for these sins? Why only reveal your son to a group of illiterate peasants in the Middle East before there were reliable video and audio recordings that would allow his message to be shared with greater fidelity?

Furthermore, if you are willing to accept supernatural or other historically unusual explanatory forces, then why would the resurrection hypothesis be the first and most obvious stopping point in the explanatory quest? Perhaps Jesus was an agent of Satan and the whole story was cooked up to deceive us? Perhaps Jesus is a rival God to the one true God? Perhaps superior space aliens came to the planet earth around the time of Jesus and managed to reanimate his corpse after his death? They then sat back and observed what happened as part of some sociological experiment. These speculations may sound implausible and odd, but it's not clear why they are less plausible than the resurrection hypothesis. And once you are willing to be unconstrained by naturalistic explanations, all bets are off. I think Dale Allison captures the basic conundrum well in the following passage invoking the space alien explanation:


Someone could, if so inclined, conjecture that aliens, ever since discovering our planet... have followed our play of hopes and fears with great curiosity. Intrigued by human psychology, and learning... of an extraordinary character, Jesus of Nazareth, and of the religious expectations surrounding him... they reanimated his corpse or transplanted his brain into a new and better body (which would explain why Mary Magdalene and others had trouble recognizing him). Then they convinced him that he had conquered death by divine intervention, set him before his disciples and sat back to take notes... While there is not a sliver of evidence for such a fantastic state of affairs, it cannot be dismissed as inconceivable... It also raises the question, which must be faced in all seriousness, of how Christians have come to the view that invoking space aliens beggars belief whereas crediting God with a resurrection is sensible. 
(Allison 2005, 340)

 

The bottom line here is that the resurrection hypothesis should not be given a free pass in this debate. It may seem plausible and simple, but that's only because it has become normalised over the course of history. It actually contains a number of hidden assumptions and auxiliary hypotheses that are not particularly plausible.

If we take this point, and combine it with the slight background commitment to scepticism and the relative plausibility of naturalistic explanations, it's not hard to understand why someone like me might be sceptical of the resurrection.


Further Reading 

 
Sceptical 

Christian/Non-Sceptical