Friday, February 26, 2021

88 - The Ethics of Social Credit Systems

Should we use technology to surveil, rate and punish/reward all citizens in a state? Do we do it anyway? In this episode I discuss these questions with Wessel Reijers, focusing in particular on the lessons we can learn from the Chinese Social Credit System. Wessel is a postdoctoral Research Associate at the European University Institute, working in the ERC project “BlockchainGov”, which looks into the legal and ethical impacts of distributed governance. His research focuses on the philosophy and ethics of technology, notably on the development of a critical hermeneutical approach to technology and the investigation of the role of emerging technologies in the shaping of citizenship in the 21st century. He completed his PhD at Dublin City University with a dissertation entitled “Practising Narrative Virtue Ethics of Technology in Research and Innovation”. In addition to a range of peer-reviewed articles, he recently published the book Narrative and Technology Ethics with Palgrave, which he co-authored with Mark Coeckelbergh.

You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).


Show Notes

Topics discussed in this episode include
  • The Origins of the Chinese Social Credit System
  • Historical Parallels to the System
  • Social Credit Systems in Western Cultures
  • Is China exceptional when it comes to the use of these systems?
  • The impact of social credit systems on human values such as freedom and authenticity
  • How the social credit system is reshaping citizenship
  • The possible futures of social credit systems

Relevant Links

Thursday, February 25, 2021

The Technological Mediation of Morality: Explained

3D Ultrasound - Does this change our moral perception of the unborn child?

People have been talking about the death of privacy for at least three decades. The rise of the internet, mass surveillance and oversharing via social media have all been seen as knells summoning it to the grave. In our everyday behaviours, in our choices to use platforms that engage in routine and indiscriminate digital surveillance, we supposedly reveal a preference for digital convenience and social interaction that indicates a willingness to sacrifice our privacy. Despite this, privacy advocates claim that privacy has never been more alive. Indeed, they argue that it is precisely because privacy is under threat, and because we are forced to make compromises with respect to privacy in our day-to-day lives, that we should care about it more than we did before.

This is just one example of how technology seems to have an effect on our moral values. On the one hand, the creation of new technologies — in this case the internet and smart devices — has created new opportunities for tracking, surveillance and spying. This puts privacy in the vice. On the other hand, the increased pressure on privacy activates it in our minds and makes us worry about it more than ever. We respond by calling for new social norms with respect to the use of surveillant technologies, as well as legal reforms and protections.

Philosophers of technology sometimes explain this phenomenon by using the concept of technological mediation. The idea, in brief, is that technology mediates our relationship to the world: it changes how we perceive ourselves, our actions and our relationship to the world. This, in turn, has an effect on our moral perceptions and actions. Technology is never really value neutral: it comes loaded with moral significance and meaning. But its value-ladenness is not something beyond our control. All people involved in the design and use of a technology have some say in the moral significance of that technology.

In this article, I want to explain this concept of technological mediation and how it affects our moral reasoning. I’ll do so in three parts. First, I will briefly explain Don Ihde’s classic theory of human technology relations. Second, I will outline Peter Paul Verbeek’s key insights into the technological mediation of morality. Third, I will consider the practical significance of the technological mediation of morality.

This may all sound a little dry and theoretical, but I promise it is interesting and may change how you think about technology.

1. Don Ihde’s Four Types of Human-Technology Relationships

“Mediation” is one of those fancy academic terms that can be obscure to outsiders. If my experience is anything to go by, academics love to throw the word into some otherwise banal sentence to make their thoughts sound more sophisticated than they really are. So, for example, you will commonly hear people at conferences say something like “Facebook mediates our perception of social reality”, to which others will nod their heads in agreement as though that says something informative or significant.

It doesn’t have to be so obscure or fancy-schmancy. The etymology of the term ‘mediate’ lies in the Latin verb for ‘to be placed in the middle of’ and that’s a pretty good first approximation of what academics mean when they talk about technological mediation. They mean that technological artifacts place a layer of some sort between humans and the world around them — that the technology stands between us and the world. This then has an effect on how we perceive the world. Consider a trivial example: my eyeglasses. I wear them every day. They mediate my perception of reality: they bend light rays in such a way that I can see more clearly. Without the mediation provided by my glasses, I would have much poorer eyesight.

But mediation is a little more complex than that. In his now classic work on the philosophy of technology, Technology and the Lifeworld, Don Ihde outlines four kinds of relationships that humans can have with technologies and the world around them. They are:

Embodiment Relations: These arise when humans use technology as an extension of their own bodies/perceptual faculties. My use of eyeglasses and the blind person’s use of a cane are examples of embodiment relations. They are a particular kind of mediation where the technology is an extended part of who we are. Ihde schematises embodiment relations in the following way:
(Humans — Technology) → World
Hermeneutic Relations: These arise when humans use technology to reinterpret or reframe their perception of the world, perhaps by creating new concepts or categories to understand what they are seeing, or perhaps by appropriating old ones to make sense of the new perception. A classic example is the use of processed images in science, e.g. MRI scans or astronomical photography using non-visible electromagnetic radiation. In this type of mediation, the technologies are representing the world to us and we see them as joined to this external world, not to ourselves. This can be schematised as follows:
Humans → (Technology — World)
Alterity Relations: These arise when humans have to relate directly to a technological artifact. In other words, the artifact doesn’t represent or reinterpret the external reality for us; it is, rather, the external reality with which we must interact. The rest of the world fades into the background. The kinds of relationships we have with robots or ATMs are classic examples. In some ways, alterity relations are the antithesis of mediation insofar as the technologies in this instance do not mediate between us and the world. They are, in a sense, the world. Nevertheless, this can still be viewed as a logical extension of mediation. Furthermore, how we perceive and understand technologies in alterity relations can affect other perceptions we might have of the world around us. I’ll get back to this later. Alterity relations can be schematised in the following way:
Humans → Technology(World)
Background Relations: These arise when technologies fade into the background and are not seen as something separate from the world. Rather they are just part of the background canvas upon which we experience reality. Artificial lighting and heating are sometimes given as examples of this kind of relation. This may represent the logical extreme of mediation when the technology is no longer seen to mediate our interaction with reality but is, simply, part of the stage on which external reality presents itself. These relations can be schematised as follows:
Humans (Technology/World)


People have built on Ihde’s framework over the years, proposing different kinds of human-technology relations (e.g. augmentation, immersion). But I still think his original framework is probably the most useful. One of the key ideas to be drawn from it is that how technologies are perceived and understood, and how they mediate our relationships with the world, is not something that is stable or fixed. It depends a lot on our cultural context, experiences and uses of the technology. What might be part of the background for us (e.g. electrical lighting) might be part of the foreground for others (e.g. those coming from pre-electrical societies). And what might have been part of the background for us in one context (e.g. air conditioning) might be something we have to relate to directly in another (e.g. when the system breaks down and needs to be repaired). This instability is important when it comes to understanding how technology mediates morality.

2. Verbeek’s Theory of Moral Mediation

Working from a similar perspective to that of Ihde, Peter Paul Verbeek has developed a theory for understanding how technology mediates our moral perception and engagement with the world. In other words, Verbeek claims that technology not only changes how we relate to the world in a descriptive or non-normative sense, but also how we relate to it in a moral sense. It presents us with new moral choices and moral frameworks for action.

Here’s how he characterises the idea himself:

[The technological mediation approach] studies technologies as mediators between humans and reality. The central idea is that technologies-in-use help to establish relations between human beings and their environment. In these relations, technologies are not merely silent ‘intermediaries’ but active ‘mediators’ that help to constitute the entities that have a relationship ‘through’ the technology… …By organizing relations between humans and world, technologies play an active, though not a final, role in morality. Technologies are morally charged, so to speak. They embody a material form of morality, and when used, the coupling of this ‘material morality’ and human moral agency results in a ‘composite’ moral agency. 
(Verbeek 2013, pp 77-78)


What does all that mean? I think we can break it down and make it more straightforward by focusing on three key insights from Verbeek’s work.

The first two insights relate to the effect that technology has on morality. Verbeek claims that technology mediates our moral relationship with the world in two distinctive ways. First, it pragmatically mediates our relationship with the world. This means that it changes the space of options and actions available to us and this, in turn, has moral significance. Consider two ways in which this might happen:

Technology makes options available that once were unavailable - For example the creation of projectile weapons, missiles and ultimately nuclear weapons made killing at distance and at a massive scale possible. Similarly, the creation of the cell phone/mobile phone made it possible to connect with anyone at anytime in (virtually) any place.
Technology can close off options that were once available - For example speed bumps on the road can prevent us from driving at high speeds. Alcohol interlocks in cars can prevent us from driving while drunk. Internet blocking devices can prevent us from surfing the web during work hours.


The net effect of this is that technology can thrust new moral choices upon us or, alternatively, take them away from us. We have to engage our existing moral values and normative theories to decide what we ought to do in these new circumstances. Is killing at a distance less bad than killing up close and personal? Is it okay to call someone at any time and in any place, or should we limit our connectivity in some way?

In addition to this, technology also hermeneutically mediates our relationship with the world. That is to say, it changes how we perceive and understand aspects of the real world (e.g. the concepts and analogies we apply to it) and this can have an impact on our moral decision-making. This new mode of moral seeing is in addition to any choices that the technology might add or take away.

Verbeek has a go-to example of hermeneutic mediation: obstetric ultrasound. This is a technology that allows people to see the foetus in utero at various stages of development. According to Verbeek, ultrasound images are not presented to us in a neutral way. On the contrary, they encourage us to see the foetus as an independent entity, separate from its mother (though present inside her), and as a possible patient for certain treatments or interventions (most obviously, abortion). Here’s how he puts it:

This technology is not merely a neutral interface between expecting parents and their unborn child: it helps to constitute what this child is for its parents and what the parents are in relation to their child. By revealing the unborn in terms of variables that mark its health condition, like the fold in the nape of the neck of the fetus, ultrasound ‘translates’ the unborn child into a possible patient, congenital diseases into preventable forms of suffering (provided that abortion is an available option) and expecting a child into choosing for a child, also after the conception. 
(Verbeek 2013, pp 77-78)


Another example of this hermeneutic mediation might be the combination of the cameraphone and social media. By having a device on us at all times that allows for the recording of our everyday experiences, we are encouraged to see those experiences in a new way. They are not things to be enjoyed in and of themselves. They are now to be seen as opportunities for sharing with others, bragging, self-promotion and monetisation. We suddenly focus on the instrumental value of our experiences, not their intrinsic value.

This leads, in turn, to Verbeek’s third key insight. You may have heard the famous phrase that all technologies/artifacts have a politics (an ideology or set of values embedded within them). The classic illustration of this comes from Langdon Winner’s observation about the bridges over the parkways on Long Island: they were not high enough to accommodate buses. Winner pointed out that, because poor (predominantly black) people were less likely to own cars and so relied on buses, they were effectively excluded from the beaches on Long Island. In Winner’s analysis, this was a deliberate design decision by Robert Moses, the planner behind the road network, who let his values shape their construction.

Verbeek agrees with this basic picture but finesses it somewhat. Technologies are indeed value-laden (“dripping with morality” in one memorable phrase) but their values are not entirely shaped by their designers. Oftentimes technologies are interpreted and used in ways that designers do not anticipate or intend. For example, I doubt that Facebook intended for their livestreaming feature to be used by rampage shooters on mass killing sprees. They probably intended it to be used for more benign purposes. Nevertheless, the technology made this possible. According to Verbeek, while designers have a significant part to play in the mediating effect of their technologies, users and regulators also have a role to play. Users and regulators can appropriate technology for new ends, encourage specific uses of it, and develop new interpretations of its moral significance.

This is both an uplifting and dispiriting thought.

3. Practical Significance of Moral Mediation

What does all this mean in practice? There are a number of key lessons here, some of which have been implicit in the discussion so far but are worth specifying.

First, as should be obvious, technological mediation gives the lie to the neutrality of technology. Technology is not some value-neutral tool over which we have complete moral autonomy. It comes with certain values and choices embedded in its design. A speed bump encourages us to slow down: it is biased in favour of slower driving. A cameraphone with internet connectivity and social media encourages the sharing and archiving of everyday life. You still have some choices as to whether you use technologies for their intended purpose, but you often have to fight against the in-built biases.

Second, the fact that technologies mediate our moral perceptions and actions is important when it comes to the risk assessment of new technologies. Oftentimes, technological risk assessments focus heavily on what Verbeek and others call the ‘hard’ impacts of technology: the health risks, the possibility of environmental damage, the safety concerns and so on. These hard impact assessments use existing moral frameworks and evaluative standards (e.g. energy efficiency, radiation exposure) to determine whether the technology falls within acceptable parameters. This overlooks the potential ‘soft’ impacts, in particular the impact on social values and norms. What if the rise of the smartphone undermines the value of privacy? Is that not something we should factor into our risk assessment? Of course it is very hard, in practice, to assess these soft impacts (for reasons I won’t get into here) but they are worth considering nonetheless.

Third, and leading on from this, in order to meaningfully assess the soft impacts we need to know whether there are particular patterns to moral mediation. In other words, will it be easy to predict the future course of moral mediation or is it simply chaotic and unpredictable? We know, in general, that technology tends to add moral choices and dilemmas to our lives; it tends not to take them away. Indeed, the examples I gave earlier of technologies that eliminate options are all examples of technologies designed to take away an option that an earlier technology made possible. The alcohol interlock takes away the option of driving while drunk, but we would not have had that option if the automobile had not been invented in the first place. Furthermore, the creation of the interlock adds another choice: the choice of whether to use it or not. So it seems fair to say that the net effect of technological innovation is to add moral complexity to our lives, but can we say anything more specific and predictively useful? I’m not sure, but developing detailed case studies of technological mediation and extrapolating lessons from them looks like a good start.

Fourth, and more pessimistically, as Kudina and Verbeek (2019) have argued, technological mediation adds another dimension to how we think about the Collingridge Dilemma. This dilemma is something that is widely discussed in the world of responsible innovation and design. The classic version of the Collingridge Dilemma works like this:

Classic Collingridge Dilemma: When technology is at an early stage of development we have the power to control it, but we don’t know what its social impacts will be. When technology is at a late stage of development, we know what its social impacts are, but we lose the power to control it.


In short, once a technology proliferates in society, it will be too late to do anything about its social impacts. As Kudina and Verbeek argue, there is a moral variation on the dilemma that arises from our awareness of the technological mediation of morality.

Moral Collingridge Dilemma: “[W]hen we develop technologies on the basis of specific value frameworks, we do not know their social implications yet, but once we know these implications, the technologies might have already changed the value frameworks to evaluate these implications.” (Kudina and Verbeek 2019, 293)


This moral variation on the dilemma is interesting to me because it reminds me of what the philosopher L.A. Paul has said about transformative experiences. Briefly, Paul has argued that some life choices cannot be rationally evaluated in advance because they transform who we are. Her main example of this is the decision to have children. To know whether having children is a good choice for you, you need to actually have them and acquire the experiential knowledge of what it is like to have a child. No amount of advance reading or consultation with friends will give you this. (Having now had a child, I think I disagree with Paul, but let’s set that disagreement to the side for now.)

One way of understanding Paul’s argument is that undergoing a transformative experience has an effect on the evaluative frameworks you use to rationally assess different choices. Anecdotally, it does seem to me like having a child changes how you value different aspects of your life. So the metrics you use to evaluate the choice of having a child will be different after you have had the child. What Kudina and Verbeek are suggesting is that something similar is true when it comes to the development of technologies. The very act of developing and using the technology might change how we evaluate its merits. We could, in short, undergo a kind of moral transformation that makes it nearly impossible to rationally assess a technology in advance.

That’s a pessimistic thought on which to end, and I merely offer it as a suggestion. I’m not sure that any technologies have resulted in truly transformative moral changes. Admittedly, the development of the internet does seem to have affected how much we value communication and connectivity. So much so that many people now demand internet connectivity as something close to a basic human right. But I’m not sure if that is a transformative moral change, since we always valued those things to some extent.

It’s something to ruminate on if nothing else.

Thursday, February 11, 2021

Does Parenting Style Shape Our Moral Culture?

A moral culture is the set of beliefs and practices in a society that specifies the values and norms that (people believe) ought to be adopted by the people living in that society. There are many different moral cultures. Psychologists and sociologists frequently talk, for example, about honour-based moral cultures. These are cultures in which the moral worth of each individual is not equal. It depends on the honour of each individual. Consequently, gaining and protecting one’s honour is the focal point of the moral beliefs and practices in such a culture. Honour-based cultures are sometimes contrasted with dignity-based moral cultures, which essentially hold that all people are of equal moral worth and this equality must be respected by the society’s moral beliefs and practices.

These are just illustrative examples. The concept of a moral culture is broader than that. Since a moral culture is, in essence, just a particular constellation of moral beliefs and practices, usually held together by some common underlying moral theory or paradigm, we could also talk about individualist, communitarian, and egalitarian moral cultures.

As you may know, I’ve recently been writing quite a bit about the idea of moral change and moral revolution. It is an obvious historical fact that people’s moral beliefs and practices change over time. The more dramatic moral changes — the revolutions — often involve changes in the underlying moral culture. For instance, the shift from honour-based morality to dignity-based morality is often thought to be a significant one. But here’s an interesting question: does parenting style make a difference to moral culture? And can shifts in parenting style precipitate or cause moral revolutions?

I recently came across a paper that addresses these questions. It was by Markus Christen, Darcia Narvaez and Eveline Gutzwiller-Helfenfinger (hereafter ‘Christen et al’) and it was called ‘Comparing and Integrating Biological and Cultural Moral Progress’. The paper looks at a number of different issues in the philosophy, psychology and history of morality. I’m not going to consider them all. But the comments it made about parenting style struck me as being quite important, particularly in light of my own recent reflections on the appropriate ethical style of parenting. So I want to review and critically analyse what they have to say.

I’ll do this in three parts. First, I’ll explain why parenting style might be an important shaper of moral culture. Second, I’ll consider a contrast between ancient and modern parenting styles and how this might be impacting on our moral culture. Third, I’ll consider some problems with the claim that modern parenting style is having a negative impact on our moral culture.

1. Why Does Parenting Style Matter to Moral Culture?

In an earlier article, I set out to explain the ‘mechanics’ of moral change. In that article, I made a few observations about evolution and brain development that seem to bear repeating here.

Very roughly, we would expect evolution to play some significant role in shaping the kinds of rules that we adopt in our lives. After all, it does this for other animals. For example, sea-dwelling salmon follow reasonably strict rules when it comes to their own reproduction: swimming back to the rivers in which they were spawned to continue the cycle of life.

But one of the interesting things about human evolution is how relatively flexible our behavioural rules actually are. Although there are some things that humans have to do in order to survive and thrive, they are surprisingly few in number. They certainly can’t account for all the things humans think they have to do. Indeed, the rules and norms humans follow in their lives — including oddities such as not eating pork and remaining voluntarily celibate for the purposes of religious devotion — are diverse, culturally contingent, and not always obviously linked back to evolutionary pressures. How did this diversity arise?

Part of the answer lies in the evolution of the human brain. Instead of coming into the world with a complete set of pre-programmed behavioural rules and fixed action patterns, humans come with a flexible learning machine — the brain — that allows them to create and learn behavioural rules in response to cultural, geographical and other contingent historical factors. Evolving that brain came at a cost. Given the constraints of the human birth canal, human babies cannot be born in a mature and capable state. They have to be born in a helpless and immature state. This makes them highly dependent on their parents, particularly their mothers, as well as their wider families and caregiving networks for nurturance and guidance in their early years.

This results in something of a tension when it comes to human moral development. On the one hand we have a flexible capacity to learn lots of different moral rules, but on the other hand we have evolved to attach to and be dependent on our parents and other caregivers in our early years. This means that the style of parenting to which we are exposed can have an important nudging effect on the kinds of moral rules we are inclined to follow later in life.

This is where the influence of parenting style on moral culture can be observed. Learning styles or behavioural rules are not necessarily equivalent to moral cultures, but they are a substantial part of them. Parents and caregivers provide opportunities for and place constraints on their children. These opportunities and constraints either explicitly or implicitly teach children what to value and what to do. This shapes their future moral beliefs and practices, predisposing them to favour certain forms of moral culture.

This can also have an effect on how susceptible people are to moral change later in life. Intuitively, it would seem that the stricter and more conservative one’s upbringing, the less open to moral change one is likely to be in the future. But this is just a rough guess. One’s disposition to moral change is going to be influenced by more than just parenting style. It will also be influenced by genetic factors as well as other wider social factors. For instance, the moral beliefs and practices that prevail in a time of war or famine might be very different from those that prevail in a time of peace and plenty. This adaptation is not necessarily linked to parenting style.

2. Ancient versus Modern Parenting Styles

You will notice that in the previous section I equivocated somewhat between parents and wider caregiving communities in some of my comments. That equivocation was deliberate but it needs to be cleared up now before it leads to unnecessary confusion.

Nowadays we think of parents (one or two adult individuals) as the primary caregivers for children. But, of course, it is rare for one or two individuals to be solely responsible for the care of children. Children are raised in communities, which consist of extended family members (aunts, uncles, grandparents), peers (friends, neighbours) and social institutions (schools, churches, etc.). It is these wider caregiving communities, and not just biological or adoptive parents, that raise children.

This prompts a reformulation of the question I asked at the outset. Instead of asking whether parenting style makes a difference to a moral culture, it is probably more correct to ask whether caregiving style (where this includes what parents and wider caregiving communities do) makes a difference to a moral culture.

This reformulation is important when it comes to understanding the claims made by Christen et al in their paper. Although they make comments about parenting style and, specifically, the role of mothers, in shaping moral culture, it’s pretty clear that they are focused on caregiving as a whole, and not just on what mothers and fathers might do.

So what argument do they make about caregiving styles? They draw a contrast between our ancestral evolved form of caregiving and modern caregiving. Like many psychologists and anthropologists, they assume that humans evolved in small hunter-gatherer bands. Some such bands still exist today and there are ethnographic records of such bands dating back a few centuries. Looking at such hunter-gatherer bands, a particular style of caregiving can be observed. Christen et al argue that this caregiving style is the original, evolved form of caregiving for human beings.

What are the distinctive features of this ancient caregiving style? In her previous work, Darcia Narvaez (one of the co-authors on the Christen et al paper) has enumerated its main features. Four are particularly important:

Affectionate Touch - Children are kept in close (skin-to-skin) contact with their mothers and are breastfed regularly, often up to the age of four.
Responsivity - Parents are available to respond to their children when they are in distress and regularly do so.
Free Play - Children are given lots of time to play on their own and with other children in a relatively free and open form, often including rough-and-tumble play.
Alloparenting and Social Support - Children are not just cared for by the parents or mothers but by wider social networks within the hunter-gatherer band.


These features are found across most hunter-gatherer bands and, according to Narvaez, they characterise the evolved developmental niche (or EDN) for human beings. In other words, it is to this caregiving style that human development, particularly brain development, has been adapted. Narvaez’s work focuses a lot on the role of mothers and maternal touch within this EDN, but, as can be seen from the list of features just given, this style of caregiving is about more than just mothers. It’s also about the opportunities for free play and social interaction that are given to children.

This ancestral and evolved form of caregiving is contrasted with the modern style of caregiving, particularly the one that has emerged in the USA and that can also be found, to perhaps a lesser extent, in other developed countries. Having read through a few papers by Narvaez on caregiving styles, I’m still not entirely sure what the key features of the modern style of caregiving are, but it seems that they are best understood as the opposite of the evolved style. So, in other words, modern parenting seems to be characterised by less affectionate touch (less close physical bonding, with mothers in particular), less parental responsivity (children left to cry or left in daycare), less free play and a more isolated parenting style (single or dual parents do the majority of caregiving with some distant institutional support). There is also a greater use of punishment and coercion in this form of parenting to ensure that children adopt certain behavioural norms. This seems to be absent from the evolved caregiving style.

What effect does all this have on moral culture? The argument from Christen et al (and supported by Narvaez’s empirical work) is that it is having a noticeable, and arguably negative, effect. They claim that the ancestral caregiving style supports a prosocial, ‘engagement ethics’. Children are taught to share and care for other members of their groups. They are taught to have empathy for others; to see themselves as members of supportive communities and not as isolated individuals. They often then look on resources as shared communal property, not something that belongs only to certain people. Contrariwise, the modern caregiving style supports a more isolationist, ‘self-protection’ ethics. Children are taught to see the outside world, including other people, as a source of potential threats to their existence. They are taught that resources are subject to property rights (some stuff is ‘mine’ and other stuff is not) and are not communal property.

There’s more, but that’s the gist of the thesis: the contrasts in caregiving style support very different moral cultures. And one of them, according to Christen et al, is ‘aberrant’ and contrary to human flourishing. No prizes for guessing which one.

Christen et al don’t get into this in their paper but it struck me that what they argue lends support to the thesis developed by Jason Manning and Bradley Campbell in their work on ‘victimhood culture’. Very roughly, Manning and Campbell argue that we (in the West, specifically the USA) are undergoing a shift in our underlying moral culture. We have previously shifted from an honour-based culture to a dignity-based culture. The key contrast between those cultures had to do with how we perceived the moral worth of the individual and the rights and responsibilities that flowed from this perception. In an honour-based culture, worth is something you must gain and maintain: if it is under threat, you have the right to protect your own honour. In a dignity-based culture, everyone has equal moral worth and the institutions of power respect and protect this. Individuals are then free to live their lives as they see fit, within some moral limits involving respect for others. In a victimhood culture, moral worth is, once more, unstable and under threat (everyone is a potential or actual victim of such a threat). In this case, moral worth is linked to identity and authenticity. Unlike in an honour-based culture, however, being a victim in this culture actually adds to the respect you are owed. Furthermore, you don’t protect yourself from threats; you look, instead, to authorities (parents, schools, states) to do so. Caregiving style, according to Manning and Campbell, has a role to play in shaping this culture, by setting a certain conception of self-worth and highlighting threats and risks. I think you can see how the protectionist style of parenting could support this.

3. Some Critical Reflections

A large portion of this argument rings true to me. I certainly think that there are aspects of modern parenting, particularly of the helicopter style, that support a self-protectionist ethics. As I have noted before, many parents in my extended peer group (middle-class, college-educated, living in economically developed countries) are highly protective, competitive and interventionist when it comes to their children. They shield them from threats and try to optimise their education and health, while also maintaining full-time careers themselves (careers that often mean they are separated from their children for large portions of the day/week). My sense is that this style of parenting induces a lot of anxiety among both parents and children.

This is not to condemn those parents or to suggest that I am immune from these practices myself. I’m not. It’s just what I see in my peer group. This chimes with what Christen et al say about modern parenting. Furthermore, and more significantly, Darcia Narvaez has, in her empirical work, amassed a reasonable amount of evidence to suggest that we can trace the effects of this parenting style through a child’s cognitive and moral development. I recommend reading it and reviewing what she and her collaborators have to say.

Still, I have some worries about the thesis. First, I worry about the over-moralisation of caregiving styles. As noted, it’s very clear from the way they present it that Christen et al think the modern caregiving style is morally defective or inferior when compared to the evolved caregiving style. This argument has a whiff of the naturalistic fallacy to me: because we evolved to develop in that caregiving niche, it is assumed that that niche must be the one that is optimal, morally speaking, for us. That may be true, and there are ways to make this claim more plausible and remove the whiff of naturalistic fallacy from it, but there are also reasons to think that modern parenting might be a morally appropriate response to changes in social and technological development (Christen et al allude to this themselves).

The modern world is, after all, orders of magnitude more socially and technically complex than what you find in small hunter-gatherer bands. This means that there are more opportunities for people in the modern world (more things to do, people to interact with, experiences to have, etc.) but it also brings increased threats and risks to people’s welfare (threats from other people, from the choices they might make, and from the technological world that we now inhabit). Being more protective in such a world might be appropriate. Furthermore, I think you could argue that modern parenting represents a reasonable tradeoff between different values and interests. Parents value having rewarding careers and families; children need to be provided for with respect to their education and future. Given these values and interests, more parental investment in work, more reliance on daycare and more separation from children may be morally preferable. At the very least, if it is morally sub-optimal, it’s not something that parents themselves can easily correct without institutional and legal support (more paid parental leave, cheaper property and education costs and so on).

To be clear, it’s not that I am a huge fan of modern caregiving style. I’m not. I’ve written previously that I think parents can be too protective and too invested in trying to control their children’s development. But I don’t think we can morally condemn it all that easily.

This brings me to a second critical point. I am somewhat sceptical that we can easily delineate between modern and evolved caregiving styles. The presentation given above, and in Christen et al’s paper, draws a sharp contrast between the two styles. We adopt the modern style; others adopt the evolved style. But I imagine, in practice, that the lines are more blurry and the contrast less obvious. It varies from culture to culture, and locale to locale. Speaking from my own experience of parenting, I find that many of the features of the evolved style of caregiving are present, actively encouraged and supported (perhaps to an excessive degree). For example, breastfeeding and affectionate touch have been both advocated for and normalised for my daughter. Furthermore, we have lots of social support from wider family when it comes to caring for her. The COVID-19 pandemic has unfortunately impacted on this, but it has its advantages too — the main one being that both her parents have been far more involved in her day-to-day care than might otherwise have been the case.

I may just be lucky but the point here is not that my anecdotal experience represents the norm but that caregiving styles are probably not so black-and-white. If that’s true, the effects on moral culture may be more subtle and nuanced than we would expect.

Friday, January 29, 2021

A Taxonomy of Possible Moral Changes

I’ve recently been studying the history of moral change and moral revolution. The purpose of this has been to get a handle on the mechanisms of moral change over time and to use this to predict and plan for future moral changes. I’ve written a lot of half-baked thoughts about this over the past 18 months or so. In this article, I want to collect some of those thoughts together and present a taxonomy of the types of moral change that can occur in human societies.

This will, necessarily, be an abstract discussion. I’m not going to be focusing on specific examples of moral change over time; I’m going to be focusing on the high level forms of moral change instead. Nevertheless, I will provide some concrete examples as I go along. These examples are not always intended to be historically accurate or even plausible. They are just intended to illustrate a relevant concept or idea.

1. The Elements of a Moral System

I’m something of a traditionalist when it comes to understanding human morality. I agree with the majority of moral philosophers in stipulating that there are two main branches to any moral system: an axiological branch and a deontological branch.

The axiological branch is concerned with values. What is good? What is bad? What is important? What is worth promoting and celebrating? And so on. There is both a positive (the good) and negative (the bad) dimension to value. Value comes in degrees: things can be more or less good, or more or less bad. This means that we often try to rank the relative value of different things. That said, value propositions (statements claiming that something or other is good or bad) are essentially binary in nature: something is either on the good side of the ledger or on the bad side. There may be some strictly neutral things: things that are neither good nor bad, but I suspect true neutrality is rare.

Values can attach to people, events and states of affairs. For example, we can say that pleasure (a subjective state) is good; one person helping another (an event) is good; and that Martin Luther King (a person) was good. Values can also be either intrinsic or instrumental in form. Intrinsic values are valuable in and of themselves (irrespective of their consequences or extrinsic properties). Instrumental values are things that are valuable because of their consequences or extrinsic properties. Pleasure is said to be the quintessential example of something that is intrinsically good; pain is the quintessential example of something that is intrinsically bad. But sometimes pleasure can be instrumentally bad (e.g. where it leads to greater suffering in the long run) and pain can be instrumentally good (e.g. where it leads to greater pleasure in the future). Many of the things that we value, we value for both intrinsic and instrumental reasons. For example, loving intimate relationships are often thought to be intrinsically valuable, but they are also alleged to have a number of instrumental benefits (financial security, personal health and well-being etc).

The deontological branch of morality is concerned with the rightness or wrongness of human action. The terminology may be somewhat confusing. The deontological branch of a moral system is not synonymous with deontology as a general normative theory. Deontology as a normative theory is associated with the claim that we ought to do certain things irrespective of their consequences (i.e. that we have relatively fixed duties). It is usually contrasted with a consequentialist normative theory. The deontological branch of a moral system is more general and less prescriptive than that. It is concerned with answering questions such as: What is permissible? What is forbidden? What is obligatory? And so on.

Unlike value propositions, deontic propositions come in more than two flavours. Indeed, deontic logic is highly complex and multivariate. For what it is worth, I think there are essentially four flavours of deontic proposition:

  • X is forbidden (i.e. you ought not to do X)
  • X is permissible (i.e. you can do X but you are not obliged to do so)
  • X is obligatory (i.e. you ought to do X)
  • X is supererogatory (i.e. X is a really good thing and is above and beyond the call of duty)

I know there are others who think there are additional forms of deontic proposition (e.g. that something can be omissible and not just permissible), but this four-flavour view seems to cover most of the relevant ground.

Unlike values, deontic properties attach to actions by people, and not to people themselves nor to general events or states of affairs. We can say that Martin Luther King was a good person, but we cannot say that he was a forbidden or obligatory person. That wouldn’t make sense. We can, however, say that his leading the march on Washington was permissible (perhaps even supererogatory). The deontological branch of morality is often complex and messy because obligations can sometimes conflict. Say you promised two people that you would meet them at the same time on the same day, but in different locations. Technically, we might say that you are obliged to meet them both, but practically speaking it is impossible for you to satisfy both obligations. How can we resolve such conflicts? Is this simply a dilemma that cannot be resolved? Much ink has been spilled over these matters but it would be a distraction to get into them now. The important point is that, like values, deontic classifications often need to be ranked relative to one another, particularly when it comes to obligations. We need to know if one obligation ranks higher than another and so on.

I believe that the axiological and deontological branches are closely related to one another, but in an asymmetrical way. I believe that our values play a fundamental role in shaping what we think is right or wrong. Very roughly, I believe that we are permitted and perhaps obliged to perform actions that produce or honour or celebrate good people, events and states of affairs; we are forbidden from performing actions that produce, honour or celebrate bad people, events and states of affairs. This understanding of the relationship between axiology and deontology may, however, be controversial, at least from a causal perspective. It’s possible, given what we know about human psychology, that what we are permitted (and able) to do has an impact on what we think is valuable. Indeed, I suspect changes in social behaviour often feed back into changes in societal values. So it is perhaps best to think of the relationship I just outlined as a logical one, not a sociological or behavioural one.

2. The Types of Moral Change

In any event, given that there are these two branches to morality, it follows that there are two main types of moral change: axiological change (i.e. changes in values) and deontological change (i.e. changes in what we think is right and wrong). Let’s consider, in slightly more detail, these two possible forms of axiological and deontological change.

Axiological change is the most straightforward, at least from a conceptual perspective. As mentioned earlier, values attach to persons, events and states of affairs. Most human societies have identified a class of things that they think are good and a class of things they think are bad. For example, pleasure, education, friendship, loyalty, democracy, freedom and responsibility (etc) are all widely classed as good; pain, ignorance, isolation, treachery, autocracy, slavery and recklessness (etc) are commonly classed as bad. (Yes, I know, there are lots of nuances and variations here.) Within these respective classes of good and bad things, we often try to rank and prioritise the respective items. People might decide that freedom is more valuable than pleasure (or vice versa). Sometimes such rankings may seem unfeasible or illogical. People might just throw their hands up and say all these things are equally important or not capable of being ranked relative to one another. That’s fine, but I see strong value pluralism of that sort as a kind of ranking in itself (a neutral or flat ranking). Furthermore, I suspect that most people, in practice, rank their values even if this ranking is only implicit and only applies for certain practical purposes. I suspect that strong value pluralism is the preserve of philosophers alone. This is important insofar as the study of moral change, as I conceive it, is concerned with how social moral beliefs and practices change over time and not with changes in the ideal form of morality that is commonly studied by philosophers.

Anyway, with all this in mind, it seems to me that there are three basic forms of axiological change:

Axiological Additions: New people, events or states of affairs get added to the set of values and given a classification and ranking. I suspect this arises primarily from social and technical innovation. For example, when social media was invented, people started to evaluate it and so started to classify it as either a good or bad thing and assign it some sort of ranking.


Axiological Reprioritisations: There is a change in how people rank the relative value of something within the set of good or bad things. For example, with the rise of the knowledge economy, literacy and numeracy, which were always valued to some extent, became more valuable and more important than they were in an agricultural or manufacturing economy. People may also, of course, decide that things we thought were better are worse than, or worth the same as, other things.


Axiological Reclassifications: People switch something from the set of good things to the set of bad things (or vice versa). For example, where once upon a time some people believed that female servility and passivity were good things, many (though sadly not all) people now believe they are bad things. You can probably think of reclassifications as an extreme form of reprioritisation.


You may wonder why I don’t include subtractions among the possible forms of moral change. If something can be added to the set of values can it not also be taken away? I’m not convinced of this. I tend to think that humans exhaustively evaluate all people, events and states of affairs that they encounter. If it exists and people are aware of it, it probably has some value classification and ranking. That said, it’s possible that some evaluations are highly uncertain or unstable or neutral. For example, we might have no stable or agreed upon classification or ranking for new innovations like lab-grown meat or genetically engineered offspring (though there are lots of strong opinions out there).

What about deontological changes? These are trickier to taxonomise. If we grant that there are four basic forms of deontic proposition (forbidden, permissible, obligatory and supererogatory) then it is possible for any existing deontic belief or practice to shift from one of those forms to another. For example, if we currently believe that giving lots of money to charity is supererogatory, we may, in the future, come to believe that it is forbidden, merely permissible or obligatory. Likewise, although most people now believe that it is permissible to eat meat, it is possible (in principle) that we may in the future believe that it is forbidden, obligatory or supererogatory. It may be hard to believe in some of these possible moral changes right now but they are, in principle, possible. Furthermore, radical deontological shifts have happened in human history. As recently as 50 years ago, many people thought homosexual sex (never mind marriage) was forbidden. Nowadays, most people think it is permissible.

Anyway, if we accept that there are these four types of deontic proposition, and that any existing deontic proposition can, in principle, shift to one of the other three types, we can use some simple combinatorics to work out the total number of possible deontological changes. There are 12 of them (3 for each of the 4 types of deontic proposition).
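For what it’s worth, the count can be checked mechanically. The sketch below (in Python, with the status names taken from the four-flavour list above) enumerates every possible reclassification as an ordered pair of distinct statuses:

```python
from itertools import permutations

# The four basic deontic statuses discussed above.
STATUSES = ["forbidden", "permissible", "obligatory", "supererogatory"]

# A deontic reclassification is a shift from one status to a *different* one,
# i.e. an ordered pair of distinct statuses.
reclassifications = list(permutations(STATUSES, 2))

print(len(reclassifications))  # 4 statuses x 3 alternatives each = 12
```

Nothing hangs on the code itself; it just makes vivid that the 12 arises from each of the 4 statuses having 3 alternative destinations.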

But, of course, the possible forms of deontological change don’t end there. New technologies and new social arrangements make new forms of action and interaction possible. These new actions will require some deontic classification. For example, the creation of the internet and social media has made cyberbullying a new possible form of human action. We need to figure out how to classify that action. Is it forbidden in the same way that physical bullying is forbidden? Is it a more or less serious form of wrongdoing? I’ve spent a lot of my academic career looking at the new forms of action made possible by technology and figuring out how they should be classified from a deontic perspective. Consider, for example, my work on virtual sexual assault and robotic rape. It’s possible, of course, that new actions don’t require new deontic rules. They may just be subsumed under an old deontic rule. For example, the deontic proposition stating that you ought not to bully people could be expanded to include cyberbullying (a process some people refer to as semantic deepening, i.e. an existing concept is found to have a broader scope of application). But either way, the new actions require some classification.

Finally, given that some deontic claims have to be ranked relative to one another (e.g. which obligation takes priority in the case of limited time and resources), it is also possible for deontological change to take place via a re-ranking or re-prioritisation of deontic claims. For instance, in a time of global pandemic, the obligation to prevent the spread of disease might take priority over the obligation to maintain one’s social commitments.

In short, even though it is a more complex phenomenon, it seems that there are three main types of deontological change and they line up with the three main types of axiological change:

Deontic Additions: New actions become possible and must be assigned to one of the four types of deontic status: obligatory, permitted, forbidden, supererogatory. For example, cyberbullying is assigned the status of being forbidden.


Deontic Reprioritisations: The relative ranking of different deontic claims is changed. For example, preventing the spread of disease takes priority over our usual social obligations in the time of a global pandemic.


Deontic Reclassifications: An action that was once classified as forbidden/obligatory/permitted/supererogatory is reclassified and assigned one of the other deontic statuses. For example, slavery was once permissible but it is now forbidden. In principle, deontic reclassifications can take 12 different forms.


The diagram below summarises the taxonomy of possible moral changes.
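The taxonomy can also be captured in a small data structure. The sketch below is merely an illustration (the labels mirror the ones used above); the point it makes is the parallelism between the two branches:

```python
# A compact summary of the taxonomy developed above: two branches of
# morality, each admitting the same three types of change.
MORAL_CHANGE_TAXONOMY = {
    "axiological": {
        "additions": "new people/events/states of affairs get evaluated and ranked",
        "reprioritisations": "the relative ranking of existing values changes",
        "reclassifications": "something switches between the good and bad sets",
    },
    "deontological": {
        "additions": "new actions get assigned one of the four deontic statuses",
        "reprioritisations": "the relative ranking of deontic claims changes",
        "reclassifications": "an action shifts to a different deontic status",
    },
}

# Each branch admits exactly the same three types of change.
assert (MORAL_CHANGE_TAXONOMY["axiological"].keys()
        == MORAL_CHANGE_TAXONOMY["deontological"].keys())
```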

I hope this taxonomy of possible moral changes is useful. It may seem a little obvious in retrospect (now that you’ve read through this explanation) but when you are thinking about moral change in the abstract it can seem like an intimidatingly diverse phenomenon. It’s useful to put some limits on its possible forms.

Wednesday, January 27, 2021

The Moral Problem and Nozick's Theory of Value

Gyges of Lydia finds the Ring

The moral problem was first clearly articulated by Glaucon in Plato’s The Republic. It can be summed up with a simple question: Why be moral? If I always do the right thing, will I be rewarded? Are people really motivated to do good? Glaucon was doubtful. He recounted the myth of the Ring of Gyges to support his point. Imagine you were given a magical ring that rendered you invisible. Under the cloak of invisibility would you not — like the shepherd in the myth — do all manner of evil?

Philosophers have struggled with the problem over the years. Some argue that doing the right thing is its own reward. Some argue that we should do the right thing lest we risk the wrath of God. Others pose more abstruse and technical solutions, claiming that doing the right thing is essential if we are to be rationally consistent.

In his book Philosophical Explanations, Robert Nozick proposes a unique answer to the question: he argues that if we don’t do the right thing then we live less valuable lives. To support this he develops an odd theory of what it means to live a valuable life. I find this theory intriguing and I haven’t seen many people discuss it.* So, in what follows, I want to summarise and evaluate its key features. I won’t be overly critical — indeed, I find Nozick’s defence of it to be almost too sketchy and programmatic to enable much criticism — but I will try to give a reasonable account of its main aspects. This requires me to condense 50 pages of Nozick’s text into a short article. I’ll try my best.

I will proceed as follows. First, I will look at Nozick’s critique of other solutions to the moral problem, specifically the inconsistency/rational contradiction solutions. His comments on this are, in my view, illuminating and worth sharing. Second, I will outline his theory of value — the organic unity theory — and explain how he justifies it. Third, I will consider the merits of this theory and how it is supposed to solve the moral problem. As we shall see, the solution is a modest one but maybe that’s the best we can hope for.

One interpretive comment before we begin: the discussion that follows presupposes that we can make some moral judgments and some value judgments. In other words, it presupposes that people generally agree on what the right thing to do is (don’t cheat, don’t kill without justification etc) and are capable of making some shared assessments of what is valuable. The challenge is to explain the grounding for these judgments. If you are sceptical about the capacity to make the judgments in the first place, you won’t be overly impressed by Nozick’s theory.

1. Being Moral on Pain of Contradiction

One of the most popular philosophical solutions to the moral problem is to argue that we ought to be moral if we wish to avoid rational inconsistency. This solution can be taken in a variety of directions but they all share the same general form:

  • (1) Agent S (perhaps qua agent) is committed to X.
  • (2) Being committed to X entails being committed to doing the right thing.
  • (3) Therefore, agent S is committed to doing the right thing (on pain of self-contradiction/inconsistency).

For illustrative purposes, consider one of the more philosophically technical and sophisticated defences of this view: Alan Gewirth’s principle of generic consistency (as defended, subsequently, by Deryck Beyleveld). I have covered this on a previous occasion. In brief, it claims that if you act purposefully to achieve a goal (which is the essence of rational agency) you must believe that this goal is good. This, in turn, commits you to the belief that you have a right to the pursuit of that goal. And this, in turn, with some additional reasoning (insert hand-waving here), commits one to recognise the same right in other rational agents. This gives us a basic principle of moral respect for others.

I’ll say nothing about Gewirth’s argument in particular. What interests me here is that all such arguments, as Nozick points out, have a problem. They assume that people have a motivation to avoid inconsistency in their expression of rational agency, but it is not clear that people, in general, have this motivation or that it is particularly strong.

Go back to the brief sketch of the argument from inconsistency that I outlined earlier on. What Nozick is saying is that there is a missing premise in this argument:

  • (1) Agent S (perhaps qua agent) is committed to X.
  • (2) Being committed to X entails being committed to doing the right thing.
  • (*Missing Premise*) Agent S is committed to being consistent in his/her beliefs, desires and actions.
  • (3) Therefore, agent S is committed to doing the right thing (on pain of self-contradiction/inconsistency).

When we evaluate that missing premise we find it lacks the desired motivational oomph. Philosophers, as Nozick points out, may be committed to being rationally consistent. But not everyone feels so strongly about it. Indeed, it is not even clear that reflective and well-educated people always want to be (or are capable of being) entirely consistent. Many of us, for example, accept that we are fallible in forming beliefs about the world. This means that we must believe that some of our current beliefs — beliefs that we are otherwise committed to — are false. But believing in your own fallibility while also believing each of your own specific beliefs is inconsistent. Yet we all do it, all the time, and don’t know how to avoid it (this is, in essence, the preface paradox that has puzzled philosophers over the years).

If a philosopher talks to an immoral man about the inconsistency of his beliefs and practices, will he care? Nozick doubts it:

Consider now the immoral man who steals and kills to his overall benefit or for some cause he favors. Suppose we show that some X he holds or accepts or does commits him to behaving morally. He now must give up at least one of the following: (a) behaving immorally; (b) maintaining X; (c) being consistent about this matter in this respect. The immoral man tells us, “To tell you the truth, if I had to make the choice, I would give up being consistent.” 
(Nozick 1981, 408)


I think there is something to this critique. I doubt that most people care about rational consistency in the way that philosophers sometimes suppose. That said, I suspect that many people wouldn’t be as glib as Nozick’s hypothetical immoral man appears to be. I suspect that many people accused of immoral behaviour — behaviour that they themselves might have once classified as immoral — simply rationalise or justify it to themselves. They believe that it serves a higher good or, in some cases, that it is the moral thing to do. This desire to avoid moral cognitive dissonance, which seems widespread, might suggest that people care about consistency more than Nozick suspects.

2. Being Moral is More Valuable

Despite his scepticism about traditional philosophical solutions to the moral problem, Nozick does believe that there is a solution to it. At least, a solution of sorts. The cost of immorality, according to Nozick, is that one lives a less valuable life. And this can lead to important contradictions or inconsistencies in itself insofar as the pursuit of immorality destroys some of the value people purport to care about pursuing in their lives. This is true even if the immoral person does not feel the cost of immorality in their own lives:

The immoral person thinks he is getting away with something, he thinks his immoral behavior costs him nothing. But that is not true; he pays the cost of having a less valuable existence. He pays that penalty, though he doesn’t feel or care about it. Not all penalties are felt. 
(Nozick 1981, 409)


The plausibility of this hinges on Nozick’s belief that there is some unifying theory of value that we can use to assess the cost of immorality. What might that theory be? Nozick has an interesting proposal and method for answering this question. True to the methodology he follows throughout his book Philosophical Explanations, he starts with some general judgments of value that he and others seem to have and works from them to a theory that might explain those judgments.

The general judgments are assessments people have of the relative worth of different things. These are what Nozick calls “value rankings”. Looking first to the arts, Nozick argues that when people judge the relative worth of different artworks, they seem to rate artworks that unify diverse material more highly than those that do not. By this he means that paintings and sculptures that seem to unify different forms, textures, colours, tones, themes (and so on) are more aesthetically valuable than those that are more simplistic and monotonic. Similarly, he argues that in the assessment of scientific theories, people rate theories that unify and explain diverse data more highly than those that only explain one or two phenomena (think about why Newton’s theory is better than Kepler’s). Finally, in the realm of biology, Nozick notes that organismic biologists use degrees of unity to explain how different plants and animals are formed, and the relative degrees of unity displayed by these organisms seem to match our rankings of the relative worth of organisms. For example, according to most value rankings, humans are valued more highly than worms, and this correlates with the fact that humans are more complex, diverse, but nevertheless unified organisms.

So what’s going on across these different domains? What general principle or theory explains the different value rankings? Well, I’ve given away the answer to some extent already. Nozick claims that ‘organic unity’ is the general property that can explain the different value rankings. Nozick doesn’t ever really define this concept with precision. You have to read between the lines. Roughly, organic unity seems to be achieved through a combination of diverse elements or aspects that are unified in such a way that they can work together as an organic whole. For example, a human being consists of billions of different specialised cells, along with a small ecosystem of bacteria, working together as an organic whole. Hence humans have a high degree of organic unity and hence a high degree of value. Organic unity comes in degrees and always involves a tradeoff between diversity and unity.

In short, Nozick supports the following argument (which he never explicitly formulates):

  • (4) People have value rankings of objects/entities across different domains, e.g. arts, sciences, biology, social systems.
  • (5) The best explanation of these different value rankings — i.e. the property that explains why people rank objects/entities in the way that they do — is degrees of organic unity.
  • (6) Therefore (probably) degrees of organic unity represents our underlying theory of value across multiple domains.

I won’t comment much on the logical form of this argument except to note that it is an inference to the best explanation and, like all such inferences, is defeasible and probabilistic in nature.

There are a couple of potential misconceptions of the argument. Nozick himself addresses these so I’m just going to summarise what he says. First, in saying that degrees of organic unity represents the best theory of value across multiple domains, Nozick is not claiming that organic unity is the only thing that is valuable. There could be other valuable states of affairs (e.g. experiencing pleasure). He is, however, arguing that organic unity explains most of what is valuable across multiple domains and hence is the most general and important dimension of value. Second, one reason why Nozick isn’t precise about what organic unity consists in is because he thinks it can take different forms across different domains. In general, it involves the unification of diverse phenomena, but what those diverse phenomena are, and what it takes to unify them, could mean something different in different contexts. For instance, in the case of human biology, organic unity might involve the collaboration of billions of specialised cells toward the goal of continued survival and reproduction. In the case of human rational agency, it might involve diverse beliefs, desires and intentions being fitted together into a coherent life plan or identity.

3. Evaluating Nozick’s Theory of Organic Unity

What should we make of Nozick’s proposal? There is one obvious problem with it. In claiming that organic unity is what best explains our value rankings across multiple domains, Nozick doesn’t offer much in the way of evidence to suggest that our value rankings do in fact correlate with the property of organic unity.

Somebody once said that you should always check the footnotes to an academic book. That’s where the bodies are buried. Nozick buries some bodies in his footnotes. In supporting his claims about value rankings across different domains, Nozick does cite sources that appear to support his perspective on aesthetics, biology and philosophy of science. But he does not attempt a systematic survey of the relevant fields nor does he engage with contrary views. The result is an argument that is, to put it frankly, underwhelming.

From my own perspective, I think Nozick is probably right about the relative worth of scientific theories: in general we do favour theories that unify more diverse data. But there is something of a tradeoff when it comes to the virtues of different theories. We also want theories with good predictive/explanatory power and sometimes they can be more complex and limited in scope than we would like.

When it comes to aesthetics, I’m not sure whether he is right. I don’t know enough about aesthetics and theories of artistic worth. I’m probably something of a subjectivist when it comes to judgments of aesthetic value: I’m not sure that objective value exists in that domain. That said, I typically prefer artworks (movies, songs, pictures) that have some pleasing tradeoff between simplicity and thematic depth. For example, in movies, I tend to prefer simple storylines that raise lots of questions or provoke intense thought and speculation. I don’t like overly complex narratives. Maybe that’s just me though.

Finally, when it comes to biology and the relative worth of different beings, I think Nozick might be right to suggest that organic unity is an important marker of value, but I think it is complicated. I’m not averse to the idea that there is some hierarchy of value between animals and plants. I do think a human being is more valuable than a worm. But it gets much trickier when it comes to assessments of the relative worth of humans and, say, higher primates, or the relative worth of different humans. For example, I would be very sceptical of the idea that organic unity could be used to measure the relative worth of human lives. Indeed, the idea that some human lives are worth more or less than others is anathema to me. At the same time, I doubt that all human lives are equal with respect to their ability to unify diverse elements. So I’m not sure what to do with that claim.

Fortunately, this is not all that Nozick has to say. He doesn’t simply claim that his theory offers the best explanation of value rankings across different domains. He goes on to formulate further desiderata that a theory of value should satisfy. Some of these are highly abstract, and Nozick spends more time formulating them than he does defending the claim that his theory satisfies them. I’ll simplify quite a bit and focus on the three main desiderata that he discusses:

The Pluralism Desideratum: The best theory of value should explain why value seems to be plural and, nevertheless, why philosophers are obsessed with providing unitary theories of value.


The Valuing Value Desideratum: The best theory of value should explain why we think it is a good thing for people to promote, care for, celebrate (etc) valuable things, i.e. why we value values (and, conversely, why we think it is a bad thing for people to destroy, break down, eliminate (etc) valuable things).


The Allure Desideratum: The best theory of value should explain why we find valuable things to be alluring and inspiring.


The argument is that the theory of organic unity satisfies these desiderata. Let’s see how this works.

The pluralism desideratum is probably the most straightforward. Philosophers have long commented on the apparent pluralism of values. Humans seem to value many different things: friendship, pleasure, knowledge, family, sex, beauty, truth and so on. Despite this, many philosophers are obsessed with trying to find a single dimension of value that explains these plural values. Nozick argues that his theory helps to explain this philosophical dance with value pluralism. On the one hand, given that value is constituted by organic unity, we should expect it to take plural forms, since what counts as organic unity can mean different things in different domains and organic unity itself can take many different forms. Organic unity in art is distinct from organic unity in human life, and so on. On the other hand, given that value is constituted by organic unity, we can understand why philosophers try so desperately to find a single theory that explains all forms of value: organic unity is the single explanation.

That said, there are limits to how much pluralism can be explained by the theory of organic unity. Nozick distinguishes between two forms of pluralism. Strong pluralism holds that different values are radically different and cannot be reconciled with one another. In other words, it holds that there are ineradicable tradeoffs between different values. Weak pluralism simply holds that values take diverse forms but that it may be possible, under ideal circumstances, to satisfy them all. The theory of organic unity supports weak pluralism, not strong pluralism.

The valuing value desideratum is a little bit more complicated. Nozick spends pages and pages of his book explaining what this is in highly technical terms. He also formulates several different variants on this desideratum. In brief, the idea is that most people agree that we should be positively disposed toward valuable things. So much so that this positive disposition is seen to be a valuable thing in itself. To celebrate and promote valuable art, for example, is valuable. But why is this? Nozick argues that the theory of organic unity can explain why. As he puts it, the verbs that characterise the necessary positive disposition to values (celebrate, promote, care for (etc)) are ‘verbs of unity’. If you celebrate something you are joining yourself to that something in some way. Why is this a good thing? Well, because joining yourself to the valuable thing is to attain a new kind of organic unity with that thing. Contrariwise, disvaluing something of value involves disunifying or rupturing your connection to it. Hence, organic unity can explain why this is not valuable.

This is a bit too abstract for my liking. It’s not obvious why this is a significant desideratum for a theory of value. Nevertheless, Nozick’s claim that the theory of organic unity can, at least in part, account for why valuing value is, itself, of value sounds somewhat plausible.

The allure desideratum is perhaps the trickiest of the three. It also brings us back, closest, to the territory of the moral problem as originally formulated. After all, if value is alluring then we have some reason to expect that people might be motivated to live valuable lives. But, despite reading the relevant portion of Nozick’s text several times, I’m not sure I know what he means when he claims that value is alluring. The idea seems to be that reviewing the historical record reveals that valuable people and valuable experiences and objects (etc) are inspiring to us. They hold some sort of allure across time and space. Sure, in some historical epochs (Nazi Germany for example) people were allured by evil things, but this can be explained away by some distortion of historical circumstance and by the ‘frustrated envy of value’ among certain groups of people:

For whatever reason, the person himself will not achieve or embody value, and he prefers that no one else achieve it either; he chooses to thwart or oppose others’ achievement of value, so that they too will not have the value he lacks. 
(Nozick 1981, 437)


Under favourable conditions, free from these distortions or frustrations, value is alluring to us. How does organic unity help to account for this? As best I can tell, Nozick offers no defence of this claim. This is disappointing.

4. Conclusion and Final Thoughts

This article has covered a lot of ground in a relatively short space. It started with the moral problem: why be moral? It then looked at Nozick’s dissatisfaction with traditional philosophical solutions to this problem and his alternative proposal: living an immoral life comes at the cost of living a less valuable life. This solution, however, is minimal insofar as we may not be motivated to live more valuable lives. The cost of immorality, as Nozick puts it, is not always a felt cost.

This led us to consider Nozick’s theory of value. As we have seen, Nozick believes that organic unity is the property that best explains human value judgments across multiple domains. It is also the theory that satisfies other desiderata on a theory of value. Nozick’s defence of these claims is, at times, underwhelming. His theory is programmatic and sketchy. Not fully worked out or persuasive.

Nevertheless, I find it intriguing. It seems to me that there is something to the idea that organic unity is an important underlying dimension of value. Is it the only or most important dimension of value? Of that I am less convinced. One problem for me is that it doesn’t seem to have the same intuitive attraction as other underlying theories of value. For instance, if someone claims that subjective pleasure is the primary form of value, this makes sense to me. It seems intuitively obvious that subjective pleasure is intrinsically valuable. How could it not be? (Yes, there are many caveats to be added here, e.g. just because it is intrinsically valuable doesn’t mean it is always instrumentally valuable). To say that organic unity is the primary form of value, doesn’t have the same intuitive appeal. Why is value constituted by organic unity? I’m not sure anything more can be said apart from the fact that, if we follow Nozick’s argument, it provides the best explanation of our value judgments and practices.

Maybe that’s all we can hope for.

* Nozick isn’t the only person to discuss the importance of organic unity to a theory of value. G.E. Moore famously did this, but Moore’s concept was much more limited in scope and specifically concerned the fact that a whole can have a different value than the sum of its parts. Similarly, Plato and Aristotle also discussed the importance of organic unity in literature: drama being a unification of diverse parts. These other concepts of organic unity have been widely discussed. Nozick’s, as best I can tell, has not.

Friday, January 22, 2021

The Argument from Religious Experience: An Analysis

Knock Shrine, Ireland

On the 21st of August 1879, in a small rural village called Knock in Ireland, an unusual event took place. At the gable end of the local church, the Virgin Mary, along with St Joseph and St John the Evangelist, is alleged to have appeared to a group of villagers. According to their reports, she wore a large crown with a single golden rose, and her eyes and hands were raised toward heaven in prayer. The villagers watched her and the two other saints for nearly two hours. They could not touch her but they could see her clearly. They were convinced that she was real.

Reports of religious experiences of this sort are not uncommon. They occur throughout history and across virtually all religions and cultures. Some of these experiences are like the one had by the villagers in Knock: people report actually seeing and perhaps even touching supernatural beings as if they were ordinary human beings. Others are more mystical or ineffable: people report a strong sense of a divine presence in their lives.

What I want to consider in this article is whether experiences of this sort can form the basis of a strong argument in favour of the existence of divine beings. In other words, suppose you have had such a religious experience. Are you then warranted in believing in the existence of a God or gods? Should someone else believe on the basis of your reports of this experience?

This is something that religious believers have written about and debated for centuries. Two of the most prominent defenders of the view that religious experiences can justify religious belief are Richard Swinburne and William Alston. Both write from a Christian philosophical perspective. In what follows, I will be evaluating their arguments in some detail. Overall, my evaluation will be a negative one. It seems to me highly implausible that religious experiences can justify belief in God. But my goal is not simply to defend that conclusion. It is, rather, to explain how these arguments work and what their weaknesses might be.

1. Understanding the Argument from Religious Experience

It’s worth beginning with a general characterisation of how the argument from religious experience works. It starts, obviously enough, with the experience itself: a person or group of persons has some experience that they interpret as having religious significance. It is important to realise that there are two elements to this experience: (i) the raw phenomenological data of the experience (what it looks like, feels like etc) and (ii) the interpretation or explanation of that experience that is adopted by the person who has it.

Consider, once more, the villagers in Knock. The raw phenomenological data of their experience was simply that they saw three human-like beings at the end of the local church. They explained this data by supposing that they were seeing the Virgin Mary, St Joseph and St John. But this explanation wasn’t part of the phenomenology itself. It was an explanation of that phenomenology (albeit a very natural or obvious explanation to those villagers given their cultural background).

In other words, the experience itself is not an argument. To go from the experience to the conclusion that the experience provides evidence in favour of some religious view, you need to appeal to some principle that warrants the belief that the perceptual experience is, to use the common jargon, veridical. This means that the experience is linked to some underlying reality and that you are justified in accepting it as, prima facie, evidence for that underlying reality. Furthermore, given the nature of most religious experiences, you need to show that the best explanation of the experience is some particular religious view of what that underlying reality is.

In the case of the villagers in Knock, they believed that their phenomenological experience was best explained by the fact that there are supernatural beings linked with the Christian tradition and that these beings made an appearance to them. No doubt they believed this, in part, because they were already religious. They operated from a cultural and personal worldview that made the religious explanation of their experiences plausible. It’s unlikely that they became believers as a result of the experience (though the experience could certainly have firmed up their faith).

In the case of someone with no prior religious belief, making this leap from the experience to the religious explanation of the experience might require more work. They might need to be convinced that no alternative explanation — a non-veridical hallucination; local teenagers playing a sophisticated prank — is fully satisfying. Ideally, of course, this is what all rational people should do: they should carefully scrutinise the evidence for and against certain explanations of their experiences. But most people take shortcuts and we often think it is acceptable to do this: life is too short to spend all our time assessing the evidence. Whether taking such shortcuts is permissible in the case of religious experiences, given their potential importance, is another matter. Religious beliefs are high stakes beliefs. There is a lot resting on them from a personal and social point of view. They may, consequently, warrant higher scrutiny. This, however, is something that arguments from religious experience often try to deny, as we shall see below.

All of this is to focus on religious experiences from the ‘insider’s view’, i.e. from the perspective of the person who had the experience. As we have now seen, there are a couple of epistemic bridges that need to be crossed from the insider’s perspective before the experience can justify a religious belief: is the experience veridical? What is the best explanation of that experience? From the outsider’s perspective — i.e. from the perspective of someone hearing about a religious experience from someone else who has had one — an additional epistemic bridge needs to be crossed. They need to be sure that the person’s testimony regarding the experience, and their explanation of the experience, are accurate. It’s hard to imagine that this bridge can be crossed in practice, though it is not impossible. David Hume’s famous argument about miracles, which is really an argument about whether we should believe testimony regarding miracles, remains the focal point for discussions of the outsider’s perspective, though it limits its focus to miracles in particular and not religious experiences more generally. I have covered that argument in detail in previous articles. I won’t repeat myself here. The important point is that, for the remainder of this discussion, the insider’s view will be assumed.

So the question before us is this: if someone has what they take to be a religious experience, are they warranted in believing it provides good evidence of some underlying religious reality (typically that God exists)? Can you defend the argument from (personal) religious experience?

2. Swinburne’s Version of the Argument

One of the chief defenders of the argument from religious experience is Richard Swinburne. As with most of his work, Swinburne’s defence of the argument is technical and sophisticated. Swinburne knows how to dance the analytical philosophy dance.

Swinburne starts his version of the argument by using something called the principle of credulity:

Principle of Credulity (PC): If I have perceived X to be the case, then I am warranted in believing that X is the case.

The PC is a philosopher’s way of codifying common sense. To put it in layman’s terms, it says that if you have an experience of something you are, usually, warranted in believing that this something exists. As I look at the desk in front of me, I can see a half-empty coffee cup. Consequently, applying the PC, I am warranted in believing that there is, in fact, a half-empty coffee cup on the table.

The PC is exactly what we need to show that our experiences are, in the usual course of events, veridical. It is easy to slot it into an argument from religious experience:

  • (1) I have had an experience of God’s existence.
  • (2) If I have perceived X to be the case, then I am warranted in believing that X is the case.
  • (3) Therefore, I am warranted in believing in God’s existence.

What can be said in favour of this argument? In relation to premise (1), Swinburne distinguishes between five different types of experiences of God that religious believers can have. They span quite a range and each has been reported by one or more religious believers over the years:

TYPE 1 - Sensing a divine or supernatural being in an ordinary perceptual object - e.g God in a waterfall. 
TYPE 2 - Sensing a supernatural being that is a public object and using ordinary perceptual language to refer to it. E.g. the Knock Villagers’ vision of Mary, Joseph and John. 
TYPE 3 - Same as type 2 but it is a wholly private experience. No one else can perceive it. 
TYPE 4 - A private sensation of a supernatural being that involves a sixth sense and so is not describable using ordinary perceptual language. 
TYPE 5 - A private experience of a supernatural being that does not seem to involve any senses at all, e.g. Teresa of Avila’s consciousness of Christ at her side.


The claim is that the PC can be applied to each of these five types of religious experience. Whether that is really the case is something we shall return to later on when we consider criticisms of Swinburne’s argument.

In relation to premise (2), Swinburne accepts that there are some defeaters to the PC, i.e. scenarios in which it cannot be relied upon, but he argues that these defeaters ordinarily do not apply to religious experiences. He mentions four defeaters in particular. Let’s quickly run through them.

The first defeater claims that an experience is non-veridical if you can show that the subject of the experience is generally unreliable or that the experience occurred under conditions that have been shown, in the past, to be unreliable, e.g. under the influence of drugs. Swinburne claims this defeater doesn’t apply to most religious experiences since most religious believers appear to be otherwise reliable (we’re not including Joseph Smith here!) and ordinarily do not experience God while under the influence of hallucinogenic drugs or other distorting conditions. We won’t get into this in too much detail but it is worth noting that this latter point discounts the long tradition of religious drug-taking (particularly common in non-Christian religions) and the potential impact of extreme religious practices (fasting, meditation) on the reliability of our experiential faculties.

The second defeater claims that an experience is non-veridical if it concerns something or occurs in a circumstance in which similar perceptual claims have been shown, in the past, to be false. Examples of this might include perceptual experiences that involve widespread disagreement or perceptual experiences of things that are beyond our usual ken. It seems like this defeater would apply to experiences of God, but Swinburne claims it does not because we can have some confidence in our ability to perceive a person of great power and capacity. He also suggests that religious diversity is not that great and there is reason to think that all cultures are experiencing essentially the same thing (I’ll return to the problem of diversity at a couple of points later on in this article).

The third defeater claims that an experience is non-veridical when there is already strong evidence to think that the alleged perceptual object does not exist. This is, in a sense, Hume’s famous point about the credibility of miracle testimony: it’s very unlikely that they would occur and so testimony of them is not veridical. But according to Swinburne this doesn’t work to undermine direct experiences of God because the evidence would have to be very very strong to work against general theism — i.e. the belief in a personal being underlying all of reality. As I interpret it, the idea here is that when it comes to grand metaphysical theses — such as whether theism or naturalism is the foundation of reality — there is little reason to think that theism is significantly less probable than naturalism and so there is no strong, a priori reason to think that God does not exist. I have some sympathy for this view since I think it is quite difficult to apply probability estimates to such grand metaphysical claims, but I also think that philosophers such as Paul Draper and Jeffrey Jay Lowder have provided some decent arguments for thinking that naturalism is a simpler hypothesis than theism and hence likely to be more probable irrespective of the evidence. That said, even if they are right, this may not render theism sufficiently improbable to think that an argument from religious experience wouldn’t work in the way that Swinburne wants it to. You would have to get into assessing other forms of evidence for that purpose (e.g. evidence of evil or suffering) and it’s impossible to provide a complete assessment of that evidence in an article of this sort. Suffice to say, I think that other evidence suggests that God, as traditionally conceived, is unlikely to exist, but Swinburne sees it differently.

Finally, the fourth defeater claims that the experience is non-veridical if there is an alternative, sufficiently credible, explanation for the experience. This is probably the defeater I would be most inclined to fall back on, but Swinburne argues that this does not apply to theistic experiences because if God exists then he plays some role in all potential explanations of our experiences - i.e. there is no independent natural explanation that undermines our confidence in the experience. That’s a slippery bit of reasoning. It could be taken to suggest that no evidence could ever undermine the existence of God. It seems tantamount to claiming that if God exists, then everything that happens must be explained by him in some way. Therefore, if God exists, there can be no alternative, non-theistic, explanations of events. But this reasoning leaves the crucial question unanswered: does God exist?

Now that we have reviewed the key elements of the argument, we can turn to its critical assessment. Is it any good?

3. Problems with Swinburne’s Argument

Swinburne’s argument has a number of flaws. Many writers have pointed these out over the years and often repeat the same criticisms. Here, I will use some of the claims made by Herman Philipse in his book-length analysis of Swinburne’s arguments, occasionally supplementing his comments with observations from others. Nothing I am about to say is particularly original, though I do hope the presentation is more user-friendly than Philipse’s discussion.

The first problem with Swinburne’s argument is that even if the PC did apply to perceptions of God it is not clear that it would provide good evidence for his existence. The PC is something we rely upon when it comes to ordinary sense perceptions but even in those cases it provides, at best, defeasible support for the existence of those sensory objects. Consider, once more, the example of the half-empty coffee cup on my desk. I see it therefore I believe it exists. But sometimes my sensory perceptions lead me astray. Maybe the light is reflecting oddly off the shiny desk surface, tricking me into seeing the cup as half empty when it is, in fact, full. Maybe I’m really tired and having a mild hallucination. Maybe I’m only seeing it out of the corner of my eye and mistaking what appears to be a cup for what is, in fact, a caddy for holding pens. And so on. The reality is that sense perceptions are often misleading, particularly on a first pass. For ordinary sensory objects we have ways of verifying and reinforcing our initial perceptions. We can get up and look at the object from different angles. We can reach out, touch it, and manipulate it with our hands. We can ask another person to take a look and confirm what we are seeing. Though there are some reported religious experiences that allow for some of this additional sensory confirmation (I’m thinking, in particular, of the story of doubting Thomas) many don’t. They are fleeting glimpses or feelings of the presence of God in another object or some profound emotional experience. They are often not public (as Swinburne points out) and so cannot be confirmed by others. All of these factors make the PC of limited utility to religious experiences.

The second problem with Swinburne’s argument is that it is not clear that the PC should apply to most perceptions of God. Look once more to Swinburne’s five types of religious experience. Several of them involve indirect or non-traditional forms of sensory perception and even, in one case, no sensory perception at all. For example, he claims that you can perceive God in another object or using a sixth sense (whatever that might be) or through some consciousness of his presence. The PC applies to ordinary sense perception and not to these more fanciful or unusual forms of perception. It’s not clear that we are warranted in believing in the objects of our perception in these cases. As Philipse points out, there is something of a tension here. On the one hand, it makes sense to assume that God would not be at all like an ordinary sensory object. He is, after all, supposed to be a bodiless, transcendent and all-powerful being. But these differences undermine the application of the PC to his perception. We shouldn’t expect the PC to apply to a being like God.

Philipse’s point here can be linked to an unusual argument made by Nicholas Everitt in his book The Non-Existence of God. For the most part, Everitt presents standard critiques of Swinburne’s argument, but he does add a unique one of his own. He claims that God could not control all the conditions of his perception in the way that Swinburne supposes he could (i.e. appear to some people as a direct sensory object; to others as present in physical objects; and to others through a sixth sense). Everitt’s point is a logical/metaphysical one. He claims that any mind-independent entity — i.e. anything that is not simply a product of our minds — must obey some consistent causal laws. This applies even to God, as a matter of metaphysical necessity. But if this is true, then God cannot change the causal laws to which he is subject in order to be perceived in radically different ways by different people at different times. At least, he cannot do this and remain the same object or being over time. I’ll quote from Everitt in full on this point (full disclosure: I’m changing the sequence and tense of some aspects of this quoted passage to make it fit better with this discussion):

[The] Swinburnean concept [of God]…envisages a being who can control not just this or that of its perceivable properties, but every property by which it could be detected in any way at all. The sceptic might well try to argue that it is not logically possible for there to be any such objects… The very being of an object [is] partially constituted by the causal powers and limitations that it [has]. It could not lose all its existing causal powers and limitations in favour of another set, and yet still remain the same object; and it could not lose all its causal powers and limitations and remain an object. 
(Everitt 2004, 164-165)


I’m not sure I can fully wrap my head around this point, and Everitt himself admits that it is controversial, but it could at least undermine Swinburne’s claim that it is possible for there to be a being that could be perceived in such radically different ways. The problem with this, however, is that a religious believer could easily adapt their view in response to Everitt’s argument by accepting that there are some limitations on how God can be perceived and hence only some forms of religious experience that are veridical.

The third problem with Swinburne’s argument is that if the PC did apply to perceptions of God (or any other religious experiences) it could have perverse consequences for the believer. Two such consequences are of particular importance. The first is that if the PC applies to perceptions of God, then a negative version of the PC should apply to the absence of such perceptions. In other words, if a non-believer fails to perceive the presence of God (in any form), then they too should be warranted in believing that God does not exist. This is because a negative principle of credulity seems to be as good as a positive one:

Negative Principle of Credulity (NPC): If it seems to a subject S that X is absent, then X is probably absent.


Swinburne rejects the NPC. He claims that experiencing the absence of X is not self-verifying in the way that experiencing the presence of X is, at least when it comes to God. In making this claim he deploys an asymmetry argument. He claims that not seeing a chair in front of you is good reason to think the chair is not there because you know what to expect if the chair is there. But because God is so different from other perceptual objects, we do not know what to expect if he is absent. So just because we fail to perceive his existence, it does not follow that he does not exist.

But as Michael Martin points out in his classic book Atheism: A Philosophical Justification, this leads to all sorts of problems for Swinburne’s defence of the argument from religious experience. The ability to inductively infer the existence of an object from an experience of that object depends crucially on the capacity to know that a failure to experience that object under the right conditions would imply its non-presence. This is true in the case of our perception of ordinary objects like tables and chairs. It is only because we know that they are unlikely to exist if they are not perceived under certain perceptual conditions that we can infer they are likely to exist when they are experienced under those same conditions. If the PC is to apply to perceptions of God, then the same logic should hold. Swinburne cannot engage in special pleading regarding God’s unusual nature to get around this. If he wants to do that, then he needs to drop the application of the PC to perceptions of God. Furthermore, as Martin points out, background knowledge seems to play a key role in determining whether positive or negative perceptual claims should be taken seriously. To use his example: 50 people claiming to have seen dodos in Antarctica is not necessarily good evidence for the presence of dodos on that continent. Contrariwise, 50 people failing to see dodos on the island of Mauritius, despite looking repeatedly for them, sounds like good evidence for their absence. This is, in large part, because we know where to expect to see dodos. When we don’t know what to expect, it is hard to grant perceptual evidence any real credence.

The other perverse consequence of applying the PC to perceptions of God is that it seems to force the religious believer to deal with the diversity of religious experiences. If a Muslim perceives the presence of Mohammed in a waterfall, does this provide justification for his religious worldview? What about the Hindu who believes he has perceived Vishnu? There are two options open to the religious believer in these cases:

Universalism: They accept that all of these experiences are veridical and provide support for some particular religious beliefs (or that they all point to the existence of the same underlying religious reality). The problem with the universalist response is that it often explains away (or simply ignores) the differences in content across these different religious experiences.


Exceptionalism: They argue that their religious experiences (linked to their religious tradition) are veridical but those from rival religions are not. The problem with this is that it often seems like special pleading and tends to rely on some prior commitment to a particular religious tradition. In other words, the experiences themselves are not self-justifying. It is a background commitment to a particular faith that justifies treating experiences linked to that faith as veridical.


The fourth and final problem with Swinburne’s argument is that, contrary to what he claims, there are sometimes (perhaps even often) alternative naturalistic explanations of religious experiences that undermine their credibility (hallucinations; visual illusions; tricks of the light; suggestibility; emotional trauma; over-interpretation of a mundane experience, etc.). If a religious believer accepts that some experiences are non-veridical, such as those from a rival tradition, and that there are alternative explanations available in those cases, then they have at least some prima facie reason to be sceptical of their own. That said, there are ways for committed believers to resist the allure of alternative explanations. They can highlight disanalogies between their experiences and those of other people. And since no naturalistic explanation is likely to adequately explain every religious experience, this can end up like a game of explanatory whack-a-mole: “you might be able to explain those experiences, but you cannot explain mine!” Similarly, the believer can take Swinburne’s line and just argue that God must feature in the explanation of everything since he is the foundation of all that exists. The problems with this strategy have already been noted.

In sum, there are several problems with Swinburne’s argument. Taken collectively, these problems suggest that, at a minimum, a religious experience by itself cannot provide strong support for the existence of God. The experience must pass other epistemic tests, and a believer would more than likely require additional argumentation to support the inference from the experience to the existence of God.

4. Alston’s Argument from Mystical Practice

Another famous defender of the argument from experience is William Alston. In his book Perceiving God, Alston defends a variation on the argument that focuses on the dependability of different epistemic practices (i.e. practices for generating knowledge). In brief, his claim is that mystical practice is its own, self-supporting, epistemic practice and, in the absence of good reasons for thinking that this practice is unreliable, a person is entitled to infer that their religious experiences are veridical.

Alston’s book is a sophisticated bit of epistemology, cut from a similar cloth to that of Alvin Plantinga’s defence of reformed epistemology. I won’t be able to do justice to all its intricacies here, but there are some good critiques of it in the literature, such as those from Nicholas Everitt, JL Schellenberg and Keith Augustine (the latter is a particularly useful explanation and critique of Alston’s work).

Alston’s argument is both similar to and different from Swinburne’s. Both start from the claim that ordinary sensory perception is justified. Indeed, it is self-justifying. When I see the half-empty coffee cup before me, nothing further is required to justify my belief in its presence. The sensory perception itself is enough. Alston adds to this the claim that any attempt to find a justification for the sensory perception will be circular: you’ll end up claiming that your sensory perception is justified because of some other, direct or indirect, sensory perception (e.g. perceiving the object from a different angle; asking someone else what they perceived). But where Swinburne sees religious experiences as particular forms of sensory perception (with the exception of Type 5 perceptions), and hence justifiable as forms of sensory perception, Alston sees religious experiences as distinct things. He views perceptions of the presence of God as a distinct source of knowledge about His existence, not the same as ordinary sensory perceptions. They are mystical perceptions.

What justifies the belief in the veridicality of mystical experiences? Well, according to Alston there is no non-circular epistemic justification. We are in the same predicament as we are when it comes to sensory perception. Instead, we have to focus on the general reliability of the mystical practice of which those experiences are part and assess how that practice fares relative to other belief-forming practices such as sensory practice. Alston claims that mystical practice involves more than just perceptions of God. It also involves reflections on the meaning and reliability of those perceptions. Furthermore, within particular religious traditions, sages and mystics have developed criteria for establishing which perceptions are generally reliable indicators of the presence of God and so participants within mystical practices should apply those criteria to their own perceptions of God. When they do this, they can generate reliable beliefs from religious experiences.

In sum, Alston argues that mystical practice, like sensory practice, is its own thing: its own set of belief-forming and reliability-checking rules. Anyone who has a mystical experience and abides by the norms of their mystical tradition (and, to be clear, Alston is primarily concerned with Christian mystical traditions) can be justified in believing in the veridicality of their experiences. Or, perhaps more accurately, their justification of their religious perceptions is no worse than the way in which most people justify their ordinary sensory perceptions.

Alston also accepts that there are limits to this commitment to a particular mystical tradition. It could be that the believer has some reason to think that the entire mystical tradition is erroneous, or an exercise in psychopathology, or something of that sort. But, at least in the case of Christian mystical practice, Alston argues that there is no reason to accept this. Contributors to that tradition appear to be honest, mentally normal (or no less abnormal) truth-seekers, and there are some reasons to think it is a reliable practice. Hence, it is possible to defend an argument from religious experience from within that tradition.

5. Problems with Alston’s Argument

Alston’s argument is ingenious in some ways. It sidesteps many of the issues with Swinburne’s argument, in large part because it accepts that there are many philosophical problems with our ordinary sensory belief-forming practices. But this means that its conclusion is more modest than Swinburne’s. Where Swinburne is claiming that we have good reason to think that religious perceptions are veridical, Alston is, at best, saying that mystical experience is not epistemically worse than ordinary sensory perception. But if ordinary sensory perception is in bad shape, then it’s not clear that this says all that much. We could take Alston’s argument to warrant a more general form of philosophical scepticism about sensory perception.

Very few people want to embrace a more general form of scepticism so, if we are not inclined to doubt all the evidence of our senses, is there anything else to be said about Alston’s argument? Indeed there is. It’s not clear that it meets even its own modest aims. There are at least four reasons to think that mystical practice is in worse shape than ordinary sensory practice and that it is not a particularly reliable belief-forming practice.

The first reason for this is that it is not clear that mystical practice really is a distinct belief-forming practice. Think back to Swinburne’s list of different types of religious experience. With the exception of Types 4 and 5, most of them just seem like different sub-types of sensory perception. Consider once more the experience of the villagers in Knock: they allegedly saw three supernatural beings. Why would we not assess the reliability of those experiences against the standards we usually apply to sensory experiences? What makes them a distinct belief-forming practice? If nothing does, then these experiences are subject to the same criticisms of Swinburne’s argument given above.

The second reason is that mystical traditions seem to generate contradictory and inconsistent experiences and beliefs, even when viewed from an internal perspective. Keith Augustine makes much of this point in his discussion of Alston’s argument, highlighting contradictions in Christian mystical practice: different forms of perception of God; different meanings/interpretations of those perceptions. Alston is aware of this problem and responds by highlighting that other belief-forming practices generate inconsistencies too (e.g. different witnesses see different things; different scientists develop different theories to explain the same data). But even Alston accepts that mystical traditions seem to generate more inconsistencies than other practices and so may warrant less credence as a result.

The third reason is that Alston’s argument seems to generate a powerful version of the problem of religious diversity: there are many different mystical traditions, and participants within those traditions have distinct and incompatible religious experiences. They can’t all be right, can they? If you have a religious experience, and then encounter another person with an incompatible religious experience, and if there is no reason to think that their mystical tradition is more or less reliable than yours, then you don’t have any good reason to accept the veridicality of your own experiences. This is, admittedly, something that religious believers sometimes deny, but JL Schellenberg makes what I think is a simple but persuasive argument on this point. Imagine three witnesses to a car accident, each of whom perceives the car to be a different colour. Suppose you are one of those witnesses. If you have no reason to think the other witnesses’ sensory perceptions are defective or misleading, then the mere fact that you each have incompatible experiences gives you reason to doubt the veridicality of your own. The same logic should apply to believers coming from different religious traditions.

Of course, it is possible to avoid this extreme form of relativism. But this brings us to the fourth reason to discount Alston’s argument. In order to avoid relativism between different belief-forming practices, you have to appeal to some practice-independent criteria for establishing the reliability of such practices. This is, in fact, a key feature of Alston’s argument: we don’t assess particular experiences, per se, but rather the belief-forming practices of which they are a part. But if we appeal to practice-independent criteria for reliability, two distinct problems arise:

(a) The kinds of criteria to which Alston appeals to distinguish true religious experiences from false ones are a bit odd. For example, he claims that if the religious experience is concerned with something useful and generates internal peace, trust in God, patience, sincerity and charity, then it is more likely to be veridical. Conversely, if it is concerned with useless affairs and generates perturbation, despair, impatience, duplicity and pharisaical zeal, then it is more likely to be non-veridical (Alston 1991, 203). But why on earth should we suppose that those factors are associated with the veridicality of an experience? And how do we account for the fact that non-believers can display most of the positive traits (patience, charity etc.) without experiencing God? Does this imply that their failure to experience God is also veridical? If so, this gives rise to a new version of the problem from the negative principle of credulity.


(b) There are tensions between the beliefs generated by different practices. Famously, for example, there are some tensions between traditional Christian beliefs and the beliefs generated by science and history (e.g. biblical historical studies). It’s not possible to do a full accounting of those practices and their reliability here, but there are good reasons to think that these other practices are generally reliable, possibly more reliable than Christian mystical practice. But if that is true then the believer needs to do further work to resolve the tensions between these practices. Again, the religious experiences themselves cannot be self-justifying and do all the work.


In short, for all its ingenuity, Alston’s argument doesn’t seem to fare much better than Swinburne’s. In reaching this assessment, I have focused on particular features of Alston’s argument. It is worth adding that many of the other criticisms of arguments from experience mentioned previously — that there are alternative naturalistic explanations or that God cannot be an object of perception in the supposed way — could also apply to this argument.

6. Conclusion

In this article, I have considered the argument from religious experience, focusing on versions developed by two of its proponents: Richard Swinburne and William Alston. Both of the arguments raise a number of fascinating philosophical questions, particularly questions concerning the relationship between perceptual experiences and the veridicality (or non-veridicality) of such experiences. That said, for all their technical sophistication and analytical rigour, I don’t find either of the arguments persuasive.