Friday, January 22, 2021

The Argument from Religious Experience: An Analysis



Knock Shrine, Ireland


On the 21st of August 1879, in a small rural village called Knock in Ireland, an unusual event took place. At the gable end of the local church, the Virgin Mary, along with St Joseph and St John the Evangelist, is alleged to have appeared to a group of villagers. According to their reports, she wore a large crown with a single golden rose, and her eyes and hands were raised toward heaven in prayer. The villagers watched her and the two other saints for nearly two hours. They could not touch her, but they could see her clearly. They were convinced that she was real.

Reports of religious experiences of this sort are not uncommon. They occur throughout history and across virtually all religions and cultures. Some of these experiences are like the one had by the villagers in Knock: people report actually seeing and perhaps even touching supernatural beings as if they were ordinary human beings. Others are more mystical or ineffable: people report a strong sense of a divine presence in their lives.

What I want to consider in this article is whether experiences of this sort can form the basis of a strong argument in favour of the existence of divine beings. In other words, suppose you have had such a religious experience. Are you then warranted in believing in the existence of a God or gods? Should someone else believe on the basis of your reports of this experience?

This is something that religious believers have written about and debated for centuries. Two of the most prominent defenders of the view that religious experiences can justify religious belief are Richard Swinburne and William Alston. Both write from a Christian philosophical perspective. In what follows, I will be evaluating their arguments in some detail. Overall, my evaluation will be a negative one. It seems to me highly implausible that religious experiences can justify belief in God. But my goal is not simply to defend that conclusion. It is, rather, to explain how these arguments work and what their weaknesses might be.


1. Understanding the Argument from Religious Experience

It’s worth beginning with a general characterisation of how the argument from religious experience works. It starts, obviously enough, with the experience itself: a person or group of persons has some experience that they interpret as having religious significance. It is important to realise that there are two elements to this experience: (i) the raw phenomenological data of the experience (what it looks like, feels like etc) and (ii) the interpretation or explanation of that experience that is adopted by the person who has it.

Consider, once more, the villagers in Knock. The raw phenomenological data of their experience was simply that they saw three human-like beings at the end of the local church. They explained this data by supposing that they were seeing the Virgin Mary, St Joseph and St John. But this explanation wasn’t part of the phenomenology itself. It was an explanation of that phenomenology (albeit a very natural or obvious explanation to those villagers given their cultural background).

In other words, the experience itself is not an argument. To go from the experience to the conclusion that the experience provides evidence in favour of some religious view, you need to appeal to some principle that warrants the belief that the perceptual experience is, to use the common jargon, veridical. This means that the experience is linked to some underlying reality and that you are justified in accepting it as, prima facie, evidence for that underlying reality. Furthermore, given the nature of most religious experiences, you need to show that the best explanation of the experience is some particular religious view of what that underlying reality is.

In the case of the villagers in Knock, they believed that their phenomenological experience was best explained by the fact that there are supernatural beings linked with the Christian tradition and that these beings made an appearance to them. No doubt they believed this, in part, because they were already religious. They operated from a cultural and personal worldview that made the religious explanation of their experiences plausible. It’s unlikely that they became believers as a result of the experience (though the experience could certainly have firmed up their faith).

In the case of someone with no prior religious belief, making this leap from the experience to the religious explanation of the experience might require more work. They might need to be convinced that no alternative explanation — a non-veridical hallucination; local teenagers playing a sophisticated prank — is fully satisfying. Ideally, of course, this is what all rational people should do: they should carefully scrutinise the evidence for and against certain explanations of their experiences. But most people take shortcuts and we often think it is acceptable to do this: life is too short to spend all our time assessing the evidence. Whether taking such shortcuts is permissible in the case of religious experiences, given their potential importance, is another matter. Religious beliefs are high-stakes beliefs. There is a lot resting on them from a personal and social point of view. They may, consequently, warrant higher scrutiny. This, however, is something that arguments from religious experience often try to deny, as we shall see below.

All of this is to focus on religious experiences from the ‘insider’s view’, i.e. from the perspective of the person who had the experience. As we have now seen, there are a couple of epistemic bridges that need to be crossed from the insider’s perspective before the experience can justify a religious belief: is the experience veridical? What is the best explanation of that experience? From the outsider’s perspective — i.e. from the perspective of someone hearing about a religious experience from someone else who has had one — an additional epistemic bridge needs to be crossed. They need to be sure that the person’s testimony regarding the experience, and their explanation of the experience, are veridical. It’s hard to imagine that this bridge can be crossed in practice, though it is not impossible. David Hume’s famous argument about miracles, which is really an argument about whether we should believe testimony regarding miracles, remains the focal point for discussions of the outsider’s perspective, though it limits its focus to miracles in particular and not religious experiences more generally. I have covered that argument in detail in previous articles. I won’t repeat myself here. The important point is that, for the remainder of this discussion, the insider’s view will be assumed.

So the question before us is this: if someone has what they take to be a religious experience, are they warranted in believing it provides good evidence of some underlying religious reality (typically that God exists)? Can you defend the argument from (personal) religious experience?


2. Swinburne’s Version of the Argument

One of the chief defenders of the argument from religious experience is Richard Swinburne. As with most of his work, Swinburne’s defence of the argument is technical and sophisticated. Swinburne knows how to dance the analytical philosophy dance.

Swinburne starts his version of the argument by using something called the principle of credulity:


Principle of Credulity (PC): If I have perceived X to be the case, then I am warranted in believing that X is the case.


The PC is a philosopher’s way of codifying common sense. To put it in layman’s terms, it says that if you have an experience of something you are, usually, warranted in believing that this something exists. As I look at the desk in front of me, I can see a half-empty coffee cup. Consequently, applying the PC, I am warranted in believing that there is, in fact, a half-empty coffee cup on the desk.

The PC is exactly what we need to show that our experiences are, in the usual course of events, veridical. It is easy to slot it into an argument from religious experience:


  • (1) I have had an experience of God’s existence.
  • (2) If I have perceived X to be the case, then I am warranted in believing that X is the case.
  • (3) Therefore, I am warranted in believing in God’s existence.

What can be said in favour of this argument? In relation to premise (1), Swinburne distinguishes between five different types of experiences of God that religious believers can have. They span quite a range and each has been reported by one or more religious believers over the years:


TYPE 1 - Sensing a divine or supernatural being in an ordinary perceptual object - e.g. God in a waterfall. 
TYPE 2 - Sensing a supernatural being that is a public object and using ordinary perceptual language to refer to it. E.g. the Knock Villagers’ vision of Mary, Joseph and John. 
TYPE 3 - Same as type 2 but it is a wholly private experience. No one else can perceive it. 
TYPE 4 - A private sensation of a supernatural being that involves a sixth sense and so is not describable using ordinary perceptual language. 
TYPE 5 - A private experience of a supernatural being that does not seem to involve any senses at all, e.g. Teresa of Avila’s consciousness of Christ at her side.

 

The claim is that the PC can be applied to each of these five types of religious experience. Whether that is really the case is something we shall return to later on when we consider criticisms of Swinburne’s argument.

In relation to premise (2), Swinburne accepts that there are some defeaters to the PC, i.e. scenarios in which it cannot be relied upon, but he argues that these defeaters ordinarily do not apply to religious experiences. He mentions four defeaters in particular. Let’s quickly run through them.

The first defeater claims that an experience is non-veridical if you can show that the subject of the experience is generally unreliable or that the experience occurred under conditions that have been shown, in the past, to be unreliable, e.g. under the influence of drugs. Swinburne claims this defeater doesn’t apply to most religious experiences since most religious believers appear to be otherwise reliable (we’re not including Joseph Smith here!) and ordinarily do not experience God while under the influence of hallucinogenic drugs or other distorting conditions. We won’t get into this in too much detail but it is worth noting that this latter point discounts the long tradition of religious drug-taking (particularly common in non-Christian religions) and the potential impact of extreme religious practices (fasting, meditation) on the reliability of our experiential faculties.

The second defeater claims that an experience is non-veridical if it concerns something or occurs in a circumstance in which similar perceptual claims have been shown, in the past, to be false. Examples of this might include perceptual experiences that involve widespread disagreement or perceptual experiences of things that are beyond our usual ken. It seems like this defeater would apply to experiences of God, but Swinburne claims it does not because we can have some confidence in our ability to perceive a person of great power and capacity. He also suggests that religious diversity is not that great and there is reason to think that all cultures are experiencing essentially the same thing (I’ll return to the problem of diversity at a couple of points later on in this article).

The third defeater claims that an experience is non-veridical when there is already strong evidence to think that the alleged perceptual object does not exist. This is, in a sense, Hume’s famous point about the credibility of miracle testimony: miracles are very unlikely to occur, and so testimony that they have occurred is unlikely to be veridical. But according to Swinburne this doesn’t work to undermine direct experiences of God because the evidence would have to be very, very strong to work against general theism — i.e. the belief in a personal being underlying all of reality. As I interpret it, the idea here is that when it comes to grand metaphysical theses — such as whether theism or naturalism is the foundation of reality — there is little reason to think that theism is significantly less probable than naturalism and so there is no strong, a priori reason to think that God does not exist. I have some sympathy for this view since I think it is quite difficult to apply probability estimates to such grand metaphysical claims, but I also think that philosophers such as Paul Draper and Jeffrey Jay Lowder have provided some decent arguments for thinking that naturalism is a simpler hypothesis than theism and hence likely to be more probable irrespective of the evidence. That said, even if they are right, this may not render theism sufficiently improbable to think that an argument from religious experience wouldn’t work in the way that Swinburne wants it to. You would have to get into assessing other forms of evidence for that purpose (e.g. evidence of evil or suffering) and it’s impossible to provide a complete assessment of that evidence in an article of this sort. Suffice it to say, I think that other evidence suggests that God, as traditionally conceived, is unlikely to exist, but Swinburne sees it differently.
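
To make the shape of this dispute a little more explicit, here is a minimal Bayesian sketch of the comparison being gestured at (my own framing, not a formalism that Swinburne, Draper or Lowder themselves use):

\[
\frac{P(T \mid E)}{P(N \mid E)} \;=\; \frac{P(E \mid T)}{P(E \mid N)} \times \frac{P(T)}{P(N)}
\]

where T is theism, N is naturalism and E is the total available evidence. Swinburne’s response to the third defeater amounts to denying that the prior ratio P(T)/P(N) is low enough to swamp whatever support religious experience lends to theism; the simplicity arguments of Draper and Lowder are, in effect, arguments that this prior ratio is already below one before any evidence is weighed.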

Finally, the fourth defeater claims that the experience is non-veridical if there is an alternative, sufficiently credible, explanation for the experience. This is probably the defeater I would be most inclined to fall back on, but Swinburne argues that this does not apply to theistic experiences because if God exists then he plays some role in all potential explanations of our experiences - i.e. there is no independent natural explanation that undermines our confidence in the experience. That’s a slippery bit of reasoning. It could be taken to suggest that no evidence could ever undermine the existence of God. It seems tantamount to claiming that if God exists, then everything that happens must be explained by him in some way. Therefore, if God exists, there can be no alternative, non-theistic, explanations of events. But this reasoning leaves the crucial question unanswered: does God exist?

Now that we have reviewed the key elements of the argument, we can turn to its critical assessment. Is it any good?


3. Problems with Swinburne’s Argument

Swinburne’s argument has a number of flaws. Many writers have pointed these out over the years and often repeat the same criticisms. Here, I will use some of the claims made by Herman Philipse in his book-length analysis of Swinburne’s arguments, occasionally supplementing his comments with observations from others. Nothing I am about to say is particularly original, though I do hope the presentation is more user-friendly than Philipse’s discussion.

The first problem with Swinburne’s argument is that even if the PC did apply to perceptions of God it is not clear that it would provide good evidence for his existence. The PC is something we rely upon when it comes to ordinary sense perceptions but even in those cases it provides, at best, defeasible support for the existence of those sensory objects. Consider, once more, the example of the half-empty coffee cup on my desk. I see it therefore I believe it exists. But sometimes my sensory perceptions lead me astray. Maybe the light is reflecting oddly off the shiny desk surface, tricking me into seeing the cup as half empty when it is, in fact, full. Maybe I’m really tired and having a mild hallucination. Maybe I’m only seeing it out of the corner of my eye and mistaking what appears to be a cup for what is, in fact, a caddy for holding pens. And so on. The reality is that sense perceptions are often misleading, particularly on a first pass. For ordinary sensory objects we have ways of verifying and reinforcing our initial perceptions. We can get up and look at the object from different angles. We can reach out, touch it, and manipulate it with our hands. We can ask another person to take a look and confirm what we are seeing. Though there are some reported religious experiences that allow for some of this additional sensory confirmation (I’m thinking, in particular, of the story of doubting Thomas) many don’t. They are fleeting glimpses or feelings of the presence of God in another object or some profound emotional experience. They are often not public (as Swinburne points out) and so cannot be confirmed by others. All of these factors make the PC of limited utility to religious experiences.

The second problem with Swinburne’s argument is that it is not clear that the PC should apply to most perceptions of God. Look once more to Swinburne’s five types of religious experience. Several of them involve indirect or non-traditional forms of sensory perception and even, in one case, no sensory perception at all. For example, he claims that you can perceive God in another object or using a sixth sense (whatever that might be) or through some consciousness of his presence. The PC applies to ordinary sense perception and not to these more fanciful or unusual forms of perception. It’s not clear that we are warranted in believing in the objects of our perception in these cases. As Philipse points out, there is something of a tension here. On the one hand, it makes sense to assume that God would not be at all like an ordinary sensory object. He is, after all, supposed to be a bodiless, transcendent and all-powerful being. But these differences undermine the application of the PC to his perception. We shouldn’t expect the PC to apply to a being like God.

Philipse’s point here can be linked to an unusual argument made by Nicholas Everitt in his book The Non-Existence of God. For the most part, Everitt presents standard critiques of Swinburne’s argument, but he does add a unique one of his own. He claims that God could not control all the conditions of his perception in the way that Swinburne supposes he could (i.e. appear to some people as a direct sensory object; to others as present in physical objects; and to others through a sixth sense). Everitt’s point is a logical/metaphysical one. He claims that any mind-independent entity — i.e. anything that is not simply a product of our minds — must obey some consistent causal laws. This applies even to God, as a matter of metaphysical necessity. But if this is true, then God cannot change the causal laws to which he is subject in order to be perceived in radically different ways by different people at different times. At least, he cannot do this and remain the same object or being over time. I’ll quote from Everitt in full on this point (full disclosure: I’m changing the sequence and tense of some aspects of this quoted passage to make it fit better with this discussion):


[The] Swinburnean concept [of God]…envisages a being who can control not just this or that of its perceivable properties, but every property by which it could be detected in any way at all. The sceptic might well try to argue that it is not logically possible for there to be any such objects… The very being of an object [is] partially constituted by the causal powers and limitations that it [has]. It could not lose all its existing causal powers and limitations in favour of another set, and yet still remain the same object; and it could not lose all its causal powers and limitations and remain an object. 
(Everitt 2004, 164-165)

 

I’m not sure I can fully wrap my head around this point, and Everitt himself admits that it is controversial, but it could at least undermine Swinburne’s claim that it is possible for there to be a being that could be perceived in such radically different ways. The problem with this, however, is that a religious believer could easily adapt their view in response to Everitt’s argument by accepting that there are some limitations on how God can be perceived, and hence that only some forms of religious experience are veridical.

The third problem with Swinburne’s argument is that if the PC did apply to perceptions of God (or any other religious experiences) it could have perverse consequences for the believer. Two perverse consequences are of particular importance. The first is that if the PC applies to perceptions of God then a negative version of the PC should apply to the absence of such perceptions. In other words, if a non-believer fails to perceive the presence of God (in any form) then they too should be warranted in believing that God does not exist. This is because a negative principle of credulity seems to be as good as a positive principle:


Negative Principle of Credulity (NPC): If it seems to a subject S that X is absent, then X is probably absent.

 

Swinburne rejects the NPC. He claims that, at least when it comes to God, experiencing the absence of X is not evidential in the same way that experiencing the presence of X is. In making this claim he deploys an asymmetry argument. He claims that not seeing a chair in front of you is good reason to think the chair is not there because you know what to expect if the chair is there. But because God is so different from other perceptual objects, we do not know what to expect if he is present. So just because we fail to perceive his existence it does not follow that he does not exist.

But as Michael Martin points out in his classic book Atheism: A Philosophical Justification, this leads to all sorts of problems for Swinburne’s defence of the argument from religious experience. The ability to inductively infer the existence of an object from an experience of that object is crucially dependent on the capacity to know that a failure to experience that object under the right conditions would imply the non-presence of that object. This is true in the case of our perception of ordinary objects like tables and chairs. It is only because we know that they are unlikely to exist if they are not perceived under certain perceptual conditions that we can infer they are likely to exist when they are experienced under the same conditions. If the PC is to apply to perceptions of God, then the same logic should hold. Swinburne cannot engage in special pleading regarding God’s unusual nature to get around this. If he wants to do that, then he needs to drop the application of the PC to perceptions of God. Furthermore, as Martin points out, background knowledge seems to play a key role in determining whether positive or negative perceptual claims should be taken seriously. To use his example: 50 people claiming to have seen dodos in Antarctica is not necessarily good evidence for the presence of dodos on that continent. Contrariwise, 50 people failing to see dodos on the island of Mauritius, despite looking repeatedly for them, sounds like good evidence for their absence. This is, in large part, because we know where to expect to see dodos. When we don’t know what to expect, then it is hard to grant perceptual evidence any real credence.

The other perverse consequence of applying the PC to perceptions of God is that it seems to force the religious believer to deal with the diversity of religious experiences. If a Muslim perceives the presence of Mohammed in a waterfall, does this provide justification for his religious worldview? What about the Hindu who believes he has perceived Vishnu? There are two options open to the religious believer in these cases:


Universalism: They accept that all of these experiences are veridical and provide support for some particular religious beliefs (or that they all point to the existence of the same underlying religious reality). The problem with the universalist response is that it often explains away (or simply ignores) the differences in content across these different religious experiences.

 

Exceptionalism: They argue that their religious experiences (linked to their religious tradition) are veridical but those from rival religions are not. The problem with this is that it often seems like special pleading and tends to rely on some prior commitment to a particular religious tradition. In other words, the experiences themselves are not self-justifying. It is a background commitment to a particular faith that justifies treating experiences linked to that faith as veridical.

 

The fourth and final problem with Swinburne’s argument is that, contrary to what he claims, there are sometimes (perhaps even often) alternative naturalistic explanations of religious experiences that undermine their credibility (hallucinations; visual illusions; tricks of the light; suggestibility; emotional trauma; over-interpretation of a mundane experience etc). If a religious believer accepts that some experiences are non-veridical, such as those from a rival tradition, and that there are alternative explanations available in those cases, then they at least have some prima facie reason to be sceptical of their own. That said, there are ways for committed believers to resist the allure of alternative explanations. They can highlight disanalogies between their experiences and those of other people. And since no naturalistic explanation is likely to adequately explain every religious experience this can end up like a game of explanatory whack-a-mole: “you might be able to explain those experiences, but you cannot explain mine!” Similarly, the believer can take Swinburne’s line and just argue that God must feature in the explanation of everything since he is the foundation of all that exists. The problems with this strategy have already been noted.

In sum, there are several problems with Swinburne’s argument. Taken collectively, these problems suggest that, at a minimum, a religious experience by itself cannot be strong support for the existence of God. That experience must pass other epistemic tests and a believer would more than likely require additional argumentation to support the inference from the experience to the existence of God.


4. Alston’s Argument from Mystical Practice

Another famous defender of the argument from experience is William Alston. In his book, Perceiving God, Alston defends a variation on the argument that focuses on the dependability of different epistemic practices (i.e. practices for generating knowledge). In brief, his claim is that mystical practice is its own, self-supporting, epistemic practice and, in the absence of good reasons for thinking that this practice is unreliable, a person is entitled to infer that their religious experiences are veridical.

Alston’s book is a sophisticated bit of epistemology, cut from a similar cloth to that of Alvin Plantinga’s defence of reformed epistemology. I won’t be able to do justice to all its intricacies here, but there are some good critiques of it in the literature, such as those from Nicholas Everitt, JL Schellenberg and Keith Augustine (the latter is a particularly useful explanation and critique of Alston’s work).

Alston’s argument is both similar to and different from Swinburne’s. Both start from the claim that ordinary sensory perception is justified. Indeed, it is self-justifying. When I see the half-empty coffee cup before me, nothing further is required to justify my belief in its presence. The sensory perception itself is enough. Alston adds to this the claim that any attempt to find a justification for the sensory perception will be circular: you’ll end up claiming that your sensory perception is justified because of some other, direct or indirect, sensory perception (e.g. perceiving the object from a different angle; asking someone else what they perceived). But where Swinburne sees religious experiences as particular forms of sensory perception (with the exception of Type 5 perceptions), and hence justifiable as forms of sensory perception, Alston sees religious experiences as distinct things. He views perceptions of the presence of God as a distinct source of knowledge about His existence that are not the same as ordinary sensory perceptions. They are mystical perceptions.

What justifies the belief in the veridicality of mystical experiences? Well, according to Alston there is no non-circular epistemic justification. We are in the same predicament as we are when it comes to sensory perception. Instead, we have to focus on the general reliability of the mystical practice of which those experiences are part and assess how that practice fares relative to other belief-forming practices such as sensory practice. Alston claims that mystical practice involves more than just perceptions of God. It also involves reflections on the meaning and reliability of those perceptions. Furthermore, within particular religious traditions, sages and mystics have developed criteria for establishing which perceptions are generally reliable indicators of the presence of God and so participants within mystical practices should apply those criteria to their own perceptions of God. When they do this, they can generate reliable beliefs from religious experiences.

In sum, Alston argues that mystical practice, like sensory practice, is its own thing: its own set of belief-forming and reliability-checking rules. Anyone who has a mystical experience and abides by the norms of their mystical tradition (and, to be clear, Alston is primarily concerned with Christian mystical traditions) can be justified in believing in the veridicality of their experiences. Or, perhaps more accurately, their justification of their religious perceptions is no worse than the way in which most people justify their ordinary sensory perceptions.

Alston also accepts that there are limits to this commitment to a particular mystical tradition. It could be that the believer has some reason to think that the entire mystical tradition is erroneous or an exercise in psychopathology or something of that sort. But, at least in the case of Christian mystical practice, Alston argues that there is no reason to accept this. Contributors to that tradition appear to be honest, mentally normal (or no less abnormal) truth-seekers and there are some reasons to think it is a reliable practice. Hence, it is possible to defend an argument from religious experience from within that tradition.


5. Problems with Alston’s Argument

Alston’s argument is ingenious in some ways. It sidesteps many of the issues with Swinburne’s argument, in large part because it accepts that there are many philosophical problems with our ordinary sensory belief-forming practices. But this means that its conclusion is more modest than Swinburne’s. Where Swinburne is claiming that we have good reason to think that religious perceptions are veridical, Alston is, at best, saying that mystical experience is not epistemically worse than ordinary sensory perception. But if ordinary sensory perception is in bad shape, then it’s not clear that this says all that much. We could take Alston’s argument to warrant a more general form of philosophical scepticism about sensory perception.

Very few people want to embrace a more general form of scepticism so, if we are not inclined to doubt all the evidence of our senses, is there anything else to be said about Alston’s argument? Indeed there is. It’s not clear that it meets even its own modest aims. There are at least four reasons to think that mystical practice is in worse shape than ordinary sensory practice and that it is not a particularly reliable belief-forming practice.

The first reason for this is that it is not clear that mystical practice really is a distinct belief-forming practice. Think back to Swinburne’s list of different types of religious experience. With the exception of Types 4 and 5, most of them just seem like different sub-types of sensory perception. Consider once more the experience of the villagers in Knock: they allegedly saw three supernatural beings. Why would we not assess the reliability of those experiences against the standards we usually apply to sensory experiences? What makes those experiences a distinct belief-forming practice? If the answer is nothing, then these experiences are subject to the same criticisms given above of Swinburne’s argument.

The second reason is that mystical traditions seem to generate contradictory and inconsistent experiences and beliefs, even when viewed from an internal perspective. Keith Augustine makes a lot out of this point in his discussion of Alston’s argument, highlighting contradictions in Christian mystical practices: different forms of perception of God; different meanings/interpretations of those perceptions. Alston is aware of this problem and responds by highlighting that other belief-forming practices generate inconsistencies too (e.g. different witnesses see different things; different scientists develop different theories to explain the same data). But even Alston accepts that mystical traditions seem to generate more inconsistencies than other practices and so may warrant less credence as a result.

The third reason is that Alston’s argument seems to generate a powerful version of the problem of religious diversity: there are many different religious mystical traditions and participants within those traditions have distinct and incompatible religious experiences. They can’t all be right, can they? If you have a religious experience, and then encounter another person with an incompatible religious experience, and if there is no reason to think that their mystical tradition is any more or less reliable than yours, then you don’t have any good reason to accept the veridicality of your own experiences. This is, admittedly, something that religious believers sometimes deny, but JL Schellenberg makes what I think is a simple but persuasive argument on this point. Imagine three witnesses to a car accident, each of whom perceives the car to be a different colour. Suppose you are one of those witnesses. If you have no reason to think the other witnesses’ sensory perceptions are defective or misleading, then the mere fact that you each have incompatible experiences gives you reason to doubt the veridicality of your own. The same logic should apply to believers coming from different religious traditions.

Of course, it is possible to avoid this extreme form of relativism. But this brings us to the fourth reason to discount Alston’s argument. In order to avoid relativism between different belief-forming practices, you have to appeal to some practice-independent criteria for establishing the reliability of such practices. This is, in fact, a key feature of Alston’s argument: we don’t assess particular experiences, per se, but rather the belief-forming practices of which they are a part. But if we appeal to practice-independent criteria for reliability, two distinct problems arise:


(a) The kinds of criteria to which Alston appeals to distinguish true religious experiences from false ones are a bit odd. For example, he claims that if the religious experience is concerned with something useful and generates internal peace, trust in God, patience, sincerity and charity, then it is more likely to be veridical. Conversely, if it is concerned with useless affairs and generates perturbation, despair, impatience, duplicity and pharisaical zeal, then it is more likely to be non-veridical (Alston 1991, 203). But why on earth should we suppose that those factors are associated with the veridicality of experience? And how do we account for the fact that non-believers can display most of the positive traits (patience, charity etc) without experiencing God? Does this imply that their failure to experience God is also veridical? If so, this gives rise to a new version of the problem from the negative principle of credulity.

 

(b) There are tensions between the beliefs generated by different practices. Famously, for example, there are some tensions between traditional Christian beliefs and the beliefs generated by science and history (e.g. biblical historical studies). It’s not possible to do a full accounting of those practices and their reliability here, but there are good reasons to think that these other practices are generally reliable, possibly more reliable than Christian mystical practice. But if that is true then the believer needs to do further work to resolve the tensions between these practices. Again, the religious experiences themselves cannot be self-justifying and do all the work.

 

In short, for all its ingenuity, Alston’s argument doesn’t seem to fare much better than Swinburne’s. In reaching this assessment, I have focused on particular features of Alston’s argument. It is worth adding that many of the other criticisms of arguments from experience mentioned previously — that there are alternative naturalistic explanations or that God cannot be an object of perception in the supposed way — could also apply to this argument.


6. Conclusion

In this article, I have considered the argument from religious experience, focusing on versions developed by two of its proponents: Richard Swinburne and William Alston. Both of the arguments raise a number of fascinating philosophical questions, particularly questions concerning the relationship between perceptual experiences and the veridicality (or non-veridicality) of such experiences. That said, for all their technical sophistication and analytical rigour, I don’t find either of the arguments persuasive.



Monday, January 18, 2021

What must parents do for their children?



In a previous article, I examined the structure of the parent-child relationship, and considered an argument to the effect that the unique structural properties of that relationship can provide a justification for becoming a parent. To briefly recap, although I think there is much to this idea, I also struggle to find a justification for parenthood within it, particularly given the moral risks involved. And yet, despite all this, I am myself a parent and have chosen, however unwisely, to take those risks.

In this article, I want to continue my examination of the parent-child relationship but take the discussion in a different direction. I want to consider the purpose of the parent-child relationship (if any) and the consequent duties of parents vis-a-vis their children. This is a topic of considerable significance for any parent. One thing I noticed when I became a father was the level of anxiety and uncertainty I experienced with respect to what I ought to be doing for my daughter: how should I care for her? What should I be doing? Am I a bad father? Am I, to paraphrase Philip Larkin, fucking her up in some disastrous way?


They fuck you up, your mum and dad. 
They may not mean to, but they do. 
They fill you with the faults they had 
And add some extra, just for you. 
(Larkin, “This Be the Verse”)

 

To a large extent, my anxieties have dissipated in the 15 months since my daughter was born. In part, this is simply the result of increased competence. In an experience that I am sure has been shared by millions of other parents, I have found that the sweat-inducing panic of the first few weeks has gradually given way to a more sure-footed approach. But this isn’t the full story. I also think my anxieties have eased because, over the past 15 months, I have altered my understanding of my role as a father. I think of it less now as a series of tasks and duties that I must perform to some ideal end and more as an ongoing relationship that I must sustain with my daughter.

In what follows, I want to explain why I have taken this approach, why I think it is advantageous, and how it relates to some of the philosophical writing on the duties of parents.


1. The Optimising Model of Parenting

In my experience, it is common for parents to conceive of their role as a set of tasks and duties. They must care for their child. They must feed it, clothe it, vaccinate it, educate it and so on. If they fail to do so, they will have violated their primary duties as parents. Sometimes these violations may even have legal consequences. If children are neglected or improperly cared for it is not unusual for the state to intervene and take them away from their parents, even if only temporarily.

To what end must all these duties be performed? Obviously, there is a minimal goal: keep the child alive; don’t fuck it up. But there is more to it than that. Most of my peers — college-educated, middle-class thirty-somethings — either implicitly or explicitly embrace, or at the very least struggle with, a stronger view of the purpose of parenting: to produce an optimal human being.

To be clear, very few of my peers are like James Mill, father of John Stuart Mill. James Mill tried to mould his child into a political and philosophical reformer through a rigorous regime of home-schooling. He had a specific type of life in mind for young John Stuart — a life that he thought was socially and personally optimal — and he structured John Stuart’s daily routine around the dogged pursuit of that goal. But even if they are not so single-minded and dogged in their pursuit of a particular parental goal, they are not far off. They think it is essential that they give their children ‘the best start in life’, enroll them in the best schools, and generally ensure that they have every opportunity to succeed as adults. What this success entails is sometimes unclear, but there is a common sense that if you don’t do all these things for your child, you are failing as a parent. The pressure is sometimes immense.

I call this the optimising model of parenting and I find it problematic, to say the least, but I must confess that I found it seductive in the early stages of parenting, and I occasionally find myself lapsing into it in conversation with others.

I am not sure that many philosophers openly embrace the optimising model of parenting but there are ideas out there that are similar to it. Julian Savulescu, for example, has famously defended the so-called principle of procreative beneficence, according to which prospective parents ought, among the children it is possible for them to have, to procreate the best possible one. Savulescu uses this as an argument for preferring procreation via assisted reproduction as opposed to the traditional method (because the former allows for genetic diagnosis of embryos and selection of the best embryos). It’s easy to see how this principle could be transferred to the rest of the parental role: once the child is alive you have a duty to ensure that it has the best possible life. That said, the principle of procreative beneficence is widely critiqued in the bioethics literature (I covered some critiques previously in my own writings) and I am not aware of anyone using it to discuss ongoing parental duties after a child has been born. (I’m sure someone will correct me if I’m wrong).

What is more commonly discussed is Joel Feinberg’s ‘open future’ principle. According to this, all children have a right to an open future. Consequently, it is incumbent upon all parents to raise their children in a way that ensures that they have an open future. Feinberg’s argument is based on the idea that adults have autonomy rights: rights to choose their own preferred path in life, based on their interests and preferences (excluding obvious moral limits such as a preference for murder or rape). Since children will become adults, their future autonomy rights must be protected by their parents. Hence parents have a duty to develop their children’s capacities in such a way that they can exercise autonomy as adults, and they must not foreclose any possibilities from them in the process. On a maximal interpretation — which the philosopher Joseph Millum argues is implied in Feinberg’s original defence of the principle — this entails something pretty close to an optimising model of parenting. You don’t pick a particular goal for your child but you have to produce a child that can do pretty much anything that it wants to do.

Finally, there is the famous ‘best interests of the child’ principle, which is commonly used in legal and policy settings to make decisions that might affect the future well-being of a child. On the face of it, this sounds like it could be used to endorse an optimising model of parenting. After all, it seems to suggest that parents and all other people involved in childcare should act in a way that optimises the welfare of the child. But I’m not sure it works out that way in practice. Although the best interests principle is applied in different ways in different countries, it seems to be primarily used when a specific conflict has arisen between parents or other childcare service providers (for example, conflicts sometimes arise between medical service providers and parents) as to what might best serve a child’s interests. In other words, it’s not as if legal authorities use it to closely scrutinise every parental decision and intervene if something seems non-optimal. They use it when people disagree as to the preferred course of action. Still, the very idea of a best interests test being applied to what they are doing might exert some psychological pressure on parents in their day-to-day lives. I’ll touch upon this again in the conclusion to this article.


2. Problems with the Optimising Model

As I mentioned, I think the optimising model of parenting is problematic. Why so? There are many reasons, some of which will seem obvious to you. Let me mention four main ones.

The first, and most philosophically obvious, is that I do not think that there is such a thing as an optimal life. I am a pluralist when it comes to the well-lived life. There are many pathways to the good life and it is a mistake to suppose that an overly narrow one should be forced upon your child. Most people seem to agree with this idea when asked. But their actions belie this agreement. Implicitly, it seems like many parents (at least in my peer group) suppose that there is, if not a single pathway, a very narrow pathway to the good life: you get your child into the best school, you educate them well, make sure that they succeed academically and socially, get them into a good university, and then push them towards a stable and financially lucrative career. Most of the time, this narrow pathway is favoured because this is the path the parents themselves followed and it is the one that is favoured and reinforced in their cultural milieu. Perhaps other narrow pathways are preferred in other peer communities. But whatever content it might have, the assumption that there is this narrow pathway to success is, I believe, a mistake and something that parents should avoid reinforcing. One thing I always admired about my own parents, for example, was how they did not force a particular vision of success on me. They allowed me to pursue my own interests to a large extent, letting me drop certain activities (piano, sports) and take up others when I wished to do so. I would hope to adopt a similarly flexible approach with my daughter.

The second problem with the optimising model is that it assumes that parents have a lot of control over the shape of their children’s lives — that through their choices they can significantly influence their children’s capacities, interests and emotional well-being. I’m not sure that this is really true. I’m not going to rehash the whole nature-versus-nurture-versus-peer influence debate here, but in reaching this conclusion I have been influenced by two books I read in the past six months. The first was Robert Plomin’s book Blueprint. Plomin is a well-known (should I say ‘notorious’?) figure in the field of behavioural genetics. He was a pioneer in doing twin studies to understand the genetic influence on the variance in certain character traits. His book presents some pretty good evidence to suggest that the genetic influence on the variance in behavioural traits is, in many cases, far larger than you might expect, oftentimes greater than the environmental or parental influence. There are criticisms of his work, of course, but I found the book to be more persuasive than I expected it to be. The other book was Michael Blastland’s book The Hidden Half. This book wasn’t about parenting or behavioural genetics per se, but it was about the role of chance and uncertainty in human life. The central thesis of the book is that we know a lot less than we think we know about the causal influences over certain processes. This includes the causal influences over behavioural traits and dispositions. In some ways then, Blastland’s book is a counterweight to Plomin’s. Plomin is more sure about the causal influences. But both complement one another to reinforce the view that parents may not have as much control over their children’s long-term well-being as they might like to suppose. (Relatedly, there is Judith Rich Harris’s famous book The Nurture Assumption which argues that a child’s peers have more effect on its character than its parents. I’m not able to assess that book here but if it is right it lends further support to the view that parents have less control than we might think).

None of this is to suggest that parents have no role to play in their children’s lives nor that their decisions about how to educate their children and so on do not matter. They do, but they matter more from a relational perspective than from a character-moulding perspective. I’ll talk about this in more detail in a moment.

The third problem with the optimising model is that it encourages parental guilt, shame and regret. If your job as a parent is to optimise your child’s life, then you have a heavy burden of responsibility resting on your shoulders. You may constantly question the choices you make, fearing that you are fucking up your children’s lives at every point. This can lead to decisional paralysis, which is an impediment to good parenting. This is something I struggled with a lot in the early months of my daughter’s life. I was so anxious about how I should best fulfil my parental duties that I often didn’t know what to do. I was afraid that, like the butterfly flapping its wings in the jungle, every decision I made might have long-term, detrimental consequences. This is an unhealthy psychological burden for anyone to deal with and I think it prevented me from relating to my daughter as a result.

Finally, another problem with the optimising model is that it is, arguably, contrary to principles of social justice and equality. If parents must secure the best opportunities for their children, and if some parents have more resources to do this than others, then there is a danger that the optimising model just reinforces structural inequalities in society. This is something that many philosophers have written about, particularly when it comes to policies around school choice. Of course, it is not something that individual parents can do much about. It is a tragedy-of-the-commons type of problem. Parents make choices that are rational — perhaps even commendable — from their own perspective but this reinforces a less desirable general social equilibrium. The solution to this problem will require some kind of top-down policy intervention that makes the seemingly rational parental choices less desirable. Still, if you have a social conscience, this is something you might worry about with the optimising model of parenting, and it might add to your level of guilt as you pursue it.

Taken together, these seem like good reasons to reject the optimising model of parenting. But before I move on, I will add that I think these critiques hold for lesser versions of the optimising model too. For example, some people might argue that parents do not have to optimise their child’s life but they should ensure that they are happy or contented. I think this demands too much as well. No one is happy or contented all the time; there are many pathways to happiness; and we may not have as much control over the happiness of our children as we might like. For example, I believe that my parents did a pretty good job raising me but I am still prone to long spells of discontent and unhappiness. But this has nothing to do with them and the choices they made for me. It’s just part of the human condition. It would be naive to expect my daughter to be any different.


3. The Relational Model of Parenting

Let’s get our bearings. The problem with the optimising model and its ilk is that they all assume that the goal of parenting is to produce a child with a certain mix of traits, dispositions, experiences, emotions and life opportunities. Parental duties and responsibilities then flow from the pursuit of those goals: we ought to do whatever makes it more likely that our children will achieve these goals. If we fail, then we fail as parents.

A better approach, at least in my mind, is to assume that we are in an ongoing relationship with our children. This relationship doesn’t have a particular end goal or purpose. There will be ups and downs within it. Sometimes our children will be frustrated, sad or angry. Our job is to be there for them, provide them with support and encouragement when needs be, to help them out when we can, but not to dictate the shape of their lives. Sometimes we will have joint pursuits with our children. For example, we might be playing a game with them that has a particular goal. In those cases we work together with them to achieve that goal. Similarly, our children might have pursuits and projects of their own. In those cases, we might help them out and try to ensure their success. But these goal-oriented pursuits are not the be-all and end-all of the relationship. They are incidental aspects of the ongoing relationship.

In this respect, our role as parents is similar to our role in other interpersonal relationships. Think, for example, of friendships and partnerships. I don’t think of my relationship with my friends or my wife as having any particular long-term goal. I’m not trying to control my friends’ lives or shape them in a particular way. I just want to enjoy their company, talk to them about the challenges and opportunities that life throws our way, engage in some mutually-fulfilling activities, and so on. We are in it for each other. For the support and enjoyment we can provide one another along the way.

Now, to be clear, I’m not claiming that our relationships with our children are exactly like these other kinds of interpersonal relationship. That would be silly. I noted in a previous article that the parent-child relationship is structurally unique, particularly when it comes to the level of dependency of the child on the parent in the early phases of the relationship. This asymmetrical dependence does impose greater burdens on the parent in those early phases. They must care for and avoid injury or harm to their child. I get that.

But I think of this as a relatively minimal duty — of the ‘do no serious harm’ variety. Certainly, I found that once I shifted from thinking of my role as a parent as one in which I had to ensure an optimal life for my child to one of simply being there and relating to her, a lot of my anxieties and worries lifted. Instead of every activity having to serve some ultimate purpose — play in order to ensure good cognitive development; reading in order to ensure literacy and academic success — I found that activities could be enjoyed for their own sake, as part of the ongoing relationship. The fear of failure could be replaced by the love of the interaction itself. Now, I know that there will be failures (of a sort) along the way, but as long as I am there for her, and do my best to support her and help her, I think I’m doing my bit.


4. Conclusion: Is the relational model sustainable?

This might all sound a bit pollyanna-ish and naive. I can only speak from my personal experiences of parenting thus far. My experience is that I found myself trapped in the goal-oriented, optimising model for months and this provoked a lot of guilt and anxiety. More recently I have shifted to the relational model. This has helped me to be less anxious and fearful of my role as a parent.

But the optimising model still holds some allure. As Richard Smith points out in his article ‘Total Parenting’, there is considerable ideological power behind it (or something very close to it). Through a mixture of cultural beliefs and practices, as well as social and legal policies, parents are now frequently reminded of the risks of getting things wrong with their children, of failing to provide for them or support them in the right way. I’m sure I will be sucked back into this mode of thinking as my daughter grows older.

That said, one of the surprising benefits of raising a child in the midst of the COVID-19 pandemic is the opportunity it has provided to resist this failure mode of parenting. Don’t get me wrong: I’m not grateful for the pandemic. But the disruption it has entailed, and the need for improvisation for parents who must work from home and care for children themselves, has reduced some of the pressure that might ordinarily be felt. It’s impossible to do all the things that are ordinarily expected of parents in the current environment. You just have to muddle through and enjoy the experience for what it is.

In short, in the midst of the pandemic, it is impossible to be the perfect parent. But maybe this should help us to realise that it is impossible to be a perfect parent at any time.

Friday, January 15, 2021

The Parent-Child Relationship: Can it justify becoming a parent?


I recently became a father. Well, when I say recently, I mean just over a year ago (October 2019). Being a parent raises a number of practical and philosophical questions. Should you have children in the first place? How do you care for a newborn? How do you give your child the best start in life? Is it wrong to give your child special treatment over other children/people? Does being a parent give meaning to life that was previously absent?

Ordinarily, I am inclined to prolonged and frequent spells of philosophical self-reflection. The examined life and all that. One thing that has surprised me about becoming a parent is how little of this I have done on the subject of parenting itself. Perhaps this is not unusual. Perhaps the first year of parenting tends to be dominated by the practicalities of caring for a child and not its philosophical import. But now that I have settled into a somewhat predictable routine with my daughter (fingers-crossed!), I have a bit more time for my usual ruminations.

And there is plenty to ruminate on. In this article, I will focus on one issue in particular: the nature and value of the parent-child relationship. We have many relationships in our lives. They are often a source of value. Think about your friends and intimate partners, for example. Few of us would do without them. The parent-child relationship is both different from and similar to these other kinds of relationships. What I want to consider are its structural features and how these affect the value of the relationship as a whole. I’ll be folding some of my own thoughts, from my first year-and-a-bit of parenting, into the discussion as I go along.

One thing I won’t be focusing on in this article, though it does linger in the background to some extent, is the ethics of having children. Some philosophers are anti-natalists. They think it is wrong to have children. Most people are pro-natalist. They think it is desirable, perhaps even obligatory to have children. I’ve examined the views of these different camps elsewhere in my writings. I won’t do so at any length in what follows. It’s a bit late for me to engage in this debate anyway since I am now already a parent, but I will pass some occasional comments that touch upon the anti versus pro natalist debate as we go along.


1. The Nature of the Parent-Child Relationship

Relationships come in many different forms. The relationships we have with our friends and intimate partners ought to be voluntary (in the sense that we ought to be able to choose our friends and partners) and broadly egalitarian (in the sense that no one party should dominate or subordinate the other to suit their needs). Of course, friendships and intimate partnerships often fall short of these ideals, but when they do there is generally considered to be something defective or problematic about them.

Not all of our relationships are voluntary and egalitarian. Our workplace relationships, for example, can be relationships of inequality: one person (the boss) might be deemed to have more control and power than another (the employee). In addition to this, sometimes our workplace relationships are involuntary in nature: we don’t always get to choose who our colleagues are; they are chosen for us. This doesn’t necessarily make these relationships ethically defective; it just makes their ethical qualities distinctive. Bosses, for example, might have different (more burdensome) duties than employees and there might be a greater need for compromise and toleration among colleagues than there would be in purely voluntary relationships.

What about parent-child relationships? In some respects, they are a sui generis phenomenon - unique in human experience. From the child’s perspective, they are never voluntary: they never get to choose who their parents are. The sole exception to this, perhaps, is when the child reaches maturity and can legally or practically emancipate themselves from their parents. But even then their parents remain their parents: they cannot eliminate them entirely from their lives or sense of self-identity. Sometimes the relationships are involuntary from the parent’s perspective too — e.g. in cases of rape or forced pregnancy or where there is an absence of birth control — but among most of my peers this is rare. Most people I know voluntarily choose to become parents. Or, as might be more true in my own case, voluntarily assent to other people’s choice to become parents.

Parent-child relationships are also highly asymmetrical. There is, as Christine Overall puts it in her book Why Have Children, both inherent asymmetry in the relationship — because the child never chooses their parents — and contingent asymmetry — because during the early years the child is highly dependent on their parents for survival. Furthermore, during these early years, parents can shape their children’s lives in ways that can have permanent or long-lasting effects. This dependency can reverse later in life. When children mature, they can become relatively independent beings, and when parents reach a state of extreme old age, they often become highly dependent on their children for their survival. Such are the cycles of life — cycles that highlight, to some extent, the flaw in thinking that any of us is ever truly independent from anyone else.

Despite the inherent and contingent asymmetries in the parent-child relationship, there is still a role for equality. A child is not a thing to be toyed with or experimented upon by its parents. A child is — or at least will become — a person in their own right. As such, they deserve — or at least will come to deserve — the same level of respect owed to all human beings. What started as a highly asymmetrical relationship will become more egalitarian over time.

Finally, it is worth commenting on the role of unconditional love (or affection and respect) in the parent-child relationship. It is often said that parents do, or at least should, love their children unconditionally. But this seems like an unrealistic and undesirable standard. I think, instead, a parent’s love for their child should be highly robust and resilient. It should be able to endure lots of ups and downs, but it should also have some limits. If a child turns out to be a mass murderer or serial killer, it’s hard to see why a parent should be obliged to love them (though, of course, they may still have a deep bond and natural affection for them). What about a child’s love for its parents? Well, again, ideally it seems that this should be robust and resilient too, but given the asymmetries in the parent-child relationship, it does not seem fair to hold a child to the same standard as a parent.


2. The Value of the Relationship as an Argument for Becoming a Parent

According to some philosophers, the unique structural properties of the parent-child relationship, and the effect it has, in particular, on parents, provide a potential justification for having a child. Christine Overall is one defender of this view and she introduces it at the very end of her aforementioned book Why Have Children.

This is an interesting book. In it, Overall argues that the decision to have children is ethically fraught and that people often don’t treat it with the level of ethical scrutiny it deserves. And while she rejects a strongly anti-natalist stance, such as the one defended by David Benatar, she also rejects the notion that there is an ethical duty to have children or that having children is an especially noble or desirable thing (if you are interested in her arguments, I covered some of them in more detail in this article). It consequently comes as something of a surprise when, in the final chapter, she argues that anyone who has thought about having children should not miss out on the opportunity to have one (though possibly, as she herself puts it, “no more than one”). In other words, Overall’s view is that while it is not obligatory to have children, nor essential to the well-lived life, it is permissible and can contribute to the well-lived life, in the right conditions.

What’s the argument for this? It’s a little difficult to unpack, but here’s how I read it:


  • (1) A flourishing life is a good thing and humans are, ceteris paribus, justified in aspiring to live one.
  • (2) There are many different pathways and elements to a flourishing life; we are free to choose among these pathways and elements as we see fit (provided we do not violate some other ethical duty in the course of doing so).
  • (3) Having a child and experiencing the parent-child relationship can be an element in a flourishing life due to the unique nature of the parent-child relationship (and, for at least some people, having a child does not violate other ethical duties).
  • (4) Therefore, having a child can be justified as an element in a flourishing life.

Now I’ll be the first to admit that this is a rough-and-ready formulation of the argument. It won’t win any prizes for its logical precision. But it does give us enough to subject Overall’s argument to some critical evaluation.

The first two premises of this argument strike me as being relatively uncontroversial. Of course we are justified in trying to live a flourishing life. That said, the ceteris paribus (“all else being equal”) clause is crucial in both of these premises: our flourishing ought not to come at the expense of some other ethical duty. To take an extreme case, perhaps I could live a very worthwhile and enjoyable life by killing my closest rival at work (let’s assume I can live with the guilt). But I would not be justified in doing this: my flourishing cannot take precedence over his right to life.

The third premise is the crucial one. What is it about the parent-child relationship that leads to flourishing? Overall walks a fine line in response to this question. She accepts that many times the choice to have a child is not justified and that parents can do it for poor reasons. But when undertaken for the right reasons — and when parents accept the independent personhood of their children — the decision can be transformative. In a critical passage, she highlights some of its potential benefits:


[in becoming parents, people find that] many other abilities have a chance to flourish; their ability to observe; their understanding of human development and psychology; their courage and tenacity; their appreciation for play; their artistic, musical, scientific, or athletic abilities; and their understanding of their own place in the social world…In choosing to have a child, one is deciding both to fulfil one’s sense of who one is and at the same time aspiring to be a different person than one was before the child came along. In becoming a parent, one creates not only a child and a relationship, but oneself; one creates a new and ideally better self-identity. 
(Overall 2012, p 218)

 

I buy this argument to an extent. Certainly, my own experience of parenthood suggests that it helps to cultivate positive character traits. For example, before my daughter was born, I worried that I was too obsessed with myself, with my own work and success, with my own projects and ambitions, to be a good parent. I worried that I would resent my child for taking up my time and attention, for taking me away from the things I once valued so dearly. Since she was born, I have been pleasantly surprised by how little this has been true. The reality is that I positively enjoy spending time with her and caring for her. In doing so, I have had to develop capacities that I had lacked or had left to atrophy (patience, playfulness etc). Indeed, if anything, I resent my work now for taking me away from her. The experience has suggested that I am, perhaps, less selfish and self-obsessed than I previously supposed. In short, I have found the experience to be self-transformative and, dare I say it, I might even be a better person as a result.

Still, this argument sits a little uneasily with me. I have two major worries. The first is that, in suggesting that the decision to become a parent can be justified insofar as it transforms me (the parent) into a better person, the argument seems to endorse an egotistical and selfish motivation. It doesn’t seem like it respects the independent personhood of the child at all. If we buy the argument, the child is just a project for self-transformation.

Overall walks a fine line in this regard. She accepts that some parents do have children for selfish and unjustifiable reasons but counters that, with the right motivation, having a child can be “self-oriented…not inevitably selfish” (2012, 217). That’s a subtle, perhaps meaningless, distinction. My way of reasoning it out is to say that in becoming a parent I have developed attributes that have to do with caring for another person (attributes that have to do with me but are largely other-oriented in nature). In other words, I may be a better person by becoming a parent but only to the extent that I am better at caring for and relating to another person. In this respect, being a parent might be similar to any charitable project that you find fulfilling: you get something out of it but only because other people do too. In this sense it is a win-win. But this comparison with charitable work also highlights the oddness in choosing the parental pathway to this kind of flourishing. Why did I have to create another person — one with whom I have a unique and asymmetrical relationship — to develop these more altruistic character traits? Why couldn’t I just do this with other, already existing, persons? Surely, there is something selfish, and perhaps even a little grandiose, in creating a dependent person for this purpose? I’m not sure I will ever wrap my head around that.

This links to my second worry. Having a child is a morally risky business. There are risks to the mother as she bears the child. There are risks to the child once it is born. There are also potential risks to me and to my relationship with the child’s mother. What if I didn’t find myself transformed by becoming a parent? What if it turned out I was as selfish and resentful as I feared I would be? What if I am not able to meet the challenges of parenthood as my daughter matures? Will I have harmed another person (or persons), irrevocably, as a result of my self-oriented (if not selfish) decision? Overall is fully aware of the moral risks — indeed the majority of her book is about them — but she gives them quite short shrift at the very end. She says:


Having children is morally risky. And the ideas I have explored in this chapter must not by any means be interpreted as a claim that parenthood is the only or even the primary path to a flourishing life. But it is one such path…if, after taking into account all the issues in this book, you are still considering whether to have a child, I continue to say, “Don’t miss it”. 
(Overall 2012, 220)

 

I guess the argument here is that if you are the kind of person who would read her book and consider the ethical risks of parenthood, and if despite this you still seriously consider having a child, you should give it a go because of the potential to create a unique and mutually fulfilling relationship. That may well be true. It may be the case that someone like me is in a better position to minimise the risks of parenthood than someone else. But it still seems like an awful risk to take. The potential harms of parenthood — to oneself, one’s intimate partners (if any), and one’s child — seem to outweigh the potential benefits. If there are other pathways to a flourishing life, then why not try those instead? How can anyone justify the risk? That’s another question I will probably continue to struggle to answer.


3. Conclusion

In sum, the parent-child relationship is a unique one. It has a number of unique structural properties — involuntariness from the child’s perspective; a high degree of dependency at the outset — and can be a source of great value. If my experience is anything to go by, it is possible to be transformed, arguably for the better (though it is still early days), by entering into this relationship. But choosing to create that relationship remains risky and difficult to justify. And I say this as someone who has taken that risk.

Wednesday, December 23, 2020

87 - AI and the Value Alignment Problem

Iason Gabriel

How do we make sure that an AI does the right thing? How could we do this when we ourselves don't even agree on what the right thing might be? In this episode, I talk to Iason Gabriel about these questions. Iason is a political theorist and ethicist currently working as a Research Scientist at DeepMind. His research focuses on the moral questions raised by artificial intelligence. His recent work addresses the challenge of value alignment, responsible innovation, and human rights. He has also been a prominent contributor to the debate about the ethics of effective altruism.

You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

 

Show Notes:

Topics discussed include:

  • What is the value alignment problem?
  • Why is it so important that we get value alignment right?
  • Different ways of conceiving the problem
  • How different AI architectures affect the problem
  • Why there can be no purely technical solution to the value alignment problem
  • Six potential solutions to the value alignment problem
  • Why we need to deal with value pluralism and uncertainty
  • How political theory can help to resolve the problem

 

Relevant Links


Tuesday, December 15, 2020

86 - Are Video Games Immoral?

Have you ever played Hitman? Grand Theft Auto? Call of Duty? Did you ever question the moral propriety of what you did in those games? In this episode I talk to Sebastian Ostritsch about the ethics of video games. Sebastian is an Assistant Prof. (well, technically, he is a Wissenschaftlicher Mitarbeiter but it's like an Assistant Prof) of Philosophy based at Stuttgart University in Germany. He has the rare distinction of being an expert in both Hegel and the ethics of computer games. He is the author of Hegel: Der Welt-Philosoph (published this year in German) and is currently running a project, funded by the German research body DFG, on the ethics of computer games.

You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).




Show Notes

Topics discussed include:

  • The nature of video games
  • The problem of seemingly immoral video game content
  • The amorality thesis: the view that playing video games is morally neutral
  • Defences of the amorality thesis: it's not real and it's just a game.
  • Problems with the 'it's not real' and 'it's just a game' arguments.
  • The Gamer's Dilemma: Why do people seem to accept virtual murder but not, say, virtual paedophilia?
  • Resolving the gamer's dilemma
  • The endorsement view of video game morality: some video games might be immoral if they endorse an immoral worldview
  • How these ideas apply to other forms of fictional media, e.g. books and movies.


Relevant Links


Wednesday, December 9, 2020

Should I Become an Academic? Academia and the Ethics of Career Choice




[Note: This is a draft chapter from a book project I was trying to get off the ground called The Ethics of Academia. It looks unlikely that this book will ever see the light of day, and if it does it’s even more unlikely that this draft chapter will be part of it. So, I thought there would be no harm in sharing it here. The writing style in this draft chapter is intended to be somewhat ‘tongue-in-cheek’.]

If you are reading this the odds are pretty good you are an academic or, at least, thinking about becoming one. But maybe you are having second thoughts? Maybe this career isn’t all it’s cracked up to be? Maybe you are not sure that you want to spend the rest of your life churning out research papers, teaching students, or, God forbid, administering other researchers and teachers?

I commend you. The first ethical question any academic should ask of themselves is: should I exist? I don’t mean this in the profound existential sense. Albert Camus (1942) once said that the question of suicide was the first and most important of all philosophical questions. He may well be right about that, but that’s not the question I think all academics should ask of themselves. I think they should ask the slightly more mundane question: is being an academic an ethical career choice? Not everyone gets to choose their careers but I’m guessing that if you are considering a career in academia you have the luxury of some choice. There are, presumably, other things you could do with your time. Should you do them instead?

Many people fail to ask this question. Outside of some extreme exceptions — assassin, torturer, arms dealer — most of us assume that our choice of career is ethically neutral. We try to do what we want to do and what we feel best suited to do. We may not always succeed, but that’s usually the goal. Careers guidance counsellors often reinforce this attitude toward career choice. They advise us to focus on our aptitudes and talents, not on the relative moral standing of careers. When you think about it, this is a very odd thing to do. Whatever we choose to do in our careers, we are likely to spend a lot of time doing it. It will be in and around 80,000 hours according to one popular estimate. It seems appropriate, then, to subject our choice of career to some serious ethical scrutiny.
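That 80,000-hour figure is easy enough to sanity check: it presumably assumes something like a 40-year working life, 50 working weeks a year and a 40-hour working week, which gives

\[
40 \text{ years} \times 50 \ \frac{\text{weeks}}{\text{year}} \times 40 \ \frac{\text{hours}}{\text{week}} = 80{,}000 \text{ hours.}
\]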

In the remainder of this article I will do this for academia. My analysis will proceed in three parts. First, I’ll outline a framework for thinking about the ethics of career choice. This framework will suggest that there are two main ethical criteria that we can use to assess different careers: (i) do they produce good/bad outcomes in the world? and (ii) do they allow us to self-actualise or attain self-fulfillment? (This may not sound like an ethical criterion right now but bear with me.) Second, I will consider whether choosing to be an academic produces good or bad outcomes in the world. And third, I will consider whether we can self-actualise or attain self-fulfillment through an academic career. Initially, I will be quite critical of academia, suggesting that it isn’t a particularly ethical career choice; subsequently, I will soften the argument and suggest that it probably isn’t any worse than many other career choices. To the extent that “do no evil” is an ethical principle that’s worth adopting in your own life, you have some reason to hope that you won’t do evil by becoming an academic. Perhaps that’s the most any of us can hope for.


1. Ethical Criteria for Choosing Careers

There are two criteria we can use to assess the ethical value of our careers. They are: (i) the consequentialist criterion and (ii) the self-actualisation criterion. These can be explained in the following terms:


(i) The Consequentialist Criterion: If I pick this particular career, will it allow me to cause or produce or bring about morally positive consequences in the world?

 

(ii) The Self-Actualisation Criterion: If I pick this particular career, will it enable me to self-actualise, i.e. allow me to attain a high level of satisfaction and fulfillment, and enable me to realise and take advantage of my talents?

 

The consequentialist criterion views your career as a means to an end. Imagine you are choosing whether or not to be a doctor. If you apply the consequentialist criterion, then you will want to ask yourself how much good you can do by being a doctor. How many lives can you save or prolong? This consequentialist approach to career choice is favoured by several philosophers (Care 1984; Unger 1996). It is also central to the effective altruist community’s approach to career choice. The effective altruist community is a community of individuals who dedicate themselves to doing the most good they can possibly do with their lives, and one of the most important things we can do with our lives is choose a career (MacAskill 2015, Singer 2015).

Although I say that the consequentialist criterion views the career as a means to an end this doesn’t mean quality of life is irrelevant to its application. Your happiness with your career is a relevant consequence of choosing that career, one that ought to be factored into any consequentialist calculation of the relative benefits of a career. That said, your individual happiness is likely to be swamped by other ethically relevant consequences. Similarly, your capacity to self-actualise through your choice of career isn’t completely irrelevant to the consequentialist criterion. After all, your talent for a particular job is likely to have some bearing on your capacity to do good with that job. It’s just not the major relevant consideration (though we will discuss a complication to this in more detail below).

Although the consequentialist criterion finds most favour among proponents of a utilitarian or consequentialist moral theory, it doesn’t only appeal to them. Consequences are relevant to most ethical theories, even if they are not decisive or constitutive of what it means to make an ethical choice. So the consequentialist criterion has broad appeal.

The self-actualisation criterion is rather different. It views a career not so much as a means to producing better ends but, rather, as a vehicle for self-fulfillment. What matters is whether you are happy and engaged by the work that you do, whether that work is suited to someone with your skills and aptitudes, and whether it develops those skills and aptitudes in an appropriate way. As I said in the introduction, this is the criterion that most of us use when choosing careers. On the face of it, it doesn’t seem like an ethical criterion. Indeed, it seems like the exact opposite: it is a selfish and egoistic criterion. But that’s not entirely true. For one thing, as I just noted, self-actualisation is relevant when it comes to considering the ethical consequences of a career. For another, there are some ethical theories according to which we may have a duty to fulfill our potential. Immanuel Kant, for instance, developed a complex moral theory that claims that humans have ethical duties purely in virtue of the fact that we are agents (i.e. beings with the power to choose and intend our actions). Among the duties he thought we had was a duty to make the best of our talents. As he put it, we each have a duty:


“not to leave idle… rusting away the natural predispositions and capacities that [our] reason can someday use” 
(Metaphysics of Morals, 6:444-5)

 

That said, Kant no doubt would have accepted that there are other moral constraints on this duty. We shouldn’t make the best of our talents if doing so would, for instance, violate another of our duties toward other agents, such as the duty not to treat another agent merely as a means to an end. So, sad to say, even if being an assassin is the best way to make use of your talents, you still probably shouldn’t do it. Finally, it is worth noting that the self-actualisation criterion might overlap with virtue ethical approaches to moral choice. Virtue ethics, roughly, is the view that we should act in a way that develops morally virtuous character traits (generosity; kindness; courage etc). Doing so will lead to fulfilment and flourishing. It’s possible, though I don’t think it is guaranteed, that following the self-actualisation approach could lead to the development of the virtues.

Sometimes people pick and choose one of these two criteria over the other. In fact, a significant amount of the philosophical discussion of ethical career choice is focused on figuring out which of the two should guide our decisions. But, strictly speaking, they are not in tension with each other. They are, rather, two different dimensions along which we can evaluate a career choice. To illustrate this point, we can arrange them into the following two-by-two matrix:
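Roughly, the matrix crosses the two criteria, so that each quadrant is simply one of the four possible combinations of good/bad consequences and low/high self-actualisation. A minimal sketch:

\[
\begin{array}{c|c|c}
 & \text{Low self-actualisation} & \text{High self-actualisation} \\
\hline
\text{Good consequences} & \text{worthwhile but unfulfilling} & \text{the ideal career} \\
\hline
\text{Bad consequences} & \text{the worst of both worlds} & \text{fulfilling but harmful} \\
\end{array}
\]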


Ideally, you would like to pick a career that is high and to the right: one that produces good outcomes in the world and allows you to self-actualise. Sometimes, however, you will need to make a tradeoff. Some philosophers, such as Norman Care (1984), have argued that, given the current injustices of the world, those of us with the luxury to choose a career ought to prioritise good consequences over self-actualisation. At first glance, that sounds plausible, but it raises the obvious question: can we produce good outcomes by becoming academics?


2. The Consequences of Becoming an Academic


There is a consequentialist case to be made for becoming an academic. Think about what an academic does. According to most job descriptions and characterisations, the typical academic will be expected to do three kinds of things: (i) research; (ii) teaching; and (iii) administration. It’s possible to do good through each of these activities. Consider:

 
(1) It is possible for an academic to produce good outcomes through their research: they can produce knowledge or insights that are either intrinsically valuable (i.e. valuable in and of themselves) or instrumentally valuable (i.e. capable of being used to good effect). There are some uncontroversial examples of this. Albert Einstein produced groundbreaking insights and theories in physics. These insights are both intrinsically fascinating for what they say about the nature of reality and instrumentally valuable in helping us to develop satellite technology and GPS. He was, for most of his adult life, an academic (yes, I know, he wrote his first famous papers while working as a patent clerk but he was always actively seeking academic work and did end up working as an academic for most of his life). Or consider Jonas Salk, developer of the polio vaccine, whose research prevented the suffering of millions of people. Or Rosalind Franklin, whose groundbreaking X-ray crystallography was important in unlocking the molecular structure of DNA. Examples could be multiplied, but you get the point. Research can do a lot of good for the world and, as an academic, you are actively encouraged to do it.

 

(2) It is possible for an academic to do good through their teaching, by providing their students with essential skills and knowledge that help them to live better lives: Education is something that uplifts and improves the lives of students (at least if done right). It enables them to question and analyse the world around them and explore new opportunities. For example, Tara Westover, in her memoir of growing up in a fundamentalist Christian home, explains how crucial education was in helping her to escape the limitations of that world (Westover 2018). Education helps students to develop and hone skills that are essential to securing paid employment. It may even make the world, more generally, a better place. As an illustration of this consider Carlos Fraenkel’s memoir of teaching philosophy to students in conflict zones, Teaching Plato in Palestine. Fraenkel is no starry-eyed optimist about the power of education but through his recollections he shows how it is possible to use philosophical education to facilitate discussion between competing worldviews and perhaps avoid violent conflict. What could be more valuable to the world than doing that?

 

(3) It is possible for an academic to do good through effective administration: Academic administration is usually criticised and rarely celebrated. Nevertheless, administration of higher educational systems (and, indeed, any complex human organisation) is essential if they are to operate effectively. Without proper administration it would be impossible for academics to do the good work they can do through research and education. So by helping out with administration, academics can help themselves produce good outcomes in the world through their teaching and research.

 

To be clear, none of these arguments is watertight. No one would claim that all research or all teaching produces good outcomes. Lots of academic research has been used for ill. For example, some people have used psychological research to create manipulative advertising and to design more effective forms of violent interrogation. Destructive weapons systems have been created with the help of academic research. Some teachers instil false beliefs in their students and may even crush their hopes and dreams. Plenty of academics fail to do good in their jobs. But failures of this sort are a problem in all careers. The crucial point is that it is possible to do good with an academic career.

That said, possibility alone is not enough. Some exceptional individuals may be able to do a lot of good with an academic career but what about the rest of us? Most of us aren’t exceptional. What we want to know is whether there is some reasonable probability of doing good with an academic career. When we try to assess this probability, things start to look a lot worse for the would-be academic. There are six issues, in particular, with which to contend.

First, academia is a highly competitive career. After undergoing a boom in the mid-20th century, when there was a significant undersupply of academic labour relative to the number of available positions, there is now a significant oversupply of academic labour. There are far more PhDs granted than there are academic jobs for these PhDs to fill. This trend seems likely to continue. As Bryan Alexander notes, most developed nations are undergoing a demographic shift (Alexander 2020). Traditionally, the demographic structure of society resembled a pyramid: there were lots of young people and relatively few old people. Thanks to improvements in healthcare, and declining fertility rates, this demographic structure is now shifting to a more rectangular shape. This means there are roughly equal numbers of young people and old people. In some extreme cases, such as Japan, the demographic structure is starting to resemble an inverted pyramid in which the old outnumber the young. This presents a major challenge for the university system, which has, traditionally, been designed to educate the youth population. Unless there is a significant shift in institutional design, it seems plausible to suppose that there will be a retrenchment in the higher education system in the future. In other words, to put it more bluntly, there are likely to be dwindling job opportunities for academics coupled with increasing competition for those job opportunities. This presents a major problem for anyone who wishes to do good through an academic career. Unless you are exceptionally talented, privileged, or fortunate, you are increasingly unlikely even to get the chance to do good through an academic career.

Second, even if you overcome the odds and get an academic job, you are unlikely to be a morally successful academic. In other words, you are unlikely to do much good with your job. Consider the example of doing good through research. There are only a handful of people who manage to make significant breakthroughs with their research. The sad reality is that most academics do trivial and unimportant work. This is partly because they lack the talent and also, partly, because they don’t get rewarded in their careers for doing high impact work. The philosopher Michael Huemer has made this argument in rather stark terms in relation to philosophical research. He claims that most philosophical research and writing is done to improve the reputation of the researcher in the eyes of their academic peers; not to solve important worldly problems or to make a moral difference to society. This, he claims, equates to a massive squandering of human capital:


“Quite a bit of intellectual talent and energy is being channeled into producing thousands upon thousands of papers and books that hardly anyone will ever read or want to read. These articles and books are written almost entirely for other academics working in the same sub-sub-sub-specialization that the author works in. The main reason they are written is so that the author can get tenure or otherwise get credit for publishing. The main reason they are read even by the tiny number of people who read them is so that the readers can cite those articles in their own articles.”

 

And it is not just philosophers who suffer from this ignominious fate. Consider the replication crisis in biomedical science and psychology. Over the decades, thousands of experiments have been performed and research reports have been written about positive psychological and pharmacological effects. These effects have since turned out to be false or, at best, unproven (Fidler and Wilcox 2018). That equates to thousands of psychologists and biomedical researchers whose research has not made the positive difference that they once thought it did. To be clear, this is not to say that no academic research is valuable or that all academic research careers, like their political equivalents, end in failure. It’s just to say that you are unlikely to be among the privileged elite of researchers whose research does make a positive difference.

Third, something similar is true when it comes to teaching. Even if you don’t hope for success as a researcher, you might hope for success as a teacher. Most academics get to teach a unique cohort of students. Through their teaching, they might hope to make a positive difference to, at least some of, the lives of that unique cohort of students. But this hope is probably forlorn. For starters, many academics are not very good at teaching. They aren’t properly trained for it and they see it as a distraction from their more important research work (even if, as I suggested above, this research is itself likely to be trivial). Even if they are engaged in teaching, there is little evidence to suggest that their teaching makes a positive difference to their students’ lives. It is hard to measure any difference teaching makes in terms of skills acquisition or knowledge transfer. Most students forget what they have learned within a relatively short period of time, and there is a strong case to be made that most of the value of higher education lies in signalling and credentialing, not teaching and learning (Caplan 2018). This doesn’t mean that students don’t like their teachers. Sometimes they do and sometimes they claim that their teachers have made a positive difference to their lives. The problem is that these self-reports are usually based on how likable they perceive their teachers to be. There is evidence suggesting that likability does not correlate with positive educational impact such as improved intellectual capacity or skill (Brennan and Magness 2019). Furthermore, the potential for teaching to do good for students is to a large degree dependent on other moral features of the student-teacher relationship, in particular the fairness of assessment and grading practices. As I have argued elsewhere, there are good reasons to think that current assessment and grading practices are morally suspect and unfairly prejudicial. If that argument is correct, then academics may actually do more harm than good through teaching.

Fourth, even if you have good intentions, and have the capacity to do good through your work, you will often find yourself hampered in doing so by institutional constraints and incentives. This is obvious enough in other careers. Perlman (2000), for example, argues that it is true for lawyers. A lot of people write about legal ethics and the ethical choices facing the typical lawyer. But the reality is that most lawyers working in large law firms (or other legal institutions) have little choice over what kinds of cases they do and what kinds of clients they take on. If you choose to be a lawyer, odds are that you will face stark ethical choices several times in your career: represent an ethically dubious client or quit your job. This leads Perlman to conclude that the most important ethical choice made by a lawyer is whether to become one in the first place and, if they do, what kind of law firm or institution they choose to join. Once they are in situ, their ethical choices will become much more constrained and their opportunities for doing good work will be limited. Something similar, though perhaps less extreme, is true for academics. They might want to do ethically valuable research or inspire their students to reach new heights, but once they find themselves in an academic institution they might quickly be disabused of these aspirations. They might learn that their preferred field of research is not rewarded by their institution or their academic peers. They might find themselves teaching hundreds of students and evaluating their performance in line with ethically dubious institutional norms. They might find themselves being evaluated using metrics that don’t encourage ethically valuable work and, in some cases, incentivise the opposite (Muller 2019). This means that, once they are in situ, they won’t have the choice, time or energy to do the good things they would like to do.

Fifth, even if you are good at what you do and you have the opportunity to do good, you are likely to be replaceable by someone who can do even better. In any highly competitive career, there are likely to be hundreds of well-qualified candidates for your job. Are you so sure that you are better than them? What if your occupying a job is denying someone more competent, and more likely to do good, the opportunity to do so? Saul Smilansky (2004) refers to this as the “paradox of beneficial retirement”. According to Smilansky, if you are a professional academic, then you are likely to do more good by retiring from your current job than continuing to do it. Why? Because even if you are competent at what you do, you are unlikely to be exceptional. Consequently, you would make the world a better place by retiring and clearing the path for someone who is exceptional. Of course, there are problems with this argument. As James Lenman (2007) points out, for your retirement to be genuinely beneficial, you have to assume that (i) there is a plentiful supply of better candidates for your job, (ii) that one of these candidates will actually get your job if you retire and (iii) that they wouldn’t have got an equivalent job if you didn’t retire. Those conditions may not hold. Indeed, in a highly competitive career there are probably many less qualified and less competent candidates for your job as well. It’s possible that they might end up taking your position if you retired. So you might make the world a worse place by retiring. Still, Smilansky’s basic insight is an important one. It takes a peculiar kind of arrogance and self-belief to assume that you are the best candidate for your own job and, perhaps more importantly, that you will make a positive moral difference with your career choice. That’s an insight that all would-be academics should take to heart.

Sixth, and finally, there are moral opportunity costs associated with becoming an academic. Even if you can do good through an academic career, it’s possible that you might have done even more good with another career. William MacAskill, one of the co-founders of the effective altruist movement, famously popularised this analysis of career choice. In his article “Replaceability, Career Choice and Making a Difference” (MacAskill 2014), he argues that if we want to do good with our lives, we are better off choosing a job that pays well and using the money for philanthropic donations than trying to do good through our actual careers. In other words, instead of becoming a doctor and trying to save lives, you are better off becoming an investment banker, earning lots of money, and then giving that money to other people who can save lives. His reasoning is straightforward. Money is a fungible resource: careers are not. You can do more types of good with money than you can with your job. Similarly, there is a good deal of moral uncertainty associated with career choice. As the arguments discussed above suggest, even if you want to do good by becoming an academic, you might end up doing bad. At least with money, you can compensate for any badness through the right kind of donation. If you have squandered your life doing research that makes the world a worse place, there is little chance to correct your mistake, especially if your career wasn’t very lucrative. Academics are usually intellectually gifted people and it’s quite likely that they could use their talents to pursue other, more lucrative, career choices. Consequently, it’s likely that budding academics could do more good for the world if they gave up their dreams of becoming academics and considered other career possibilities.

Taken together, these six arguments seem to cast doubt on the wisdom of becoming an academic. Is there anything to be said against them?


3. Academia and Self-Actualisation

The preceding analysis focused entirely on the consequentialist criterion for evaluating career choice. If we shift focus to the self-actualisation criterion, perhaps we can paint a different picture – one that is a little more optimistic about the ethics of becoming an academic.

There are two parts to the argument I wish to develop. The first focuses on the limits of the consequentialist criterion and why we cannot completely ignore self-actualisation in the analysis of career choice; the second on the advantages of academia from the perspective of self-actualisation.


(A1) - The Limits of the Consequentialist Criterion

There are a few problems with relying solely on the consequentialist criterion to guide your choice of career. The most obvious, and most important, is that very few career choices hold up under its scrutiny. This is because it is very hard to conduct an all things considered evaluation of the consequences of an individual’s career. We cannot easily add up all the incidents and outcomes of an individual career, categorise them according to whether they are bad or good, and determine clearly whether the good outweighs the bad (or vice versa).

There are some outlier cases, of course. We can say with some confidence that Hitler’s life was, on net, bad. He did more ill for the world than good. But beyond these outlier cases our judgments are dubious and prone to bias. If you were to ask me, right now, whether I had done more good than bad through my career I would be hard pressed to give you an answer. I would like to think I have done more good but I have no idea. I don’t collect the relevant data and I don’t even know how to go about collecting it. I can pick particular incidents and anecdotes that support the notion that I am a good person, but I’m probably being conveniently selective in my approach to the data about my own life. Perhaps some of the things I have said in class to students have shattered their hopes and dreams? Perhaps I have inspired them to do wicked things? Perhaps my research has been or will be used by others to support ideologies and agendas that are evil? I don’t know. Unless you are meticulous in collecting data about the consequences of your actions, and unless you avoid bias and error in doing so, you won’t be able to tell whether your career choice was, on balance, good or bad. What’s more, since we don’t typically collect this data about people who currently follow the career you are thinking about following, you don’t have the evidence you need to apply the consequentialist criterion to your own career choice.

The problem, however, goes deeper than simply a lack of evidence. Although we might have some hope of reaching consensus on the badness of certain careers, I suspect we will find it much harder to reach consensus on the goodness of most careers. This is because there are several different conceptions of the good life and a lot of disagreement about what is truly “good” for the world.

Consider the case of Norman Borlaug. Borlaug was one of the scientists responsible for the so-called “Green Revolution” in agriculture. Working initially in Mexico in the 1940s and 50s, Borlaug successfully bred new strains of high-yield wheat that, according to his supporters, averted mass famines in the middle part of the 20th century. He was awarded the Nobel Peace Prize in 1970, and when he died in 2009, he was lauded as a humanitarian hero, perhaps “history’s greatest human being”, “who saved 1 billion people from death by starvation.”

He sounds like the model example of someone who satisfied the consequentialist criterion with his choice of career (though, to be clear, there is no evidence to suggest he thought about career choice in this way). But not everyone sees it the same way. To his critics, Borlaug’s “Green Revolution” has had disastrous social and environmental consequences. It has increased the use of artificial irrigation and chemical fertilisers, increased the power of large agricultural conglomerates, and disrupted traditional small scale communities and farms. Some critics see Borlaug as a moral monster. Alexander Cockburn, for example, has suggested that: “Aside from Kissinger, probably the biggest killer of all to have got the [Nobel] peace prize was Norman Borlaug, whose 'green revolution' wheat strains led to the death of peasants by the million.”

Cockburn’s criticisms are unfair, in my opinion. But that’s not really the point. Even judicious and fair-minded assessments of Borlaug — such as that provided in Charles Mann’s book The Wizard and the Prophet — acknowledge that the Green Revolution has had some sizeable negative consequences. So even someone lauded for their positive contribution to humanity can have a legacy that is contested and open to doubt. If this is true for the putative “greatest human being” ever to have lived, then what hope do the rest of us have? Heck, I could write the critical reappraisal of my own life right now.

As I say, this is probably the most important criticism of the consequentialist criterion. There are, however, two others that are worth mentioning.

The first is that even if we did apply the consequentialist criterion we would have to apply it to other possible careers too. Academia might offer little hope for doing good but do other career choices fare any better? Consider supposedly ethical careers like being a doctor or pursuing charitable work. Can you do good through them? Sure, it’s possible. Are you likely to do good through them? Not necessarily. They are susceptible to many of the criticisms I offered of academia. They are highly competitive careers so you may not get the chance to do any good through them; you may end up being a sub-standard or relatively incompetent occupant of those careers; you may do work that is counterproductive or trivial; you will be replaceable, and so on. Similar problems arise for high-earning careers, such as investment banking, that are supported by those who think we should do good through charitable donations. You may not succeed in becoming an investment banker since the field is so competitive, you may not be a high-earning investment banker, your moral views may change as a function of occupying that role; and so on.

To be clear, some of these fears are addressed by people who write about career choice and the ethics of earning to give (MacAskill 2014). I raise them not in order to endorse a form of “futility thinking” about careers, i.e. assuming that it is impossible to do good through one’s career (cf Unger 1996). I raise them in order to highlight the fact that if we consistently apply the consequentialist criterion to the ethics of career choice, it is not obvious that academia is such a terrible career choice or that it is much worse than other, supposedly positive, careers. Consistent consequentialists are rarely able to reach such definitive conclusions.

This then relates to another criticism: it is really hard to be a consistent consequentialist. This is a long-standing criticism of consequentialist moral theories. They are often said to be “over-demanding” (Mulgan 2001; MacFarquhar 2015). They demand that we do more good with our lives and that we constantly reevaluate our choices to ensure that they are, in fact, doing the most good. Sometimes it is good to demand a lot of ourselves, and as long as we don’t violate the Kantian maxim that “ought implies can” (i.e. that it should be possible to adhere to a moral norm), it’s not obvious that demanding a lot is a mark against an ethical criterion. Nevertheless, when we apply the consequentialist criterion not just to individual choices but to our entire lives — i.e. who we are and who we choose to be — then it can become counterintuitive and counterproductive.

The philosopher Bernard Williams was one of the first people to point this out (Smart and Williams 1973). He was critiquing utilitarian moral theories at the time, but he did so by raising a specific dilemma about career choice. He asked us to imagine a chemist named George who is out of work and desperate to earn some income for his family. George is also deeply morally opposed to biochemical warfare. George is offered a well-paying job in a chemical weapons plant. He is told that the job is competitive and that, if he refuses it, another, equally qualified and more enthusiastic, candidate will be found. Should he take the job? If George is a consistent consequentialist, then he probably should take it. He can provide for his family by doing so, and it is probably better, all things considered, if someone less enthusiastic occupies the job. It might reduce the harm done to the world. But, of course, this means that George will have to suppress or deny his profound moral opposition to chemical warfare. He will have to treat himself as a mere instrument to certain ends and not as an agent with coherent life plans and values.

Williams thinks this is a flawed approach to career choice, in particular, and moral decision-making, in general. Consequentialism seems to demand that we adopt an impartial point of view and treat our own lives as things that are alien from us and interchangeable with any other life. This impartialist logic is clearly at work in the criticisms of academia that I made in the previous section. Williams argues that we cannot consistently apply this approach because we cannot completely alienate ourselves from our own lives. We have to live with ourselves. In a sense, then, self-actualisation has to be a core part of the picture when it comes to career choice. If we didn’t think about ourselves, and whether a career is a good fit for us, we would undermine our sense of moral integrity. As Williams put it, for the chemist George to consistently apply the consequentialist criterion:


“is to alienate him in a real sense from his actions and the source of his action in his own convictions. It is to make him into a channel between the input of everyone’s projects, including his own, and an output of optimific decision; but this is to neglect the extent to which his projects and his decisions have to be seen as the actions and decisions which flow from the projects and attitudes with which he is most closely identified. It is thus, in the most literal sense, an attack on his integrity.” 
(Smart and Williams 1973, 116-117)

 

Proponents of the consequentialist criterion for career choice have acknowledged Williams’s concern and sought to avoid it. MacAskill (2014), for example, argues that choosing a high-earning career over a charitable, do-gooding career does not have to undermine one’s integrity. There are several reasons for this. One is simply that many high-earning careers are not necessarily attacks on one’s moral integrity. Many times people are ambivalent or unsure about the compatibility between the career and their ethical commitments. Another, perhaps more powerful, reason is that picking a high-earning career in order to pursue an ethical goal (doing good through charitable giving) can be construed as acting with the highest moral integrity. You want to do good for the world and you do this through your career choice. MacAskill uses the example of Friedrich Engels to make this point. Engels, along with Karl Marx, was an opponent of capitalism. He wrote about it, and organised against it. But he also worked for a capitalist company, run by his uncle, that he hated. The money he earned from this job funded his socialist and communist activism:


“In doing this, rather than displaying a gross violation of integrity, it seems that Engels acted with the highest integrity. He found his moral projects sufficiently compelling that he was willing to work out how best to further them and to act on that basis.” 
(MacAskill 2014, 280)

 

MacAskill’s point, then, is that you can treat your career as a mere instrument to an end without violating your integrity in the process, so long as your career choice is consistent with your ultimate moral goals.

This might be plausible but, in the end, it still leads us back to the importance of the self-actualisation criterion. To successfully apply the consequentialist criterion to your choice of career, you have to want to be a consistent consequentialist. In other words, that has to be part of what it means to be a fully actualised version of yourself. This doesn’t give you an excuse to ignore the consequences of your actions whenever they don’t seem to fit with your abilities and aspirations; but it does mean that you need to give due consideration to those abilities and aspirations when deciding what to do.


(A2) - We Might Be Able to Self-Actualise Through Academia

To summarise the preceding argument: it’s very difficult to apply the consequentialist criterion to career choice because it is hard to reach an all-things-considered assessment of the relative value of careers; given this uncertainty, it’s not obvious that academia is such a morally terrible career choice; and we have to allow self-actualisation to play some role in our ethics of career choice. This then raises the obvious question: can you self-actualise through academia?

Yes, maybe. This is something that each individual will have to determine for themselves, based on their attributes and abilities. I often ask students thinking about pursuing an academic career whether they enjoy certain processes and activities. Do they like ideas and arguments? Do they enjoy the process of research? Do they like explaining ideas to others? If so, then they might find an academic career quite rewarding and self-actualising. Furthermore, academia is quite a diverse career and can be rewarding for a number of different sensibilities. If you aren’t any good at research, you might fare better with teaching. If you don’t like teaching, you might find your calling in academic administration. If you don’t like one discipline, you might be able to shift to another. There are many ways to build an academic career and one of them might be the right fit for you.

The challenge, of course, is that it can be difficult to know in advance whether academia will be self-actualising, because you have to try it out for yourself to see if it fits. The philosopher L.A. Paul (2014) has described this problem quite well. She argues that a number of choices in our lives are transformative: by making them, we don’t just alter the short-term experiences we happen to have; we also change the kind of person we will become. In these cases, you need first-hand experiential knowledge of what it would be like to make the choice in order to rationally assess your options. Career choices are often like this. You can learn a bit from reading about other people’s careers and asking them what it is like, but ultimately you have to run the experiment for yourself. But academia fares no worse than other careers in this respect. We always face this epistemic “gap” when deciding who we wish to become.

So it is possible that academia can be self-actualising, and if you are one of the people for whom this is true then you also have a shot at doing some good as an academic, i.e. by doing research that changes the world for the better or by inspiring others to do good and to actualise themselves. This might seem to contradict what I said previously, but it doesn’t. It is still pretty unlikely that you will do good by becoming an academic. But it helps if you have both the aptitude and the self-belief that you can do good.

In this respect, Lisa Bortolotti’s (2018) ‘agency-based’ theory of optimism can be quite inspiring. Bortolotti argues that even though most forms of optimism are irrational there is one form of optimism that might buck the trend. It is not irrational, she argues, to be optimistic about the power of your own actions to make a positive difference to the world. Bortolotti supports this thesis by highlighting a series of famous studies done on people suffering from serious illnesses such as breast cancer. These studies have found that patients who think they can positively affect their health outcomes through their choices, and who formulate reasonable, evidence-based plans for doing so, tend to do better than their more pessimistic peers. This idea is complemented by research from other fields. Philip Tetlock, for example, in his work on “superforecasters” — people who outperform others in their ability to predict future events — finds that people who believe that forecasting is a skill that they can hone and improve are more likely to be better at it (Tetlock and Gardner 2016). If this is right, then it may be worthwhile being optimistic about the ethics of academia as a career choice. If you think you can do good by being an academic, and if you formulate a reasonable, evidence-based plan for doing so, then you might just pull it off. At any rate, you won’t do much worse than you would in any other career.


References

  • Alexander, Bryan (2020). Academia Next: The Futures of Higher Education. Baltimore, MD: Johns Hopkins University Press.
  • Bortolotti, Lisa (2018). Optimism, Agency, and Success. Ethical Theory and Moral Practice 21: 521-535. https://doi.org/10.1007/s10677-018-9894-6
  • Brennan, Jason and Magness, Philip (2019). Cracks in the Ivory Tower: The Moral Mess of Higher Education. Oxford University Press.
  • Camus, Albert (1942). The Myth of Sisyphus. London: Penguin (2005 edition)
  • Caplan, Bryan (2018). The Case Against Education. Princeton, NJ: Princeton University Press.
  • Care, Norman (1984). Career Choice. Ethics 94: 283-302
  • Fidler, Fiona and Wilcox, John (2018). "Reproducibility of Scientific Results", The Stanford Encyclopedia of Philosophy (Winter 2018 Edition), Edward N. Zalta (ed.)
  • Kant, I. (1797). The Metaphysics of Morals. Cambridge University Press edition, edited by Gregor, Mary, published 1996.
  • Lang, Gerald (2014). Jobs, Institutions and Beneficial Retirement. Ratio 27: 205-221.
  • Lenman, James (2007). Why I have no plans to retire. Ratio 20: 241-246.
  • MacAskill, William (2014). Replaceability, Career Choice, and Making a Difference. Ethical Theory and Moral Practice 17: 269-283.
  • MacAskill, William (2015). Doing Good Better. Guardian Faber.
  • MacFarquhar, Larissa (2015). Strangers Drowning: Impossible Idealism, Drastic Choices, and the Urge to Help. New York: Penguin.
  • Mann, Charles (2018). The Wizard and the Prophet. New York: Knopf.
  • Mulgan, Tim (2001). The Demands of Consequentialism. Oxford: OUP.
  • Muller, Jerry Z. (2018). The Tyranny of Metrics. Princeton, NJ: Princeton University Press.
  • Paul, L.A. (2014). Transformative Experience. Oxford: OUP.
  • Perlman, AM (2000). A Career Choice Critique of Legal Ethics Theory. Seton Hall Law Review 31: 829-
  • Singer, Peter (2015). The Most Good You Can Do. New Haven, CT: Yale University Press.
  • Smart, JJC and Williams, Bernard (1973). Utilitarianism: For and Against. Cambridge: Cambridge University Press.
  • Smilansky, Saul (2004). The Paradox of Beneficial Retirement. Ratio 18: 332-337
  • Tetlock, Philip, and Gardner, Dan (2016). Superforecasting: The Art and Science of Prediction. London: Random House.
  • Todd, Benjamin (2016). 80,000 Hours: Find a fulfilling career that does good. Createspace Publishing.
  • Unger, Peter (1996). Living High and Letting Die. Oxford: OUP.
  • Westover, Tara (2018). Educated: A Memoir. Random House