Thursday, May 27, 2021

The Trouble with Teaching: Is Teaching a Meaningful Job?





Frederick William Sanderson was the headmaster of the Oundle School in England from 1892 to 1922. In a hagiographic biography, HG Wells celebrated him as ‘the greatest man’ he had ever known. If Wells’s reflections, and those of former pupils and colleagues, are anything to go by, Sanderson must have been an impressive figure. Consider, for example, the following recollection from a former student. The student had been discovered taking notes in the school library after dark; Sanderson reprimanded him for his breach of discipline but then, calming down, asked him what he was reading:


He looked over the notes I had been taking and they set his mind going. He sat down beside me to read them. They dealt with the development of metallurgical processes, and he began to talk to me of discovery and the values of discovery, the incessant reaching out of men towards knowledge and power, the significance of this desire to know and make and what we in the school were doing in that process. We talked, he talked for nearly an hour in that still nocturnal room. It was one of the greatest, most formative hours in my life... 'Go back to bed, my boy. We must find some time for you in the day for this’

 

It’s hard not to be moved by this. Sanderson seems to have had a positive impact on his students. He cultivated a sense of wonder in them and, at least in this case, transformed their lives. He may be the quintessential example of a teacher whose career embodied the highest aspirations of that profession.

I have been teaching at universities for over a decade. Although I have never bothered to count, I estimate that I have given over 1000 lectures/seminars and interacted with over 1500 students. When I started out, I was enthusiastic about teaching. I enjoyed the challenge of explaining difficult concepts; of facilitating lively discussions and debates; of encouraging the students (I would never call them ‘mine’ as some do) to think for themselves.

Over the past decade, my enthusiasm has waned. I still enjoy aspects of teaching — and I will talk about those aspects in what follows — but overall I find teaching quite frustrating. I don’t think it is a particularly meaningful job, despite what some people claim. In fact, I often find it disheartening. I’m sure some of this is my own fault — maybe I don’t try hard enough or care enough about the students — but I think some of it is inherent to the nature of teaching, and to the problems of teaching in a modern third-level institution.

In the remainder of this article, I will try to explain the reasons for my frustration. I will draw heavily from my own experiences of teaching. I will also examine the extent to which my experiences are (or are not) mirrored in the empirical research. I write this with two hopes in mind. First, I hope that other academics and instructors might find it useful to have someone articulate these views on teaching. Perhaps they have had similar thoughts and would like to know that they are not alone. Second, I hope that someone will convince me that I am wrong. 


1. The Case Against Teaching

I have written a lot about meaningful work in the past. My book, Automation and Utopia, deals with the topic at length. In that book, I conclude that work as a whole is structurally bad and that non-work alternatives are more meaningful. Given this argument, you might suspect that my analysis of teaching is biased from the outset. Since I think work in the modern world is structurally bad, it stands to reason that I would not be a huge fan of teaching. It is just a particular case study in the awfulness of work.

But this is not my view. In Automation and Utopia, I did not conclude that all forms of work are necessarily bad. I conceded that despite the structural conditions that make work worse than it ought to be, some forms of work, including my own, can be quite good. My job as an academic has many benefits. I work in a university that is relatively devoid of managerialism (at least when compared to universities in the UK). I have a lot of autonomy. I can research whatever I like and there is very little interference with how I teach and assess my modules. I am essentially free to develop my skills and hone my craft. Furthermore, I work with people I generally like and I have a chance to enrich the minds of the students I encounter. On paper, everything is good. I have one of those jobs that scores highly on standard conceptions of meaningful work.

In practice it is a different story. I’ll explain why in two steps. First, in the remainder of this section, I will outline four arguments for thinking that teaching is less meaningful than you might think. Second, in the subsequent section, I will consider some objections to this negative assessment. In this first part of the analysis, I will be looking at teaching from a largely (though not entirely) consequentialist perspective. In other words, I will be working with the assumption that one of the things that makes teaching meaningful (on paper) is that it serves a valuable purpose (education) and that teachers can derive meaning from their work to the extent that they contribute to that valuable purpose. I don’t think this is an overly controversial assumption, but I will consider some criticisms of it later on.

Anyway, here are the four arguments against teaching understood in those terms.


A1 - The Purpose and Value of Education is Questionable

You might think it is just obvious that education is valuable. Our world is, after all, one that rewards educated people. Educated workers typically earn more money, have more stable personal lives, and are generally better equipped to manage the vagaries of work in a knowledge economy. It is not that things are easy for them, but it would be a lot harder without an education. Competencies such as literacy and numeracy are practically essential in the modern world, and higher-order cognitive abilities such as the capacity for critical reflective thought, problem-solving and the ability to evaluate different sources of information are highly sought after.

I buy this argument, at least at an abstract level. When it comes to education in general, in particular schooling in basic competencies such as literacy and numeracy, I am sure that education does serve a valuable purpose. Where I struggle is when it comes to the purpose of the particular classes and subjects I teach, and the practical challenge of converting abstract purposes into specific learning outcomes for those classes.

What is it that I should be doing in class every day? Here’s a definition of teaching that I have long admired and, indeed, quoted in my own teaching statements:


…the real aim of education [is]: to waken a student to his or her potential, and to pursue a subject of considerable importance without the restrictions imposed by anything except the inherent demands of the material. 
(Parini 2005, 10)

 

But what does that mean? What is a student’s potential? Does it vary from student to student? Everyone is unique, so this would stand to reason. So is it really possible for me, as a teacher, to waken each individual to their unique potential? Also, what are the inherent demands of the subject? It’s not clear. It turns out that I may like this quote because it is so vague. It speaks to the highfalutin aspirations of teaching as a profession, but means relatively little in practice. The purpose is vague and its value unclear.

There are more practical guides to what the purpose of education might be. Most lecturers are introduced to Bloom’s taxonomy of learning outcomes when they do courses on how to teach. Originally formulated in 1956, this taxonomy has been refined and expanded over the years. Many people claim that these refinements are an improvement on the original. I’m not so sure. I think the original, with its simplicity and hierarchical organisation, is much more memorable than all the subsequent riffs upon it. Anyway, the diagram below illustrates the original taxonomy.



As you can see, the basic idea here is that a well-designed course/module will enable students to ascend through the hierarchy of learning outcomes. The teacher will begin by sharing some key information they want the student to remember and understand. This may be done in the classroom in the form of a lecture, or through reading lists and textbooks. Then they will help the students analyse and apply this information, breaking it down into its key components and seeing the relationships between different concepts. Then they will move on to synthesising and critically evaluating this information. Do the ideas and arguments hold up to scrutiny? Are they true? Do they have value?

This provides a neat structure for teaching and, at least on the face of it, a clear guide as to the purpose of teaching. I start most of my courses with Bloom’s taxonomy. I like to be transparent with students. In my course on contract law, for example, I tell students that I will start most topics by sharing some key legal principle or rule. I will try to help them to understand that principle or rule by reviewing case law. I will then get them to perform a series of practical exercises in which they will analyse these cases, before moving onto applying rules to novel cases, and then critically evaluating their role in the modern world.

It all sounds so simple, but there are a number of problems in practice. First, there is the selection problem: which bits of information or knowledge should I be exposing students to? Most subjects are vast. There are lots of cases and rules relevant to contract law. Which ones should I include in my courses? I cannot possibly cover everything. I have to make some tradeoffs, but every decision to include some topic leads to the exclusion of another. The standard approach is just to follow the existing textbooks or professional curricula, but some people question the value of this. The status quo is biased towards conservative, non-‘critical’ attitudes toward law. Maybe we should be disrupting and decolonising the curriculum? Sharing different voices and different ideas? It’s a challenge to know what you should and should not include. Furthermore, the more you know about a subject, the more complex it becomes. You start to see how knowledge is one vast interconnected web. Whenever we teach, we cleave this web arbitrarily, rarely at its joints. We strip away the context that helps it all fit together.

Second, there is the value problem. Is the information I am sharing, and asking students to critically evaluate, really important? Is this stuff they need to know? I often kid myself that it is. I will claim that a subject as apparently dry and boring as contract law is intrinsically fascinating because it raises important questions about trust, freedom, reciprocity, economic value and so on. I will also claim that it is eminently practical since people make and enter into contracts all the time. But I’m not so sure that this is true. Many of my students won’t ever use the information I cover again in their lives. They won’t need to remember those obscure Victorian cases on shipping and medical quackery that I cover in such loving detail. Heck, I don’t need to remember them in my own life, and I teach the subject. Furthermore, the deep and important questions relating to trust, freedom and reciprocity can be covered in other, more interesting and more direct ways. And this is, in some ways, a best-case scenario: contract law is probably one of those subjects that lends itself to a credible argument on behalf of the value of the underlying subject matter. Many academics teach incredibly obscure and niche courses whose contents are unlikely to have any lasting importance for their students’ lives.

Third, there is the meta-value problem. Even if the information I am sharing is not, in and of itself, intrinsically or practically valuable, I might argue that students are still learning valuable transferable skills from my courses. For instance, I could (and frequently do) argue that students are learning the capacity for critical and self-reflective awareness as a result of my teaching (in fact, I teach an entire course dedicated to critical thought). Let’s set to one side the question of whether this is true (we’ll return to it in a moment). Assume that it is. Is it, in fact, valuable to learn such meta-skills? The claim is often made that critical thinking skills are valuable from a social perspective: people with the capacity for critical thought are more discerning consumers of information, better problem solvers, better citizens and so on. But I don’t know how true this is. There is plenty of research on the benefits of high intelligence for society and individuals, but there is also quite a bit of evidence to suggest that people with high critical intelligence can be more ideologically entrenched and biased than others. Keith Stanovich is possibly the leading researcher on this issue, documenting how ‘myside’ bias tends not to diminish with intelligence. Instead of people becoming more open to other views and more willing to admit when they are wrong, they engage in motivated reasoning that reinforces existing beliefs and opinions. Similarly, David Robson, in his book The Intelligence Trap, reviews several studies (and some famous anecdotes) suggesting that more intelligent people fall into many cognitive traps, even when they are aware of the potential biases and errors underlying human reasoning.

What about the personal benefits of critical thinking? Cognitive behavioural therapy, which is a popular treatment for many psychological disorders, including depression and anxiety, is, in a sense, a kind of applied critical thinking. The idea underlying CBT is that we fall into certain cognitive traps that lead to psychological distress. For example, we tend to exaggerate negatives, catastrophise, engage in ‘all or nothing’ thinking and so on. CBT tries to get people to identify these cognitive errors and correct them through systematic reevaluation, behavioural experiments and so on. CBT is a well-evidenced therapeutic intervention and, while it is not a miracle cure, it can work well for some people. Given this, we might feel confident that teaching critical thinking could make people more at home in the world and less psychologically distressed. The problem is that the benefits of CBT are hard won. It usually requires extended one-on-one interactions with a therapist who will guide you through the methods and give feedback and encouragement along the way. This is very different from how critical thinking is taught at university. Furthermore, most critical thinking classes at university are not directed at our beliefs about ourselves; they are, rather, directed towards specific subject areas. For example, I teach a course on critical thinking for lawyers that focuses on common errors in legal reasoning, not errors in reasoning about ourselves (though I do bring in a range of non-legal examples and sometimes refer to CBT). Also, balanced against the benefits of CBT, there is evidence suggesting that people with high intelligence are more prone to mood disorders, including anxiety and depression. Ruth Karpinski and her colleagues surveyed over 3000 members of Mensa to see if there was a link between high intelligence and psychological disorders. They found that there was a correlation: Mensa members were about 2.5 times as likely as the general population to experience high levels of anxiety and depression. That said, this was only a correlational study. A similar European study by Navrady et al, with a much larger sample size (over 180,000), found that intelligence was only associated with a higher risk of depression among those who scored highly on neuroticism. Otherwise, intelligence seemed to be protective against psychological disorders, though the effects were small.

Speaking from my own experience, I suspect that I am above average (though not by much) when it comes to the disposition for critical thought. I spend my whole life dissecting arguments and information, probing their truth and persuasiveness from multiple angles. Has this made my life better? I’m not sure. If anything, I suspect it makes me more neurotic, less trusting, and less confident. For example, I’m not sure that I have many strong convictions or principles. Pretty much everything I believe is defeasible and open to doubt. This often leaves me with a lack of motivation or desire, which can be an immense source of frustration for others.

Socrates once said that the unexamined life is not worth living. Teachers love that line. But as Kurt Baier once said, the over-examined life isn’t much to write home about either.


A2 - Teaching Often Fails to Achieve Its Purpose

The previous argument might be interesting to some but it is not, in my view, the most significant problem with teaching. I am happy to concede that teaching might serve a valuable purpose. What I’m much less convinced of is that teaching actually achieves its purposes. Let’s assume that the purposes of teaching align with Bloom’s taxonomy. The goal is to share valuable knowledge, and then to get students to remember, understand, analyse and critically evaluate that knowledge. If possible, the further goal is to get them to achieve this in a specific module by developing metacognitive skills that they can then transfer to other aspects of their lives. Does teaching achieve those ends?

Let me start with some anecdotal evidence. I’ve been teaching the same subjects for years now. I have a good handle on what I want students to achieve in these subjects. I also assess in similar-ish ways year on year. (I say ‘ish’ because I do ‘innovate’ to some extent.) Despite this, I don’t see any discernible improvement in outcomes for my students, nor in their results. Roughly the same number of students achieve first and second class grades each year. The quality of the assessments varies little as well. The abiding impression I get is that the students that do well in my courses would have done well no matter what I said or did (as long as I attained some minimal level of competence). They were self-motivated and would have thrived no matter who was teaching. I’ve had this confirmed, to some extent, when I review the grades of these students upon entry to university and across all other subjects. The single best predictor of how well a student will do in one of my courses is how well they did on their entrance assessments and in their other modules. Furthermore, having spoken to students years after they left university, many of them tell me that they remember little, if anything, of what they learned in my classes. If they remember anything at all, it tends to be the trivial stuff: the day I cancelled class, the day one student ran through the lecture theatre in a chicken costume, the day my PowerPoint presentation wouldn’t work and so on. In short, the benefits of teaching seem to be narrow and transient, and the impact of the teacher (i.e. me) seems to be minimal.

That’s just my impression. Is this confirmed by empirical data? Bryan Caplan’s book The Case Against Education is probably the most damning monograph on the effectiveness of teaching. Caplan argues that the benefits of higher education (which he admits are significant, at least when it comes to income) are all down to a signalling effect. Students that make it through 3-4 years of higher education are signalling to potential employers that they would be good employees. The benefits are not down to any learning that takes place at university. Most professors do not teach anything that students need to know in the long-term; most valuable skills are learned on the job.

Caplan reviews the available evidence on learning in Chapter 3 of his book. He is unimpressed. As he notes at the outset, there is a basic problem when it comes to measuring the effectiveness of teaching:


Measurement is tricky. Using students’ standardized test scores implicitly assumes students learn everything they know in school. What about changes in students’ standardized test scores? A little better, but the basic problem remains: the fact that students improve from grade to grade does not show that schooling caused their improvement. Maybe they’re maturing, or learning in their spare time. Given these doubts, most researchers strongly prefer controlled experiments: randomly give some kids extra education, then measure their surplus knowledge. Unfortunately, all these approaches — controlled experiments included — neglect retention. 
(Caplan 2018, 72)

 

Looking at the available information on long-term retention, Caplan reaches a depressing conclusion. For example, despite spending many hours learning math (algebra, trigonometry, calculus), few adults remember what they have learned. The same holds true for other subjects like history. Basic literacy and numeracy seem to be the only knowledge that is retained, but this is presumably because people have to read and engage with numbers (pay checks, bills etc.) on an ongoing basis. If this didn’t happen, they would forget that too. That’s how our brains tend to work: thinking is hard; if we can get away with it, we let the knowledge atrophy.

Of course, we know this to be true. Unless we are forced to keep up with a given area of study, we tend to retain nothing in the long run. I teach at a law school. I studied all the standard law subjects as an undergraduate (company, equity, land, tort, criminal, constitutional, contract etc). I have retained virtually none of the information I learned. Indeed, I forgot most of contract law before I was required to teach it. I had to relearn on the job.

What about transferable skills and learning to learn? Maybe we forget subject specific knowledge, but retain metacognitive learning skills that we can apply to new domains? Caplan also reviews the evidence on this and finds it lacklustre. For example, commenting on studies of science graduates who were tested on their ability to apply scientific methods outside of their narrow domains of study, Caplan notes:


… college students are bad at reasoning about everyday events despite years of coursework in science and math. Believers in “learning how to learn” should expect students who study science to absorb the scientific method, then habitually use that fruitful method to analyze the world. This scarcely occurs. By and large, college science teaches students what to think about topics on the syllabus, not how to think about the world. 
(Caplan 2018, 89)

 

That said, the results are not entirely dispiriting. Caplan notes that students do appear to learn some skills through college courses. Law students get better at verbal reasoning and science students improve at statistical reasoning. It’s just that the skills tend to be narrow and subject-specific. There is limited evidence for any improvement in general cognitive ability. Furthermore, the effect of college itself on these skills is often questionable. Students who score highly on skills tests at the end of college tend to be the ones that scored highly on such tests before starting college.

Caplan may be too pessimistic. He seems to overemphasise the negative, and he marshals the evidence in order to defend the signalling theory of education’s value throughout his book. Nevertheless, I think his scepticism reveals an important epistemological problem for any university teacher who claims to be doing a good job. I don’t carry out randomised controlled trials on students in my class. I don’t track their progress over the long term. As a result, I have little, if any, information to suggest that they gain anything from my classes. This leaves me in the perpetually troubling position of not knowing whether anything I’m doing is making a difference.


A3 - Any Feedback You Do Receive is Unhelpful

You may question the conclusion of the previous section. Surely teachers do receive feedback about the quality of their teaching? If you teach at a university, you will regularly give students forms and surveys to complete. Students will rate the quality of your teaching on scales from awful to excellent. They will also provide qualitative feedback on what they liked or disliked. Doesn’t this tell you whether you are (or are not) making a difference?

To say that the value of student feedback surveys has been questioned is an understatement. The link between survey results and other measures of teaching effectiveness has been subject to innumerable studies over the years. Indeed, it may be the best researched topic in the entire field of higher education studies. The results are pretty grim. Feedback surveys do not seem to measure the effectiveness of teaching, at least if effectiveness is understood as enhanced cognitive ability as measured by educational assessment. Instead, feedback surveys seem to be a measure of how fluent and likeable a lecturer is. On top of that, surveys are often biased against women, ethnic minorities and non-native language speakers.

The most comprehensive study in recent times is the meta-analysis from Uttl, White and Gonzalez. As they note, many early studies on the link between student surveys and effectiveness were of limited value. Instructors were often surveying their own students and then measuring their success on their own assessments. There was no random allocation of students to different instructors, and no attempt to subject all students to the same final assessment. Furthermore, the studies were generally small in size, often involving little more than 100 students. More recent studies have tried to correct for this by adopting a ‘multisection’ experimental protocol. I’ll let them describe it:


An ideal multisection study design includes the following features: a course has many equivalent sections following the same outline and having the same assessments, students are randomly assigned to sections, each section is taught by a different instructor, all instructors are evaluated using SETs at the same time and before a final exam, and student learning is assessed using the same final exam. 
(Uttl et al 2017, 23)

 

Analysing 97 such multisection studies, Uttl et al find that there is practically no correlation between positive survey outcomes and test results. Only small studies, and studies that do not correct for prior learning, tend to find a positive effect. Their conclusion, which is blunt, is announced in the title of their paper: “Student evaluation of teaching ratings and student learning are not related.”

Carpenter, Witherby and Tauber have also looked at the value of student surveys. Theirs is not a meta-analysis but rather a simple literature review. They note that students are not particularly good judges of how effective their learning is. Students tend to like engaging presenters, not people that challenge them with difficult concepts or the injunction to think for themselves. They like a fluent teaching style, not a challenging one. As a result, students are prone to a number of ‘illusions of learning’ that show up on survey results. There are many famous experiments that reveal this problem. One of the best known is the Dr Fox study from the 1970s. This involved an actor giving a class. The content of the class was deliberately nonsensical and contradictory. The actor delivered the class in two different styles: one hesitant and disfluent; the other confident and fluent. The students rated the second lecture more highly and reckoned they learned a lot from it. This is just one small study but its results are consistent with others.

Carpenter et al are particularly interesting on the phenomenon of active vs passive learning. If you read any book on teaching for higher education, it is likely to encourage you to adopt an active learning approach in the classroom. Instead of being the ‘sage on the stage’, delivering wisdom and knowledge from the lectern, you are supposed to be the ‘guide on the side’, setting exercises for the students, getting them to engage with the material for themselves, and then providing them with feedback on how they did. The claim is that this is a more effective approach to teaching: students retain more and gain more from it. The empirical literature appears to confirm this and it is supported by more basic psychological studies (see, for example, the discussion of this in Make it Stick and Small Teaching).

The problem is that most students hate active learning and often tell you about their hatred of it in the student surveys. As Carpenter et al note:


The passive lecture gives the impression of a fluent, smooth, and seamless learning experience, whereas active learning creates a more disjointed, less fluent experience, in that students may need to think more deeply about, and struggle with, the material to understand and apply it. It is perhaps no surprise, therefore, that many students resist active learning techniques on the grounds that they feel they are not learning…[In one study] students who experienced the passive lecture gave significantly higher ratings of their own learning, and they also rated the instructor as significantly more effective, than did students who experienced the same lesson via active learning. Scores on the test at the end of the lesson, however, revealed a significant advantage for students who experienced active learning compared to students who experienced the passive lecture. 
(Carpenter et al 2020, 140)

 

My own experience with active learning chimes with these findings. In 2020, I created a new course on critical thinking for law students. I read several teaching guides in advance. I drew, in particular, from James Lang’s book Small Teaching, which was recommended to me by several people as a great practical guide to implementing active learning techniques. I drank the active learning Kool-Aid. I decided the course would be all about active learning. Students would be given exercises each week. I would ask them to engage with those exercises first, then give some short lectures explaining important concepts and cognitive tools, and then get them to re-engage with the same or similar exercises. I would provide feedback on these exercises, correcting 20-50 mini assignments each week, explaining where students were doing well and how they could improve.

It was a lot of work from my perspective, but students were expected to put in a commensurate amount of effort. I explained to them at the outset that they might find this approach more disfluent, and perhaps occasionally more uncomfortable, than what they were used to. But I asked them to be patient, explaining the teaching philosophy behind what I was doing and the empirical research that seemed to support it.

The end result? I got the worst feedback I’ve ever received. Many students hated the class. They found it uncomfortable and didn’t know what they were supposed to be doing. They felt they were being unnecessarily challenged by the exercises. I was surprised since I repeatedly explained the intended learning outcomes, provided more feedback than I have ever provided before, and clearly linked the assessment to the in-class exercises. But despite this, several students told me that I wasn’t doing my job properly because I was expecting them to do too much.

Of course, maybe I should just suck it up, keep my head down and persist with this active approach (I probably will). But it’s hard to do so when the feedback is so negative. And this is the problem. If the research is right, then this feedback isn’t particularly relevant, but it’s pretty much all you get in the way of information about how well you are doing. To repeat the point from above: we typically don’t do the randomised controlled experiments to see if students actually benefit from our classes. All we have to go on is their feedback and class results.

Here’s an analogy that might explain the predicament of a teacher. I’ve long been fascinated by the art of stand-up comedy. Comedians spend years honing their craft. They often play to rooms of people that don’t laugh at their jokes, and may even heckle and abuse them. But if they are good, there’s no denying it. They will get the laughs — a constant trickle of feedback that tells them they are doing their job right. Well, teaching is a bit like stand-up comedy without the laughter.*


A4 - Minor Niggles

There are several other minor complaints I have about teaching. These are less important than the three preceding arguments, but they do add to the frustration one experiences while teaching.

First, there are the institutional constraints that make it harder to implement an effective teaching style. There are many of these and some of them might be unique to my own institutional experiences. The obvious one is student numbers. Student numbers still seem to be growing at third level, without corresponding increases in teaching staff. This means we get ever larger student groups to teach with fewer per-student resources. For example, I teach five classes of 150+ students and one class of about 50 students. It’s very hard to do anything interactive or discursive with the larger groups, despite numerous attempts to do so. Things might be better if I taught postgraduate courses or smaller group seminars but, alas, I don’t do any of that. Creeping managerialism also makes teaching harder by increasing the demand for pointless form-filling accountability exercises. This takes away autonomy from teachers, which is one of the few redeeming features of the job.

Second, there are the repetitive, but annoying, student behaviours. I don’t like to complain about students. Many of them struggle with much higher academic workloads, expectations and financial concerns than I ever had. Still, there is no denying that there are repetitive student behaviours that sap away a lot of energy. For example, despite crafting long week-by-week summaries of class content and assessment guides that explain what I’m looking for in assignments, I still get dozens of emails from students asking me questions that would be answered if they took the time to read these documents. In the past week alone, for example, I have received 14 emails from students asking the same question about a word limit on an assignment I set, even though this question is answered in the assignment guide. I suppose I can’t blame students for this. I don’t read lots of things I am sent. But it still grates. Similarly, student attendance and engagement with classes seems to inevitably decline as the semester progresses. In one of my classes, I start out the semester lecturing to over 100 students and, by the end, that can be down to fewer than 30. This is not an unusual problem. I’ve read accounts from academics at ‘elite’ institutions like Harvard and Oxford making the same complaint, and there are many people that actively boast about how few classes they attend (and still succeed academically). Nevertheless, it is dispiriting to see the student numbers dwindle, despite your best efforts to make the classes interesting and to maintain your own enthusiasm. It seems like a referendum on who you are.

This is to say nothing about the practical and ethical challenges of marking student assessments, which I have written about at length before. Suffice to say, this is possibly the most frustrating aspect of the job.

I could go on, but I won’t. I don’t want this to turn into a long ‘woe is me’ memoir. Overall, I think the four preceding arguments provide a prima facie case for thinking that teaching is not a particularly meaningful job: it’s not clear that it serves a valuable purpose, or what its precise purpose should be; even if we could agree upon a purpose, it’s not clear that teaching actually helps to achieve it, or that teachers play a significant role in helping students to achieve it; and the kind of feedback you receive from students tends not to be a good indicator of whether you are doing an effective job and, in fact, may be inversely correlated with how effective your teaching is (although this presumes we have a measure for effective teaching). This is to say nothing of the other minor niggles and annoyances one experiences as a teacher.


2. Objections to the Case Against Teaching

I’ve front-loaded this article with the negative stuff. Is there any reason to think that teaching is more meaningful and fulfilling than the preceding arguments might suggest? Maybe. Here are some objections to what I’ve just argued, along with some replies.


O1 - Nothing lasts forever, why expect teaching to buck this trend?

You could object to my case against teaching insofar as it expects too much. Nothing lasts forever. All humans degrade and die. All our cultural institutions and legacies will crumble to dust. Why expect so much from education? If you think you are going to transform a student’s understanding and ability over the long-term, then you are expecting too much. The best you can hope for is short-term changes. If students need some bit of knowledge or some skill, then they will be forced to retain it by the pressures of work and life. A teacher cannot control for that.

This is fair. If we expect lasting change, then very little of what we do is meaningful. Also, it would be arrogant and coercive to expect students to love our subjects as much as we do. But shouldn’t we expect some medium-to-long-term change? And how short-term is short-term? Most students benefit from classes up to the point of assessment, and then quickly forget everything they have learned. I can’t deny this since it has been my own experience. That seems a bit too short, but maybe it is the best we can hope for.


O2 - Effective Teaching Cannot be Measured

You could object to my case against teaching on the grounds that the benefits of effective teaching cannot be measured, or at least cannot be measured easily. The assumption underlying the empirical work on effective teaching is that if you test students in the right way, you can determine whether teaching has been effective. But perhaps that’s not the right way to go about it. Maybe effective teaching has more nebulous or difficult-to-discern benefits?

I can see where this objection is coming from. Thinking back over my own education, there are some subtle benefits I received from it that probably would not show up on any test. For example, teachers often mentioned important thinkers or concepts in class that I then researched in more detail myself. I remember, in particular, one teacher who briefly ran through the prisoner’s dilemma in class. This caused me to read up on game theory myself. Game-theoretical explanations of morality then became a major component of my PhD thesis. Maybe I would have come across the idea anyway without that teacher’s input, but their mentioning of it did open a door for me. It would be hard to test for that. Perhaps teachers have many such subtle influences over their students’ lives?

The problem with this argument is that, even if it is true, it isn’t particularly uplifting from a teacher’s perspective. Even if you are having such an influence on the students in your classes, you are unlikely to ever know about it — indeed, the students mightn’t be aware themselves. It also makes teaching something of a crapshoot — random things said or done can have a lasting impact. Students may even learn a lesson that is completely antithetical to the one you were trying to teach.

I have an example of this. The only lecture I remember from my undergraduate days (and I’m not kidding about this: it’s the only one I remember) was in Evidence Law. I remember it like it was yesterday. The teacher asked five students to leave the classroom while the rest of us watched a clip from a movie. The clip depicted a crime; it was a particularly notorious scene from the 1972 movie Last Tango in Paris. If you’ve seen the movie, you’ll probably know the one. It involved butter.** This was in the days before trigger warnings and sensitivity to student trauma. Anyway, we watched the scene and then the five students who had left the class returned and had to ask the rest of us about it. They were playing the role of investigating officers or lawyers. I can’t remember which. Now, I’m sure the point of this exercise was to highlight problems in witness testimony. Did everyone in the class agree on what they had just seen? Did they have different memories? Was it all a bit Rashomon-like? But that’s not what I remember about it. What I remember is that the students who watched the clip thought it was their job to make it as difficult as possible for the students who had not seen it to figure out what had happened. It was like a guessing game. Eventually, the lecturer abandoned the exercise once they realised that the students weren’t doing it right. The lesson I took from this is that students are oddly competitive, and if you don’t explain the purpose of an exercise to them then they will subvert it for their own ends.

So did this lecture have an effect on me? It did. As I say, it’s the only one I still remember. But it wasn’t the effect the lecturer intended. It’s possible that lots of the things I do in class could be having a similar, unintentional, effect. I’m not sure that I should be happy about that.


O3 - It’s Not About Outcomes 

You could object to my case against teaching on the grounds that it is too outcome-oriented. Maybe that’s the wrong way to think about it. Since we cannot control the outcomes, and since the outcomes are hard to measure in practice, maybe we should focus more on the day-to-day experiences and the ongoing relationship we have with students? Maybe the goal of teaching should be to create enjoyable and entertaining in-class experiences, no matter what the long-term consequences of this might be? Maybe teachers should dedicate energy to ensuring that students are having fun and being treated with respect, nothing more than that?

I think there is a lot to be said for this. On a previous occasion, I wrote a critique of outcome-oriented approaches to parenting. I suggested that parents who think the goal of parenting is to raise an optimal child are barking up the wrong tree. We don’t really know what an optimal child is or how to go about raising one. What parents can do is avoid obvious harms (like malnutrition, abuse or neglect), create enjoyable experiences for their children, and forge meaningful ongoing relationships with them. Now, I am not going to fall into the trap of claiming that raising a child is like teaching a student. They are very different processes in most respects, but perhaps they are similar in this one respect. Perhaps we should drop the commitment to significant learning outcomes in teaching and focus on the ongoing experiences and relationships instead?

I like this proposal, but there are some problems with it. First, it’s worth noting that it would be quite a transformative reorientation in how most people think about teaching. It would also go against most best-practice guidelines. All university lecturers are now encouraged to plan their curricula around ‘learning outcomes’, and all the guidebooks and empirical research focus on finding the methods that are best able to achieve those outcomes. Much of this ‘best practice’ guidance would have to be abandoned, or reimagined, if we cared less about outcomes. Also, perhaps ironically, shifting to this approach would mean that student surveys are, in fact, a good guide to what works in teaching. Students may not be able to tell you whether they are achieving significant learning outcomes, but they can tell you whether they are having a good time and whether you are treating them with respect.

Second, I would be wary of any claim that teaching is about ongoing relationships and not outcomes. It depends on what is meant by ‘ongoing relationship’, but I have previously explained my views on the ethics of teacher-student relationships. To briefly summarise: I don’t think it is desirable or wise for teachers to have meaningful relationships with students. Intimate relationships are obviously a no-no, but even friendship is, in my view, problematic. I think teachers should be respectful, collegial and obliging, but anything more than that is ethically fraught. In any event, it is practically difficult in the era of mass higher education. You cannot possibly have meaningful relationships with over 500 students, and selecting a handful of them (because they are more vocal or pushy, or you happen to like them?) smacks of arbitrariness and favouritism. This doesn’t mean that we cannot create enjoyable learning experiences — maybe that should be the focus — but assuming that meaningful ongoing relationships should emerge from this doesn’t seem right to me.


O4 - What do you know? You are just a bad teacher

People might object to my case against teaching on the grounds that it stems from some bitterness or incompetence on my own part. Perhaps I am a really bad teacher and I am just rationalising my own incompetence?

I understand the tendency to seek biographical explanations for pessimism. I have read Schopenhauer’s essay on women. It’s hard to imagine that something so misogynistic and hateful doesn’t have its origins in his own life story. His troubled relationship with his mother, maybe? Ultimately, it’s for others to judge my competence, but I’m not sure that this essay stems from incompetence.

For one thing, one of the arguments I am trying to make is that I have no idea whether I am competent or not. I am not sure what the standard for being a good teacher is. If we assume that it is having some lasting impact on student knowledge and skills, then the evidence seems to suggest that most teachers are not particularly good at doing that. But this is beside the point since I don’t collect that kind of evidence for the students taking my classes. So even if this were the right standard, neither I nor most other teachers would know whether we are hitting it.

What I do have to go on are the results of the student feedback surveys in my classes and other, more informal, types of feedback I receive from students and colleagues. By these metrics, my teaching does not appear to be particularly bad. I tried to review my feedback results from previous years before writing this article to make sure I was not distorting the truth. I quickly discovered that I am not a good record keeper. I only have records from 2018 and 2020 (I was on sabbatical in 2019). In those years, my student feedback was generally positive. For example, I taught a module on Banking Law to two separate cohorts of students in 2018 (both over 150 in number). In both cases, more than 90% of respondents to the survey agreed that I was either ‘good’ or ‘very good’ at explaining key concepts and that my lectures were well prepared. Over 75% of students rated me as ‘very good’ on both questions. Furthermore, I got lots of positive comments too, such as:


I find John is brilliant at teaching this subject, his passion and level of knowledge really helps me to understand this module.
…the lectures are very well prepared and the topic matter is explained and demonstrated extremely well.
John you are an amazing professor who explains everything clearly and accurately.

 

Similarly, in my 2020 Contract Law module (which was taught entirely online and at a time when most students seemed to be really hating the learning experience), over 90% of respondents in two separate cohorts agreed that my lectures were well prepared and that I was effective in explaining difficult concepts. I also got lots of positive qualitative comments, such as:


The module is very well organised. I have access to everything I need to achieve the learning outcomes. The podcasts and supplemental materials are in depth and easily accessible. I have enough resources to fully understand the materials and concepts.
It is probably my best organized module. All the podcasts are very helpful and explain everything well. The lectures are well organized too
John was a really good lecturer and the material was very interesting.

 

I am not citing this to blow my own trumpet. Frankly, I find some of it embarrassing. And I receive negative feedback too. Some students find me boring, few find me likeable, and I already mentioned my experiment with the critical thinking module that appeared to backfire. My point is that I have no reason to think I am particularly bad at teaching. All the indicators are essentially positive or neutral. In addition to the feedback surveys, I have been nominated for teaching awards by students on two occasions over the past five years (though I have never submitted an application for such an award)*** and I receive emails from current and former students thanking me for my classes. The latter are nice but I tend to discount their importance. They are few and far between — perhaps a dozen at most over the years — and all the students that really hated my classes are unlikely to contact me.

All that said, I will concede that my frustration with teaching may be linked to my biography in one important way. I’ve read on several occasions that most people that end up in academia have a ‘teacher story’. Somewhere along the line they had a teacher that inspired them and welcomed them into the world of ideas. I don’t have such a story. The one subject area that I now specialise in (philosophy and ethics) is one in which I have never taken a class. I’m entirely self-taught. My sense is that most of the valuable things I have learned I have learned through my own reading and research. In this respect I sympathise with one of the things Jay Parini says in his memoir on teaching:


I often felt that a teacher was someone who got between me and my reading. I used to believe that teachers unfairly attempted to control the nature and pace of my work, my rate and quality of retention, the ultimate direction of my thoughts….If a book was listed on a syllabus, I naturally veered away from it, not toward it. 
(Parini 2005, 9-10)

 

I had the same attitude to my own teachers and my suspicion is that this is the way it is for most people who are really excited by ideas. Teachers play a limited role in their lives. They do the important stuff themselves. But I must be wrong, since I hear so much testimony to the contrary.


O5 - Surely there is something meaningful about teaching?

I have been finding the dark cloud attached to the silver lining throughout this article. What about the silver lining? Is there nothing positive to say about teaching? Sure there is. As a teacher you get to enhance your knowledge and understanding of many interesting things; you sometimes get to facilitate enjoyable discussions and debates among students; and you nearly always learn something yourself from the process. Furthermore, despite creeping managerialism, teaching remains (for me, at least) a relatively autonomous job. Without teaching, I wouldn’t be able to do the research work I do, which remains enjoyable and fulfilling.

There is plenty to like about teaching. It’s just not as noble or inspiring as some people suppose. It’s a job and often a frustrating one.


* Of course some teachers are genuinely funny and may get many laughs. That’s not the point. The point is that there is no equivalent of the laugh when it comes to informative feedback for effective teaching.

** I don’t know why I am being so coy. The scene depicts an anal rape. The actress involved (Maria Schneider) has complained about it in the years since, saying that it was not in the script and that she found it traumatic.

*** I don’t know how teaching awards work in all universities, but at my current one nominated lecturers have to submit a five page application explaining why they are ‘excellent’ teachers. I just can’t bring myself to argue that I am an excellent lecturer. 


Monday, May 24, 2021

Flipping the Script: When do technologies disrupt morality?




Answer: when they flip the social script.

Technologies change how humans perform tasks. Consider what I am doing right now. I’m typing words onto a screen using word processing software. Later, I plan to publish these words on a website where they can be accessed by all and sundry. This is a very different way of writing and sharing one’s thoughts than was the historical norm. If I were living in Europe in, say, the 1600s, I would probably first write out these words by hand using paper and ink; then, if I was lucky and wealthy enough, I might pay to have them printed up as a pamphlet. I would then hand out that pamphlet at street corners and public meetings.

But just because technologies change how humans perform tasks, it does not follow that they will be morally or socially disruptive. Some changes in what we do don’t have substantive ripple effects on our social relations and social organisation. For that to happen, technologies have to do more than simply change what we do; they have to change how we relate to one another.

That, at any rate, is one of the arguments developed by Stephen Barley in his research on technological change in the workplace. Barley argues that it is only when technologies disrupt our ‘role relations’ that they have substantial impacts on the normative and bureaucratic frameworks in which we live out our lives. Barley’s empirical research focuses almost entirely on technology in the workplace, but I think his research has broader lessons. In particular, I think it can help us to distinguish technology that merely changes some day-to-day behaviours from technology that is truly morally disruptive, i.e. capable of changing our social-moral beliefs and practices.

I will develop this argument in the remainder of this article. I do so, first, by outlining the explanatory framework that Barley uses. I will then consider a practical illustration of this explanatory framework drawn from Barley’s research. I will conclude by considering the broader lessons that can be learned from this framework when it comes to understanding technology-induced moral disruption.


1. The Explanatory Framework: All the World’s a Stage…

Let’s consider the explanatory framework. One of my favourite bits of Shakespeare is Jaques’s “All the world’s a stage…” speech from As You Like It. The speech suggests that human life is a bit like a drama played out in seven acts. We play different roles in each act (the infant, the school-boy, the soldier, the lover etc.) and hence our life can be said to follow a script. Of course, Shakespeare’s particular conception of the different roles we play is somewhat limited, and the main focus of the speech is on the ageing process, not necessarily the complexity of human social interactions. Still, the speech is memorable because it seems to capture something true about the human condition. Human life has a dramaturgical aspect to it.

It’s no surprise then to learn that social psychologists and sociologists have developed a dramaturgical theory of human social life. Barley draws from this in his research, taking particular inspiration from the work of Erving Goffman. The essence of the dramaturgical theory is quite straightforward. Humans encounter each other in different contexts in social life — the school, the restaurant, the workplace and so on. In these different contexts we play different roles — the pupil, the waiter, the boss. When doing so, we tend to follow a social script that tells us how we ought to behave. This is not a literal script, handed to us so that we can learn our part. It is, rather, something that we learn through imitation and observation. We see that there is a structured pattern to each social encounter. If we disrupt the script, and try to play a different part, then this can cause anxiety and unease, even if sometimes the disruption is warranted.

One of the classic examples of this dramaturgical theory is the interaction between a waiter and a customer at a restaurant. When you enter a restaurant, you expect your interaction with the waiter to play out in a certain way. You expect to be shown to your table. You expect to be handed the menu. You expect to be asked if you would like anything to drink before you order your food. And so on. If a waiter disrupted the script and asked you what you would like for dessert before you sat down, you would find this very strange.

The dramaturgical theory can be pushed quite far. Each social encounter can be said to play out on a stage. This stage is the physical and material environment in which the actors meet (e.g. the restaurant). The actors sometimes use props in their encounter (e.g. menus, notebooks to record orders and so on). There are also other supporting actors that can influence the interaction (your dinner companions; the chefs in the kitchen).

How does this relate to technology and social disruption? Barley’s research is about technology in the workplace. Drawing from the dramaturgical theory, he argues that workplaces are usually organised around roles and scripts. When you take up a particular job, you are given a role within an organisation. This organisation will occupy a physical stage of some kind (this is true even if it is a digital or remote workplace — more on this in a moment). It will consist of many supporting actors playing other roles. Each of these actors will follow scripts set down by organisational rules and habits.

Technology can have a profound effect on all of this. When you are playing out your role, you may have to use or interact with some new bit of technology. This could be part of the new material environment of the workplace or a prop that you rely upon to play your part. This can change how you play your part. Sometimes the effect might be minimal, only changing what you do but not how you interact with others. Sometimes the change can be more significant, affecting how you interact with other roles and how they interact with you. When this happens, the roles may need to be redefined and the script altered.

Barley’s main contention is that it is only when technology affects role relations (i.e. interactions between the different social roles) that we see the more disruptive changes to workplace norms and organisational rules. Indeed, some of the most disruptive changes arise when technology alters the entire stage upon which the social interaction plays out. When this happens, the actors scramble to figure out new roles and new scripts that fit the new stage.


2. The Impact of the Internet on Car Dealerships

Barley has studied the organisational impact of technology on a range of workplaces over the years. His typical mode of inquiry is ethnographic in nature, i.e. detailed on-site shadowing and observations, coupled with interviews. I’m just going to consider one of his case studies here: the impact of internet sales on car dealerships. I find this case study to be informative, in part because it shows how a technology can completely disrupt the social script associated with a workplace activity.

The focus of Barley’s study is on car sales in the US, specifically California. The traditional script — the one that long predated internet sales — is one that is baked into the American popular consciousness. Barley argues that there are three ‘acts’ to this script. In the first act, the customer would arrive at a car dealership and start to look around. They would be greeted by a salesperson (all male in Barley’s study). The salesperson would engage in lots of small talk, trying to build rapport with the customer, sometimes even lying in the process. As Barley puts it:


… if the salesman noted a car seat in the customer’s car, he would ask if the customer had a child and then inquire about its age. The salesman would then either profess to have a child of roughly the same age or reminisce about when his children were that age (sometimes even if he was childless) 
(Barley 2020, 56)


The goal of this first act was to ‘land’ a customer on a car and get them to agree to a test drive. Some customers would bow out at this point. If not, things would proceed to the second act: the test drive itself. This was a short act, typically lasting about 15 minutes, during which the salesperson would accompany the customer, point out all the features of the car, and answer any questions.

Upon return to the car dealership, the third act would begin. The customer would be invited to a back office to ‘complete the paperwork’. Again, some customers would bow out at this point. If not, the customer and the salesperson would haggle over the price of the vehicle. This act tended to be the most adversarial. The salesperson would insist there was a price below which they could not go. If the customer insisted on a lower price, the salesperson would sometimes leave the office to ‘consult’ with the sales manager. There was often an extended delay as a result, with the explicit goal of building suspense and anxiety for the customer. The salesperson would sometimes return with the manager, who would put additional pressure on the customer to purchase the car. Oftentimes, the salesperson would do things to up the ante, suggesting that they could not guarantee the negotiated price beyond today. The customer, for their part, could also engage in various negotiating tactics, threatening to take their business to another dealer or even disparaging the salesperson to their face. Overall, the tenor of these interactions could be quite unpleasant and tense:


In many cases, the interaction between the customers and salesmen became strained. It was not uncommon for one party to insult the other. Many negotiations, therefore, never reached an agreed-upon price and, hence, a deal. However, if a deal was struck, the atmosphere became less tense… 
(Barley 2020, 57)

 

What is noticeable about this traditional script is how formulaic it often was (standard talking points and negotiating tricks) and also how negative it seemed to be from the customer’s perspective. Customers often saw salespeople as sleazy and dishonest. They often brought negotiating partners with them (family, friends) to counterbalance the onslaught from the dealers.

The internet changed this. By the early 2000s, most dealers had extensive web catalogues of the cars they sold and also back-office internet sales teams. An entirely new stage was set for the process of buying a car. Customers would first browse through the online catalogue, looking at various options, oftentimes armed with knowledge from other websites about makes and models. If they liked something, the website would encourage them to send an email inquiry, which would be followed up with a sales call from the dealership (many online sales processes follow this model). Once they did this, a new script, with two acts to it, would play out.

The first act took place entirely over the phone. The salesperson would talk to the customer about their preferred make and model and give them a price quotation (sometimes they would just leave voice messages that may or may not be followed up by the customer). The price quotation during this phase of the discussion was remarkably honest. The salesperson would tell the customer how much the dealer paid for the vehicle and how much profit they wished to make on the sale. The purchase price quoted was, in Barley’s study, ‘always accurate’ and the profit was relatively minimal, often no more than a few hundred dollars per vehicle. If the customer disputed the price and suggested that another dealer was offering the same make and model at a cheaper price, the salesperson would do one of two things: (i) point out that the customer was mistaken (because the make and model were not the same) or (ii) tell the customer to purchase the vehicle from this other dealer. There was never any haggling over price and none of the standard negotiating tactics were used by the internet salespeople.

If the customer was still interested in the car, they would be invited to the dealership to look at the car, take a test drive and, if they wished, 'complete the paperwork'. This phase of the interaction was often straightforward. Customers that showed up to the dealership typically wanted to make a purchase. If they changed their mind after seeing the vehicle or taking it for a test drive, they would leave amicably. Overall, the atmosphere of the interactions was much more pleasant and much less tense. Customers, indeed, seemed to prefer internet sales in Barley’s study, finding the internet salespeople less ‘pushy’.

Why did this happen? The internet changed the stage for the social interaction and hence required a new script. It equalised the power differential between the salespeople and the customers. Customers were given the power to start the process and could easily terminate it whenever they wished. Customers typically had more information at their fingertips (or at the end of an online search) and salespeople couldn’t get away with the same pressure tactics that they employed during in-person negotiations:


…Internet salesmen [could not] avail themselves of supporting actors to create pressure on the customer to buy. Instead, the Internet salesmen had to work entirely with information contained in databases. Under these conditions, it would be disadvantageous for a salesman to misrepresent the data, because doing so would eventually undermine the sale… the Internet pushed the salesman to be highly factual and to forgo the stance of a negotiator to sell vehicles successfully. 
(Barley 2020, 63)

 

Another way of putting it: the internet transformed car sales from a margin business — in which the goal was to maximise profit on each sale — to a volume business — in which the goal was to maximise sales. The customer benefitted from this technologically-mediated disruption (at least in the dealerships that Barley studied).


3. Lessons for Moral Disruption

As should be obvious from the preceding description, the disruption that the internet caused to car sales changed social-moral beliefs, attitudes and behaviours. Car sales no longer depended on perceived dishonesty, hard bargaining and inequality of power. Instead, honesty and relative equality ruled the day.

This is a welcome form of moral disruption. The traditional process was unpleasant, possibly harmful from the customer’s perspective, and arguably corrosive to the virtue of salespeople. No one would, I think, view the traditional salesperson as a paragon of virtue, at least in their professional lives (they may have been wonderfully virtuous in other respects).

But this is just one case study. Are there any general lessons to be learned? Is Barley right to say that technology is most disruptive when it affects role relations? Could we take Barley’s explanatory framework, apply it to other contexts, and, perhaps, predict the possible direction of technologically-mediated moral disruption? Let me conclude by trying to answer some of these questions.

First, is it true that technology is most disruptive when it affects role relations and not simply tasks? I think this is true, at least to some extent. In previous writings, I have endorsed Michael Tomasello’s theory of the origins of human social morality. In brief, Tomasello (following a philosopher called Stephen Darwall) argues that human social morality is characterised by a ‘second personal’ psychology. We don’t just view the world from our own perspective but can switch perspective to that of other people with whom we interact. In Tomasello’s recounting, this second personal psychology is a role-based psychology. We see other people as occupying certain social roles and we expect them to behave in a manner that fits those roles. This generates concepts of duty and obligation — ‘If you occupy role X, then you ought to behave in manner Y’. If someone fails to live up to their role-related duties, then we develop reactive attitudes toward them. We get angry, upset, jealous, disappointed. This, in turn, can generate moral blame and condemnation.

If Tomasello’s theory is correct, then human social morality is a role-based morality. Our moral beliefs and attitudes centre on the roles that we and others perform. If those social roles get disrupted, and if the expected performances associated with them change, then it stands to reason that there will be greater disruption to social morality. This doesn’t mean that disruptions to role relations are the only thing that matters from a moral perspective, but they are one of the more significant forms of moral disruption.

That said, the concept of a role relation is a little fuzzy and figuring out whether a technology disrupts role relations can be tricky. In obvious cases of disruption — like the shift to internet sales in car dealerships — there may be little disagreement, but other cases leave more room for debate. Barley, for instance, insists that some technologies can change task performance without changing role relations, at least not in a significant way.

One of his go-to examples of this is the relationship between academics (professors, lecturers etc) and administrative assistants in universities (Barley 2020, 30). He points out that in the 1980s, administrative assistants used to type letters and documents for academics, in addition to performing many student-facing tasks (answering queries etc). Nowadays, due to computerisation in the workplace, academics tend to do all their own typing and word processing. Administrative assistants have had to learn to work with new software programs to manage many of their day-to-day tasks, developing a new skills profile in the process. Yet, according to Barley, this has not had much of an impact on the relations between academics and administrative assistants:


... administrative assistants and faculty continue to have roughly the same relationship as they had in the past. There is no doubt about who has the greater status and who works for whom. 
(Barley 2020, 30)

 

This doesn’t ring true for me. I’ve worked in universities for over a decade now and have interacted with many administrative assistants, but I have never thought that I had a higher status than them or that they worked for me or on my behalf. I see us as involved in a common endeavour. Furthermore, while I do not depend on them for most of my day-to-day tasks, administrative assistants provide essential background support for the smooth functioning of the department in which I work. I don’t wish to learn how to use all the complicated web-based apps for managing finances and timetabling. They must do so as part of their jobs. As a result, if anything, I would suggest that the status of administrative assistants has grown.

But this comment from Barley may reveal an assumption that underlies some of his research, namely: that role relations are primarily power relations and ‘significant’ disruptions to them involve some change in the balance of power (this is based on reading two of his case studies; I have not read them all). I don’t see things the same way. Technology can also result in significant moral disruption if it changes what people expect of one another. It seems to me that this clearly has happened in the case of the relationship between academics and administrative assistants. I don’t think they have a standing obligation or duty to do my typing. It would be insulting if I asked them to do so. I can, however, expect them to help with timetabling and room bookings since they have the skills to manage the online platforms for these services. So, even if the power differential hasn’t changed, the moral expectations have.

Could we take Barley’s framework and apply it to other contexts? Of course we could. He and his colleagues have done so on several occasions. Other interesting applications of it (beyond the workplace) might include how technology has disrupted the relationship between politicians and constituents. Instead of relying on door-to-door canvassing and in-person clinics, politicians increasingly rely on social media broadcasts and web-based interactions. It seems obvious that this has changed the content and civility of those interactions to some extent. Likewise, the impact of technology on various human relationships (friendship and intimate relationships) is something I have considered in my own research. Technology can completely change the social script when it comes to those relationships. For instance, it can change how we find friends (online first instead of in person first), how we interact with them (zoom calls, texts and messaging groups instead of in-person meetups), and even who our friends might be (long distance friends, machine ‘friends’). I don’t believe that these technological changes to human relationships are necessarily good or bad, but it seems to me that they are quite disruptive of the previous social scripts.

Can we use this framework to predict the course of future moral disruptions? This is challenging. It seems unlikely that we could make precise predictions about future changes. A lot will depend on (a) the existing social script and role relations and (b) the nature of the technological disruption. Still, we might be able to predict some general patterns. If we go back to the power question, some technological disruptions can have an equalising power by removing advantages that one role has over another. Contrariwise, some disruptions may reinforce and compound existing inequalities. It is possible that we could predict these changes by carefully mapping the existing power relationships and the likely effect of certain technologies on the existing power differentials.


4. Conclusion

This brings me to the end of this article. To briefly recap, I have been looking at Stephen Barley’s explanatory framework for understanding how technology can lead to disruptive social change. Barley’s framework focuses on social scripts and social roles. His claim is that technology is at its most disruptive when it changes the social script and hence how different roles relate to one another. Although he applies this framework to the impact of technology on the workplace, I have argued that it can apply to the impact of technology on social morality. Why? Because social morality is, in large part, dependent on a role-based moral psychology. If we disrupt the roles, we disrupt our expectations of what we owe one another.


Monday, May 17, 2021

What Matters for Moral Status: Behavioural or Cognitive Equivalence?


Here's a new paper. This one is forthcoming in the July issue of the Cambridge Quarterly of Healthcare Ethics. It's part of a special issue dedicated to the topic of other minds. The paper deals with the standards for determining whether an artificial being has moral status. Contrary to Henry Shevlin, I argue that behavioural equivalence matters more than cognitive equivalence. This paper gives me the opportunity to refine some of my previously expressed thoughts on 'ethical behaviourism' and to reply to some recent criticisms of that view. You can access a preprint copy at the links below.


Title: What Matters for Moral Status: Behavioural or Cognitive Equivalence?

Links: Official (to be added); Philpapers; Researchgate; Academia

Abstract: Henry Shevlin’s paper—“How could we know when a robot was a moral patient?” – argues that we should recognize robots and artificial intelligence (AI) as psychological moral patients if they are cognitively equivalent to other beings that we already recognize as psychological moral patients (i.e., humans and, at least some, animals). In defending this cognitive equivalence strategy, Shevlin draws inspiration from the “behavioral equivalence” strategy that I have defended in previous work but argues that it is flawed in crucial respects. Unfortunately—and I guess this is hardly surprising—I cannot bring myself to agree that the cognitive equivalence strategy is the superior one. In this article, I try to explain why in three steps. First, I clarify the nature of the question that I take both myself and Shevlin to be answering. Second, I clear up some potential confusions about the behavioral equivalence strategy, addressing some other recent criticisms of it. Third, I will explain why I still favor the behavioral equivalence strategy over the cognitive equivalence one.


Wednesday, May 12, 2021

Failure: A Philosophical Analysis

 


Kids, you tried your best and you failed miserably. The lesson is: never try. 
(Homer Simpson)

I’m like most people: I spend a lot of time thinking that I am a failure. I see others posting updates online about personal triumphs and successes, and I feel like I don’t measure up. I’m not as successful as they are. I haven’t achieved as much. I have failed at work and failed at life. How can I do better?

I know that I am not alone in these feelings. As best I can tell, most people struggle with perceptions of failure from time to time. There is, now, an entire industry of books, events and podcasts dedicated to helping people cope with failure. The common strategy seems to be to encourage some kind of reframing. Don’t see failure as a sign of your inadequacy but, rather, as an opportunity for growth. For example, Elizabeth Day’s How to Fail Podcast (and related book) takes this approach. As she describes it herself, the purpose is to interview people about their failures and see what these failures “taught them about how to succeed better”. In a similar vein, for many years, Silicon Valley startup founders participated in the now-defunct (?) FailCon, an annual conference celebrating failures in business and what can be learned from them. The message seems to have been: fail fast, learn and then pivot to something better.

Other efforts are afoot to normalise failure. For example, in academia, there was something of a fetish for ‘CVs of failure’ a few years back. The craze was started by Melanie Stefan with an article in the journal Nature. Stefan encouraged academics to keep a record of their failures in order to help others with their setbacks. The craze really took off when Johannes Haushofer publicly posted his own CV of failures, listing all the jobs he failed to get, grants he failed to win, and papers he failed to publish. For some reason his CV went viral and the idea grew legs. More and more people started compiling lists of their own failures.

Despite my own struggles, my sense is that most perceptions of failure are irrational. This includes my own. We have vague and poorly formed beliefs about what constitutes success and what constitutes failure. This vagueness contributes to an over-ascription of failure (and success) to our own lives and hence a lot of unnecessary mental anguish. I think some philosophical analysis and reflection might help to rid us of some of these irrational beliefs. That said, I think it is hard, in the modern world, to completely rid ourselves of a sense of failure.

I try to explain why in what follows. I proceed in four stages. First, I will engage in some conceptual analysis, explaining what failure is and how it differs from similar concepts such as regret and rejection. Second, I will look at the different ways in which people can fail, paying particular attention to the various ‘levels of abstraction’ at which we can understand ourselves and our failures. Third, I will argue that many perceptions of failure are irrational insofar as they assume we have more control over our lives than we actually do. Fourth, I will consider some strategies for coping with the inevitability of failure.


1. What is failure?

I’ll start with the standard philosophical practice of clarifying the concept under consideration. What, exactly, is failure? As a first pass, I would say that failure can be defined, roughly, as follows:


Failure = a phenomenon that arises whenever we have made some effort, or ought to have made some effort, to achieve a goal or attain a standard, but have not done so. This is typically, though not necessarily, associated with negative self-directed emotions. These emotions can include things like shame, guilt, blame and so on.


This definition captures three important ideas. First, that failure is defined relative to some goal or standard, i.e. for failure to exist there must be some outcome we were trying to realise or some standard of excellence we were trying to obtain, but failed to do so. Second, that failure is linked to perceptions of control and responsibility: we believe that it was within our power to obtain the goal or standard. Third, that failure is often, though not necessarily, associated with negative personal emotions. We tend to feel bad about ourselves as a result of our failures. I say that these negative emotions are ‘not necessary’ because, as noted in the introduction, a common coping strategy nowadays is to view failures in a positive light: as something from which we can learn.

Failure, so defined, can be distinguished from other cognate concepts. Failure, for instance, is not the same thing as regret. I wrote a long analysis of regret on another occasion. There, I defined regret as a negative comparative emotion. We regret things we have done based on some counterfactual comparison with things we could have done or ought to have done. Many times, regret is linked to failure. If we pick some goal and fail to achieve it, then it is quite likely that we will regret several of the choices we made along the way. If I fail to achieve my goal of running a marathon before I am 40, I might regret all those times I chose to sit on the couch watching TV instead of training. But regret is not necessarily linked to failure. Sometimes we can regret our successes. For example, I sometimes regret that I have spent so much of my life writing academic articles. Writing and publishing those articles were goals that I set for myself and I have succeeded in achieving (many of) them, but doing so came at a cost: I could have spent that time doing something else. I regret the life I could have lived.

Failure can also be distinguished from rejection. Rejection is dependent on other people. If I submit an article for publication, and it is rejected, that is because other people didn’t like it, didn’t like me, didn’t feel that it measured up to their standards and so on. Acceptance by other people can often be part of one’s personal goals and ambitions. In this sense, failure can arise, in part, because of rejection by other people. If we take Johannes Haushofer’s CV of failures as an illustration, then we can see that most of his examples of failures are, in fact, rejections by other people. All those jobs he failed to get, grants he failed to win and articles he failed to publish were, at least in part, the result of rejection. But failure doesn’t have to be linked to rejection. You can set yourself goals and standards that are not dependent on the approval and acceptance of others. Indeed, one of the keys to overcoming the dark side of failure might be to set goals that are not so dependent on other people.

Finally, failure can be distinguished from loss. This is a distinction that Beverley Clack makes in her book How to be a Failure and Still Live Well. Loss is an inevitable part of human life. We all age, we all die, we all fade away. Everything we care about will eventually be lost. Loss is beyond our control; failure is not. As Clack puts it:


The notion of failure reflects a sense of responsibility for an outcome that could have been avoided. Loss, on the other hand, cannot be avoided, regardless of how careful we are, for its experience reflects the very nature of life.

(Clack 2020, 75)

 

As we will see in a moment, one of Clack’s main arguments is that many things we currently perceive as failures are better perceived as a form of loss, and when we perceive them in this way we might lose some of the negative emotions associated with failure. You cannot blame yourself for the inevitability of loss.


2. The Many Different Faces of Failure

There are many different ways to be a failure. This is one of the reasons why perceptions of failure are so common. In the definition just provided, I suggested that failure is linked to both goals and standards. This gives rise to two primary forms of failure:


Goal-related failure: This is a discrete failure to achieve some particular outcome, e.g. failing to run a marathon, failing to get a book published, failing to show up to your child’s football game.

 

Standard-related failure: This is a more general and potentially ongoing failure to achieve some standard of performance, e.g. failing to be honest with your partner, failing to be diligent in responding to emails, failing to try your hardest at work.

 

This is just the beginning of the complexity of failure. Pretty much everything we do can be said to have some combination of goals and standards attached to it. Consider running a marathon. The goal might be to finish the race in under four hours. The standard might be to maintain focus and determination throughout the race. It is possible to fail at one or both of these things. Admittedly, the distinction between goals and standards is a bit fuzzy. I like to think of standards as things that apply throughout the performance of some activity and goals as what is supposed to happen at the end of the activity. But some standards can be broken down into discrete sub-goals. For example, if I wish to be diligent in my responses to emails, I could set myself a series of sub-goals that might help me to obtain that standard, e.g. respond to 5 emails before 10am each day. Still, despite its fuzziness, I think the distinction is useful.

Since pretty much everything we do comes with some combination of goals and standards, this means that the possibility of failure is endemic to human life. Nevertheless, I think it makes sense to analyse failure as something that can arise at different levels of abstraction. Two such levels strike me as being particularly important:


Task-related failure: This is failure that is associated with specific tasks that we perform. These tasks can be discrete, one-off affairs (e.g. running a marathon before the age of 40) or repetitive daily habits (e.g. responding to email). The point is that they are reasonably specific and temporally-bounded activities. They start and end at identifiable times.

 

Role-related failure: This is failure that is associated with different roles that we occupy in life. Roles are usually made up of bundles of tasks and standards. Some roles persist throughout our lives (e.g. being a citizen), some are temporally bounded (e.g. being a member of a jury). Some roles are self-chosen (e.g. being a writer) and some are socially constructed and imposed upon us (e.g. being white/black etc).

 

Role-related failure is probably the most interesting kind of failure. Roles can be large or small and oftentimes come with shifting goals and standards. It is, consequently, easy for failure to persist in social roles and for us to perceive ourselves as failures across multiple roles. Thinking just about myself for a moment, here are some of the roles I occupy and some of the perceptions of failure that could be associated with them:


Academic failure: I am a failure as an academic because I have not published enough, supervised enough PhD students, won enough research funding etc etc.

 

Gender failure: I am a failure as a man because I am not strong enough, tough enough, financially successful enough, emotionally stable enough etc etc.

 

Parental failure: I am a failure as a father because I have not spent enough time with my children, provided for them adequately, given them a headstart in life etc etc.

 

Citizen failure: I am a failure as a citizen because I have not voted in recent elections, become involved in local organisations and activities, kept up to date with political news etc etc.

 

To be clear, I am not claiming that I actually do perceive myself as a failure across these multiple roles. My point is simply that because I occupy multiple social roles, and because each role has goals and standards that I could fail to obtain, it is possible for the perception of role-related failure to be quite persistent and pervasive. This is why I suspect that perceptions of role-related failure are often the most psychologically troubling.


3. The Irrationality of Perceptions of Failure

Although the possibility of failure is endemic to human life, I believe that many of our perceptions of failure are irrational or unwarranted. In particular, I think we have a tendency to over-ascribe failure to our lives and that this is responsible for much unnecessary torment. In support of this thesis, I offer the following four arguments.


A1 - The Imprecision Problem

Oftentimes we have a very imprecise conception of the goals and standards we need to attain in order to be successful. This is particularly true for social roles that have multiple goals and standards and, hence, multiple dimensions along which success and failure can be measured. This can make us unsure of whether we are a success or failure and thus, depending on our mindset, make it easy for us to ascribe failure to our lives without warrant.

For instance, what does it mean to be a successful academic? One measure of success is the number of publications in peer-reviewed journals. This is a nice, readily identifiable figure. But how many publications is enough to be considered a success? Is 20 enough? 40? 120? I’m not sure anyone knows. In any event, sheer volume of publication might not be the best measure of success. Perhaps it is the number of publications in top-ranked journals? But, then, which ranking system should you use? Or perhaps it is the number of citations? Or h-index? Or i-10 index? Or maybe it is the sum of research funding you have been awarded? Or the number of PhD students you have supervised? Maybe it is the number of media mentions? Maybe it is all of these things? Maybe this is overly research-focused? Maybe we should focus on student evaluations of teaching? There are many different ways of measuring academic ‘success’ and it is very unlikely that anyone succeeds along all of these measures. Consequently, it is easy for perceptions of failure to persist.

The imprecision problem can also give rise to the problem of shifting goalposts. Sometimes the goals and standards of success change. Sometimes the standard gets pushed higher. Sometimes it changes entirely. Peter Higgs — he of Higgs boson fame — once remarked that he would not be able to get a job as an academic scientist today because the standards had changed so much since he was a graduate student. The number of publications expected of entry-level academics nowadays is much higher than it was in the 60s. This was a startling and sobering admission. By any common-sense understanding of success, Peter Higgs is an extraordinarily successful scientist; but the modern version of him would not be given a chance to become a success. He would fail early and be filtered out of the system. This problem of shifting goalposts can affect anyone in any social role.

The imprecision problem can also lead to the problem of limitless failure. If our goals and standards are imprecise, it is possible for us to constantly shift our own expectations higher: to want more and more. As a result, we are never happy and never appreciate our own successes. The arch-pessimist Arthur Schopenhauer was alive to this problem nearly two centuries ago. He claimed that the tragedy of the human condition is that we desire more than we can ever have. Once we satisfy one desire another one takes its place. Unless we escape the power of desire, we will live on a perpetual treadmill of failure.


A2 - The Lack of Value Problem

Oftentimes the goals and standards associated with success lack value. This can be true objectively (anyone assessing those goals and standards rationally and reflectively would agree that they lack value) or subjectively (other people think they have value, but they lack value for us). In addition to this, some goals and standards are ambivalent or multivalent. There is both good and bad associated with them. This means that what we initially think of as the sine qua non of success can turn out to be a bitter pill. If we obtain the standard or goal, we might change our mind and decide that it is not actually a mark of success. We will have failed even in our apparent successes.

Beverley Clack, in her book on failure, suggests that this problem is particularly true of some of the models of success that pervade the modern world. Clack is one of those academics who use the term ‘neoliberalism’ to name a monster that must be slain. Neoliberalism is often poorly defined, if it is defined at all. In this context, it means something like the tendency to view all human life and activity through the lens of economic markets and to use economic success as the ultimate measure of success. In other words, you, as an individual, are deemed to be a success — in a neoliberal world — if you are an economic success: you have a lucrative career, have lots of purchasing power, own the right bundle of assets (house, car etc). The problem is that this economic model of success is something that was imposed upon us. We did not choose it for ourselves. Also, many of its metrics of success lack value or are of dubious value. Owning the right assets can lead to high levels of indebtedness and ongoing financial anxiety. Having a lucrative career can suck away all your time and energy from family and friends. Is that really the mark of success?

Unsurprisingly, I am sympathetic to this line of argument. I have, after all, written an entire book about the problems with work and the dangers of assuming that flourishing and meaning ought to be derived from one’s career. Clack’s point, however, is a generalisable one. We often end up following someone else’s script and living up to their ideals. It’s important to step back on occasion and ask yourself whether you are pursuing a path that you actually find valuable. If not, your success will be illusory and fragile.


A3 - The Finitude Problem

Human life is finite. We have finite time, finite resources and finite minds. This finitude is one of the defining features of existence. As noted earlier, Clack argues that finitude means that loss, as distinct from failure, is an inevitable part of human life. But even if we accept this distinction, finitude poses problems for our perception of failure too.

We occupy many roles in life and we have many choices to make. It is virtually impossible to be successful across all roles and choices. Sacrifices have to be made. This means that all of our successes tend to come with a significant, but unavoidable, opportunity cost. There are other things we could have done that would also have given us some sense of success, but we chose a different path that prevented us from doing so. The problem is that we don’t accept these opportunity costs. Our ambition is limitless and so we feel like failures for not having it all.

I mentioned, earlier, the example of writing and its opportunity costs. I have spent large stretches of my life setting goals for writing books and articles. I have succeeded in many (but certainly not all) of these endeavours. This has come at a cost. For example, it has meant that I did not spend time on teaching preparation or academic administration. This has been frustrating and led to a sense of failure when it comes to my performance in those work-related roles. But I cannot have it all. I cannot achieve the same level of success across roles.

I know this at an intellectual level. However, I continue to perceive myself as a failure for not achieving it all. This is irrational, but it is difficult to shake the illusion of failure. One reason for this is that external pressures constantly remind me of the paths not chosen and the failure that results. So it’s not just a matter of individual ambition, but external pressure that encourages this lingering sense of failure.


A4 - The Lack of Control Problem

The biggest problem with our perception of failure is our tendency to take responsibility for things that are not within our control. Again, this is not always a voluntary choice. Sometimes we are encouraged to see something as being within our control when it is not. This fosters perceptions of failure that are not really warranted.

I mentioned earlier the distinction between rejection and failure. You fail when your efforts fall short; you get rejected when someone else turns you down. Oftentimes failure and rejection get intermingled when the goals we set ourselves depend on acceptance by other people, but ultimately we don’t control how other people respond to us. Linking rejection to failure is not only unjustified but, I believe, a recipe for unhappiness.

As noted, writers and academics are frequently guilty of this. Academics think that they have failed if they fail to get published, win grants, get tenure and so on. But each of these successes depends on acceptance or rejection by some set of gatekeepers. It is a mistake to think that you can take responsibility for this and that rejection is a mark of failure. There are many reasons for rejections; some of them have nothing to do with you or the quality of your work. What you can take responsibility for (remembering what I just said about finitude) is how many grants you apply for, how often you write and submit articles for publication, how many jobs you apply for, and so on. If you are going to have metrics of success, this is what you should focus on, not on the acceptances. Here’s one possible example of this. Years ago, I mentioned how I liked the writing advice that Paul Silvia gives to academics in his book How to Write A Lot. He says that your goal should be to become the ‘most rejected author’ in your department. I thought this was an interesting reframing. Being frequently rejected means you haven’t given up: you are still writing and submitting pieces. I don’t quite agree that being rejected should be the goal. But still trying should be, and that means decoupling your sense of success from rejection.

Of course, it is not that simple. If we are constantly encouraged to take responsibility for things beyond our control, and to link our perceptions of success and failure to those things, then we run the risk of perpetual disappointment. The aforementioned Beverley Clack dedicates a long chapter to gender and failure in her book How to be a Failure. It is an interesting cultural history of standards of success and failure for men and women, covering everyone from Thomas Aquinas to Germaine Greer. The key insight from it is that women’s standards of success are often linked to things they cannot control, in particular to beauty and fertility. You are a success, as a woman, as a result of physical traits that you may or may not have. Furthermore, since all beauty fades and fertility ends (notwithstanding improvements in technology), all female lives are doomed to end in failure because they will all inevitably lose what is supposed to make them successful. Everyone will experience these losses, but they are often more intimately associated with feminised standards of success. As Clack comments:


…there is a long history of the female body being used as a container for the anxieties which arise from the experience of being embodied beings subject to change. The body is never just a physical entity: it is always shaped by social mores and values. By exploring success and failure through concepts routinely played out on the female body, it is possible to discern powerful anxieties regarding loss running beneath cultural narratives of what it is to fail. 
(Clack 2020, 54)

 

Linking female success to physical traits like beauty and fertility is not inevitable. It is, in part at least, socially constructed, but if you are a woman it is difficult to completely shake free of these socially constructed standards. You can acknowledge their contingency, accept that they are beyond your control, and try to follow your own playbook, but this will always be hard. This is the lesson for anyone in a similar situation.


4. Conclusion: Minimising Failure

I don’t want the wrong conclusion to be drawn from this analysis. I am not claiming that failure is not a real thing, nor that one cannot learn from failure. Of course it is and of course you can. Just as there is a danger of linking our failure to things beyond our control, there is also a danger of assuming we are powerless to change our fates.

But it is important to take a rational and sensible approach to failure. If I’m right, then failure is often over-ascribed and we torment ourselves unnecessarily as a result. Is it possible to correct for these misperceptions and live a more tranquil life? Possibly. I’m not really in the business of selling solutions; I haven’t got it all worked out and I’m suspicious of anyone that claims they do. Nevertheless, three suggestions emerge from the preceding discussion:


  • Recognise that, since there are many ways to fail (many roles, many tasks), there are also many ways to succeed. If one particular role or task is not working out, try to focus on another (if it is within your power to do so). You cannot possibly succeed at everything; but you can succeed at some things.
  • Be critical of the goals and standards that define your perceptions of failure. Take the time to step back and ask whether they really suit you and make you happy. Oftentimes, standards of success and failure are thrust upon us by external forces. They hold no value or allure for us. It may be hard to completely shake them, but you can at least recognise them for what they are and approach them with a healthy sense of the absurd.
  • Focus on what is within your control. Try not to link your perceptions of success to things that are beyond your control. You will tend to be frustrated if you do. This is simple advice — standard since the time of the Stoics — but harder to implement than you might think. It takes time to figure out what is really within your control and reorganise your goals appropriately.