Thursday, June 10, 2021

Axiological Futurism: The Systematic Study of the Future of Values



Here's a new paper that I have forthcoming in the journal Futures. This paper has had a long gestation. I wrote it more than two and a half years ago. At the time, I thought it was one of my more interesting pieces. Apparently journal editors disagreed. Vehemently. This paper was rejected from four different journals before finally, on the fifth try, finding a home. I still think it is among the more interesting and important pieces I have written. It makes the case for 'axiological futurism', which is the study of the future of values. This links to my ongoing work on technology and moral revolutions. See what you think of it. Links to prepublication versions are available below. The final version will be open access (thanks to Ireland's new open access publishing agreements with Elsevier et al) and I will post that once it is available.


Title: Axiological Futurism: The Systematic Study of the Future of Values

Links: Official; Philpapers; Researchgate; Academia

Abstract: Human values seem to vary across time and space. What implications does this have for the future of human value? Will our human and (perhaps) post-human offspring have very different values from our own? Can we study the future of human values in an insightful and systematic way? This article makes three contributions to the debate about the future of human values. First, it argues that the systematic study of future values is both necessary in and of itself and an important complement to other future-oriented inquiries. Second, it sets out a methodology and a set of methods for undertaking this study. Third, it gives a practical illustration of what this ‘axiological futurism’ might look like by developing a model of the axiological possibility space that humans are likely to navigate over the coming decades. 

 

 


Wednesday, June 2, 2021

Interviews about Automation and Utopia


I did a few interviews about my book Automation and Utopia over the past year. Once upon a time I was meticulous in documenting and recording all of them on this website (admittedly more for my own records than for the benefit of readers). For some reason, I have lapsed in this practice recently. Anyway, here's my attempt to correct for this oversight with a list of recent interviews. If you want to learn more about the book, check them out:


Tuesday, June 1, 2021

The Technological Future of Love




Here's a new draft paper. This one was co-authored with Sven Nyholm and Brian Earp. It is about the role that technology can and will play in reshaping the value of love. It is forthcoming in an edited collection entitled Love: Past, Present and Future. You can access a preprint version of the paper at the links below.

Title: The Technological Future of Love

Authors: Sven Nyholm, John Danaher, Brian Earp

Links: Philpapers; Researchgate; Academia

Abstract: How might emerging and future technologies (sex robots, love drugs, anti-love drugs, or algorithms to track, quantify, and 'gamify' romantic relationships) change how we understand and value love? We canvass some of the main ethical worries posed by such technologies, while also considering whether there are reasons for "cautious optimism" about their implications for our lives. Along the way, we touch on some key ideas from the philosophies of love and technology. 

 

 

Thursday, May 27, 2021

The Trouble with Teaching: Is Teaching a Meaningful Job?





Frederick William Sanderson was the headmaster of the Oundle School in England from 1892 to 1922. In a hagiographic biography, HG Wells celebrated him as ‘the greatest man’ he had ever known. If Wells’s reflections, and those of former pupils and colleagues, are anything to go by, Sanderson must have been an impressive figure. Consider, for example, the following recollection from a former student. Having discovered the student taking notes in the school library after dark, Sanderson reprimanded him for his breach of discipline but then, calming down, asked him what he was reading:


He looked over the notes I had been taking and they set his mind going. He sat down beside me to read them. They dealt with the development of metallurgical processes, and he began to talk to me of discovery and the values of discovery, the incessant reaching out of men towards knowledge and power, the significance of this desire to know and make and what we in the school were doing in that process. We talked, he talked for nearly an hour in that still nocturnal room. It was one of the greatest, most formative hours in my life... 'Go back to bed, my boy. We must find some time for you in the day for this’

 

It’s hard not to be moved by this. Sanderson seems to have had a positive impact on his students. He cultivated a sense of wonder in them and, at least in this case, transformed their lives. He may be the quintessential example of a teacher whose career embodied the highest aspirations of that profession.

I have been teaching at universities for over a decade. Although I have never bothered to count, I estimate that I have given over 1000 lectures/seminars and interacted with over 1500 students. When I started out, I was enthusiastic about teaching. I enjoyed the challenge of explaining difficult concepts; of facilitating lively discussions and debates; of encouraging the students (I would never call them ‘mine’ as some do) to think for themselves.

Over the past decade, my enthusiasm has waned. I still enjoy aspects of teaching — and I will talk about those aspects in what follows — but overall I find teaching quite frustrating. I don’t think it is a particularly meaningful job, despite what some people claim. In fact, many times I find it disheartening. I’m sure some of this is my own fault — maybe I don’t try hard enough or care enough about the students — but I think some of it is inherent to the nature of teaching, and to the problems of teaching in a modern third-level institution.

In the remainder of this article, I will try to explain the reasons for my frustration. I will draw heavily from my own experiences of teaching. I will also examine the extent to which my experiences are (or are not) mirrored in the empirical research. I write this with two hopes in mind. First, I hope that other academics and instructors might find it useful to have someone articulate these views on teaching. Perhaps they have had similar thoughts and would like to know that they are not alone. Second, I hope that someone will convince me that I am wrong. 


1. The Case Against Teaching

I have written a lot about meaningful work in the past. My book, Automation and Utopia, deals with the topic at length. In that book, I conclude that work as a whole is structurally bad and that non-work alternatives are more meaningful. Given this argument, you might suspect that my analysis of teaching is biased from the outset. Since I think work in the modern world is structurally bad, it stands to reason that I would not be a huge fan of teaching. It is just a particular case study in the awfulness of work.

But this is not my view. In Automation and Utopia, I did not conclude that all forms of work are necessarily bad. I conceded that despite the structural conditions that make work worse than it ought to be, some forms of work, including my own, can be quite good. My job as an academic has many benefits. I work in a university that is relatively devoid of managerialism (at least when compared to universities in the UK). I have a lot of autonomy. I can research whatever I like and there is very little interference with how I teach and assess my modules. I am essentially free to develop my skills and hone my craft. Furthermore, I work with people I generally like and I have a chance to enrich the minds of the students I encounter. On paper, everything is good. I have one of those jobs that scores highly on standard conceptions of meaningful work.

In practice it is a different story. I’ll explain why in two steps. First, in the remainder of this section, I will outline four arguments for thinking that teaching is less meaningful than you might think. Second, in the subsequent section, I will consider some objections to this negative assessment. In this first part of the analysis, I will be looking at teaching from a largely (though not entirely) consequentialist perspective. In other words, I will be working with the assumption that one of the things that makes teaching meaningful (on paper) is that it serves a valuable purpose (education) and that teachers can derive meaning from their work to the extent that they contribute to that valuable purpose. I don’t think this is an overly controversial assumption, but I will consider some criticisms of it later on.

Anyway, here are the four arguments against teaching understood in those terms.


A1 - The Purpose and Value of Education is Questionable

You might think it is just obvious that education is valuable. Our world is, after all, one that rewards educated people. Educated workers typically earn more money, have more stable personal lives, and are generally better equipped to manage the vagaries of work in a knowledge economy. It is not that things are easy for them, but it would be a lot harder without an education. Competencies such as literacy and numeracy are practically essential in the modern world, and higher-order cognitive abilities such as the capacity for critical reflective thought, problem-solving and the ability to evaluate different sources of information are highly sought after.

I buy this argument, at least at an abstract level. When it comes to education in general, and in particular schooling in basic competencies such as literacy and numeracy, I am sure that education does serve a valuable purpose. Where I struggle is when it comes to the purpose of the particular classes and subjects I teach, and the practical challenge of converting abstract purposes into specific learning outcomes for those classes.

What is it that I should be doing in class every day? Here’s a definition of teaching that I have long admired and, indeed, quoted in my own teaching statements:


…the real aim of education [is]: to waken a student to his or her potential, and to pursue a subject of considerable importance without the restrictions imposed by anything except the inherent demands of the material. 
(Parini 2005, 10)

 

But what does that mean? What is a student’s potential? Does it vary from student to student? Everyone is unique, so this would stand to reason. So is it really possible for me, as a teacher, to waken each individual to their unique potential? Also, what are the inherent demands of the subject? It’s not clear. It turns out that I may like this quote because it is so vague. It speaks to the highfalutin aspirations of teaching as a profession, but means relatively little in practice. The purpose is vague and its value unclear.

There are more practical guides to what the purpose of education might be. Most lecturers are introduced to Bloom’s taxonomy of learning outcomes when they do courses on how to teach. Originally formulated in 1956, this taxonomy has been refined and expanded over the years. Many people claim that these refinements are an improvement on the original. I’m not so sure. I think the original, with its simplicity and hierarchical organisation, is much more memorable than all the subsequent riffs upon it. The original is a hierarchy that ascends from knowledge and comprehension, through application and analysis, to synthesis and evaluation.



The basic idea is that a well-designed course/module will enable students to ascend through this hierarchy of learning outcomes. The teacher will begin by sharing some key information they want the student to remember and understand. This may be done in the classroom in the form of a lecture, or through reading lists and textbooks. Then they will help the students analyse and apply this information, breaking it down into its key components and seeing the relationships between different concepts. Then they will move on to synthesising and critically evaluating this information. Do the ideas and arguments hold up to scrutiny? Are they true? Do they have value?

This provides a neat structure for teaching and, at least on the face of it, a clear guide as to the purpose of teaching. I start most of my courses with Bloom’s taxonomy. I like to be transparent with students. In my course on contract law, for example, I tell students that I will start most topics by sharing some key legal principle or rule. I will try to help them to understand that principle or rule by reviewing case law. I will then get them to perform a series of practical exercises in which they will analyse these cases, before moving on to applying rules to novel cases, and then critically evaluating their role in the modern world.

It all sounds so simple, but there are a number of problems in practice. First, there is the selection problem: which bits of information or knowledge should I be exposing students to? Most subjects are vast. There are lots of cases and rules relevant to contract law. Which ones should I include in my courses? I cannot possibly cover everything. I have to make some tradeoffs, but every decision to include some topic leads to the exclusion of another. The standard approach is just to follow the existing textbooks or professional curricula, but some people question the value of this. The status quo is biased towards conservative, non-‘critical’ attitudes toward law. Maybe we should be disrupting and decolonising the curriculum? Sharing different voices and different ideas? It’s a challenge to know what you should and should not include. Furthermore, the more you know about a subject, the more complex it becomes. You start to see how knowledge is one vast interconnected web. Whenever we teach, we cleave this web at arbitrary points. We strip away the context that helps it all fit together.

Second, there is the value problem. Is the information I am sharing, and asking students to critically evaluate, really important? Is this stuff they need to know? I often kid myself that it is. I will claim that a subject as apparently dry and boring as contract law is intrinsically fascinating because it raises important questions about trust, freedom, reciprocity, economic value and so on. I will also claim that it is eminently practical since people make and enter into contracts all the time. But I’m not so sure that this is true. Many of my students won’t ever use the information I cover again in their lives. They won’t need to remember those obscure Victorian cases on shipping and medical quackery that I cover in such loving detail. Heck, I don’t need to remember them in my own life and I teach the subject. Furthermore, the deep and important questions relating to trust, freedom and reciprocity can be covered in other, more interesting and more direct ways. And this is, in some ways, a best case scenario: contract law is probably one of those subjects that lends itself to a credible argument on behalf of the value of the underlying subject matter. Many academics teach incredibly obscure and niche courses whose contents are unlikely to have any lasting importance for their students’ lives.

Third, there is the meta-value problem. Even if the information I am sharing is not, in and of itself, intrinsically or practically valuable, I might argue that students are still learning valuable transferable skills from my courses. For instance, I could (and frequently do) argue that students are learning the capacity for critical and self-reflective awareness as a result of my teaching (in fact, I teach an entire course dedicated to critical thought). Let’s set to one side the question of whether this is true (we’ll return to it in a moment). Assume that it is. Is it, in fact, valuable to learn such meta-skills? The claim is often made that critical thinking skills are valuable from a social perspective: people with the capacity for critical thought are more discerning consumers of information, better problem solvers, better citizens and so on. But I don’t know how true this is. There is plenty of research on the benefits of high intelligence for society and individuals, but there is also quite a bit of evidence to suggest that people with high critical intelligence can be more ideologically entrenched and biased than others. Keith Stanovich is possibly the leading researcher on this issue, documenting how ‘myside’ bias tends not to diminish with intelligence. Instead of people becoming more open to other views and more willing to admit when they are wrong, they engage in motivated reasoning that reinforces existing beliefs and opinions. Similarly, David Robson, in his book The Intelligence Trap, reviews several studies (and some famous anecdotes) suggesting that more intelligent people fall into many cognitive traps, even when they are aware of the potential biases and errors underlying human reasoning.

What about the personal benefits of critical thinking? Cognitive behavioural therapy, which is a popular treatment for many psychological disorders, including depression and anxiety, is, in a sense, a kind of applied critical thinking. The idea underlying CBT is that we fall into certain cognitive traps that lead to psychological distress. For example, we tend to exaggerate negatives, catastrophise, engage in ‘all or nothing’ thinking and so on. CBT tries to get people to identify these cognitive errors and correct them through systematic reevaluation, behavioural experiments and so on. CBT is a well-evidenced therapeutic intervention and while it is not a miracle cure, it can work well for some people. Given this, we might feel confident that teaching critical thinking could make people more at home in the world and less psychologically distressed. The problem is that the benefits of CBT are hard won. It usually requires extended one-on-one interactions with a therapist who will guide you through the methods and give feedback and encouragement along the way. This is very different from how critical thinking is taught at university. Furthermore, most critical thinking classes at university are not directed at our beliefs about ourselves; they are, rather, directed towards specific subject areas. For example, I teach a course on critical thinking for lawyers that focuses on common errors in legal reasoning, not errors in reasoning about ourselves (though I do bring in a range of non-legal examples and sometimes refer to CBT). Also, balanced against the benefits of CBT, there is evidence suggesting that people with high intelligence are more prone to mood disorders, including anxiety and depression. Ruth Karpinski and her colleagues surveyed over 3000 members of Mensa to see if there was a link between high intelligence and psychological disorders. They found that there was a correlation: Mensa members were about 2.5 times as likely to experience high levels of anxiety and depression. That said, this was only a correlational study. A similar European study by Navrady et al, with a much larger sample size (over 180,000), found that intelligence was only associated with a higher risk of depression among those who scored highly on neuroticism. Otherwise, intelligence seemed to be protective against psychological disorders, though the effects were small.

Speaking from my own experience, I suspect that I am above average (though not by much) when it comes to the disposition for critical thought. I spend my whole life dissecting arguments and information, probing their truth and persuasiveness from multiple angles. Has this made my life better? I’m not sure. If anything, I suspect it makes me more neurotic, less trusting, and less confident. For example, I’m not sure that I have many strong convictions or principles. Pretty much everything I believe is defeasible and open to doubt. This often leaves me with a lack of motivation or desire. This can be an immense source of frustration for others.

Socrates once said that the unexamined life is not worth living. Teachers love that line. But as Kurt Baier once said, the over-examined life isn’t much to write home about either.


A2 - Teaching Often Fails to Achieve Its Purpose

The previous argument might be interesting to some but it is not, in my view, the most significant problem with teaching. I am happy to concede that teaching might serve a valuable purpose. What I’m much less convinced of is that teaching actually achieves its purposes. Let’s assume that the purposes of teaching align with Bloom’s taxonomy. The goal is to share valuable knowledge, and then to get students to remember, understand, analyse and critically evaluate that knowledge. If possible, the further goal is to get them to achieve these goals in a specific module by developing metacognitive skills that they can then transfer to other aspects of their lives. Does teaching achieve those ends?

Let me start with some anecdotal evidence. I’ve been teaching the same subjects for years now. I have a good handle on what I want students to achieve in these subjects. I also assess in similar-ish ways year on year. (I say ‘ish’ because I do ‘innovate’ to some extent.) Despite this, I don’t see any discernible improvement in outcomes for my students, nor in their results. Roughly the same number of students achieve first and second class grades each year. The quality of the assessments varies little as well. The abiding impression I get is that the students that do well in my courses would have done well no matter what I said or did (as long as I attained some minimal level of competence). They were self-motivated and would have thrived no matter who was teaching. I’ve had this confirmed, to some extent, by reviewing the grades of these students upon entry to university and across all other subjects. The single best predictor of how well a student will do in one of my courses is how well they did on their entrance assessments and in their other modules. Furthermore, having spoken to students years after they left university, many of them tell me that they remember little, if anything, of what they learned in my classes. If they remember anything at all, it tends to be the trivial stuff: the day I cancelled class, the day one student ran through the lecture theatre in a chicken costume, the day my PowerPoint presentation wouldn’t work and so on. In short, the benefits of teaching seem to be narrow and transient, and the impact of the teacher (i.e. me) seems to be minimal.

That’s just my impression. Is this confirmed by empirical data? Bryan Caplan’s book The Case Against Education is probably the most damning monograph on the effectiveness of teaching. Caplan argues that the benefits of higher education (which he admits are significant, at least when it comes to income) are all down to a signalling effect. Students that make it through 3-4 years of higher education are signalling to potential employers that they would be good employees. The benefits are not down to any learning that takes place at university. Most professors do not teach anything that students need to know in the long-term; most valuable skills are learned on the job.

Caplan reviews the available evidence on learning in Chapter 3 of his book. He is unimpressed. As he notes at the outset, there is a basic problem when it comes to measuring the effectiveness of teaching:


Measurement is tricky. Using students’ standardized test scores implicitly assumes students learn everything they know in school. What about changes in students’ standardized test scores? A little better, but the basic problem remains: the fact that students improve from grade to grade does not show that schooling caused their improvement. Maybe they’re maturing, or learning in their spare time. Given these doubts, most researchers strongly prefer controlled experiments: randomly give some kids extra education, then measure their surplus knowledge. Unfortunately, all these approaches — controlled experiments included — neglect retention. 
(Caplan 2018, 72)

 

Looking at the available information on long-term retention, Caplan reaches a depressing conclusion. For example, despite spending many hours learning math (algebra, trigonometry, calculus), few adults remember what they have learned. The same holds true for other subjects like history. Basic literacy and numeracy seem to be the only knowledge that is retained, but this is presumably because people have to read and engage with numbers (pay checks, bills etc) on an ongoing basis. If this didn’t happen, they would forget that too. That’s how our brains tend to work: thinking is hard; if we can get away with it, we let the knowledge atrophy.

Of course, we know this to be true. Unless we are forced to keep up with a given area of study, we tend to retain nothing in the long run. I teach at a law school. I studied all the standard law subjects as an undergraduate (company, equity, land, tort, criminal, constitutional, contract etc). I have retained virtually none of the information I learned. Indeed, I forgot most of contract law before I was required to teach it. I had to relearn on the job.

What about transferable skills and learning to learn? Maybe we forget subject specific knowledge, but retain metacognitive learning skills that we can apply to new domains? Caplan also reviews the evidence on this and finds it lacklustre. For example, commenting on studies of science graduates who were tested on their ability to apply scientific methods outside of their narrow domains of study, Caplan notes:


… college students are bad at reasoning about everyday events despite years of coursework in science and math. Believers in “learning how to learn” should expect students who study science to absorb the scientific method, then habitually use that fruitful method to analyze the world. This scarcely occurs. By and large, college science teaches students what to think about topics on the syllabus, not how to think about the world. 
(Caplan 2018, 89)

 

That said, the results are not entirely dispiriting. Caplan notes that students do appear to learn skills through college courses. Law students get better at verbal reasoning and science students improve at statistical reasoning. It’s just that the skills tend to be narrow and subject-specific. There is limited evidence for any improvement in general cognitive ability. Furthermore, the effect of college itself on these skills is often questionable. Students who score highly on skills tests tend to be the ones that score highly on such tests before starting college.

Caplan may be too pessimistic. He seems to overemphasise the negative, and he marshals the evidence in order to defend the signalling theory of education’s value throughout his book. Nevertheless, I think his scepticism reveals an important epistemological problem for any university teacher who claims to be doing a good job. I don’t carry out randomised controlled tests on students in my class. I don’t track their progress over the long term. As a result, I have little, if any, information to suggest that they gain anything from my classes. This leaves me in the perpetually troubling position of not knowing whether anything I’m doing is making a difference.


A3 - Any Feedback You Do Receive is Unhelpful

You may question the conclusion of the previous section. Surely teachers do receive feedback about the quality of their teaching? If you teach at a university, you will regularly give students forms and surveys to complete. Students will rate the quality of your teaching on scales from awful to excellent. They will also provide qualitative feedback on what they liked or disliked. Doesn’t this tell you whether you are (or are not) making a difference?

To say that the value of student feedback surveys has been questioned is an understatement. The link between survey results and other measures of teaching effectiveness has been subject to innumerable studies over the years. Indeed, it may be the best researched topic in the entire field of higher education studies. The results are pretty grim. Feedback surveys do not seem to measure the effectiveness of teaching, at least if effectiveness is understood as enhanced cognitive ability as measured by educational assessment. Instead, feedback surveys seem to be a measure of how fluent and likeable a lecturer is. On top of that, surveys are often biased against women, ethnic minorities and non-native language speakers.

The most comprehensive study in recent times is the meta-analysis from Uttl, White and Gonzalez. As they note, many early studies on the link between student surveys and effectiveness were of limited value. Instructors were often surveying their own students and then measuring their success on their own assessments. There was no random allocation of students to different instructors, and no attempt to subject all students to the same final assessment. Furthermore, the studies were generally small in size, often involving little more than 100 students. More recent studies have tried to correct for this by adopting a ‘multisection’ experimental protocol. I’ll let them describe it:


An ideal multisection study design includes the following features: a course has many equivalent sections following the same outline and having the same assessments, students are randomly assigned to sections, each section is taught by a different instructor, all instructors are evaluated using SETs at the same time and before a final exam, and student learning is assessed using the same final exam. 
(Uttl et al 2017, 23)

 

Analysing 97 such multisection studies, Uttl et al find that there is practically no correlation between positive survey outcomes and test results. Only small studies, and studies that do not correct for prior learning, tend to find a positive effect. Their conclusion, which is blunt, is announced in the title of their paper: “Student evaluation of teaching ratings and student learning are not related.”

Carpenter, Witherby and Tauber have also looked at the value of student surveys. Theirs is not a meta-analysis but rather a simple literature review. They note that students are not particularly good judges of how effective their learning is. Students tend to like engaging presenters, not people that challenge them with difficult concepts or the injunction to think for themselves. They like a fluent teaching style, not a challenging one. As a result, students are prone to a number of ‘illusions of learning’ that show up on survey results. There are many famous experiments that reveal this problem. One of the best known is the Dr Fox study from the 1970s. This involved an actor giving a class. The content of the class was deliberately nonsensical and contradictory. The actor delivered the class in two different styles: one hesitant and disfluent; the other confident and fluent. The students rated the second lecture more highly and reckoned they learned a lot from it. This is just one small study but its results are consistent with others.

Carpenter et al are particularly interesting on the phenomenon of active vs passive learning. If you read any book on teaching for higher education, it is likely to encourage you to adopt an active learning approach in the classroom. Instead of being the ‘sage on the stage’, delivering wisdom and knowledge from the lectern, you are supposed to be the ‘guide on the side’, setting exercises for the students, getting them to engage with the material for themselves, and then providing them with feedback on how they did. The claim is that this is a more effective approach to teaching. Students retain more and gain more from this approach. The empirical literature appears to confirm this, and it is supported by more basic psychological studies (see, for example, the discussion of this in Make it Stick and Small Teaching).

The problem is that most students hate active learning and often tell you about their hatred of it in the student surveys. As Carpenter et al note:


The passive lecture gives the impression of a fluent, smooth, and seamless learning experience, whereas active learning creates a more disjointed, less fluent experience, in that students may need to think more deeply about, and struggle with, the material to understand and apply it. It is perhaps no surprise, therefore, that many students resist active learning techniques on the grounds that they feel they are not learning…[In one study] students who experienced the passive lecture gave significantly higher ratings of their own learning, and they also rated the instructor as significantly more effective, than did students who experienced the same lesson via active learning. Scores on the test at the end of the lesson, however, revealed a significant advantage for students who experienced active learning compared to students who experienced the passive lecture. 
(Carpenter et al 2020, 140)

 

My own experience with active learning chimes with these findings. In 2020, I created a new course on critical thinking for law students. I read several teaching guides in advance. I drew, in particular, from James Lang’s book Small Teaching, which was recommended to me by several people as a great practical guide to implementing active learning techniques. I drank the active learning Kool-Aid. I decided the course would be all about active learning. Students would be given exercises each week. I would ask students to engage with those exercises first, then give some short lectures explaining important concepts and cognitive tools, and then get them to reengage with the same or similar exercises. I would provide feedback on these exercises, correcting 20-50 mini assignments each week, explaining where students were doing well and how they could improve.

It was a lot of work from my perspective, but students were expected to put in a commensurate amount of effort. I explained to them at the outset that they might find this approach more disfluent, and perhaps occasionally more uncomfortable, than what they were used to. But I asked them to be patient, explaining the teaching philosophy behind what I was doing and the empirical research that seemed to support it.

The end result? I got the worst feedback I’ve ever received. Many students hated the class. They found it uncomfortable and didn’t know what they were supposed to be doing. They felt they were being unnecessarily challenged by the exercises. I was surprised since I repeatedly explained the intended learning outcomes, provided more feedback than I have ever provided before, and clearly linked the assessment to the in-class exercises. But despite this, several students told me that I wasn’t doing my job properly because I was expecting them to do too much.

Of course, maybe I should just suck it up, keep my head down and persist with this active approach (I probably will). But it’s hard to do so when the feedback is so negative. And this is the problem. If the research is right, then this feedback isn’t particularly relevant, but it’s pretty much all you get in the way of information about how well you are doing. To repeat the point from above: we typically don’t do the randomised controlled experiments to see if students actually benefit from our classes. All we have to go on is their feedback and class results.

Here’s an analogy that might explain the predicament of a teacher. I’ve long been fascinated by the art of stand-up comedy. Comedians spend years honing their craft. They often play to rooms of people that don’t laugh at their jokes, and may even heckle and abuse them. But if they are good, there’s no denying it. They will get the laughs — a constant trickle of feedback that tells them they are doing their job right. Well, teaching is a bit like stand-up comedy without the laughter.*


A4 - Minor Niggles

There are several other minor complaints I have about teaching. These are less important than the three preceding arguments, but they do add to the frustration one experiences while teaching.

First, there are the institutional constraints that make it harder to implement an effective teaching style. There are many of these and some of them might be unique to my own institutional experiences. The obvious one is student numbers. Student numbers still seem to be growing at third level, without corresponding increases in teaching staff. This means we get ever larger student groups to teach with fewer per-student resources. For example, I teach five classes of 150+ students and one class of about 50 students. It’s very hard to do anything interactive or discursive with the larger groups, despite numerous attempts to do so. Things might be better if I taught postgraduate courses or smaller group seminars but, alas, I don’t do any of that. Creeping managerialism also makes teaching harder by increasing the demand for pointless form-filling accountability exercises. This takes away autonomy from teachers, which is one of the few redeeming features of the job.

Second, there are the repetitive, but annoying, student behaviours. I don’t like to complain about students. Many of them struggle with much higher academic workloads, expectations and financial concerns than I ever had. Still, there is no denying that there are repetitive student behaviours that sap away a lot of energy. For example, despite crafting long week-by-week summaries of class content and assessment guides that explain what I’m looking for in assignments, I still get dozens of emails from students asking me questions that would be answered if they took the time to read these documents. In the past week alone, for example, I have received 14 emails from students asking the same question about a word limit on an assignment I set, even though this question is answered in the assignment guide. I suppose I can’t blame students for this. I don’t read lots of things I am sent. But it still grates. Similarly, student attendance and engagement with classes seems to inevitably decline as the semester progresses. In one of my classes, I start out the semester lecturing to over 100 students and, by the end, that can be down to less than 30. This is not an unusual problem. I’ve read accounts from academics at ‘elite’ institutions like Harvard and Oxford making the same complaint, and there are many people that actively boast about how few classes they attend (and still succeed academically). Nevertheless, it is dispiriting to see the student numbers dwindle, despite your best efforts to make the classes interesting and to maintain your own enthusiasm. It seems like a referendum on who you are.

This is to say nothing about the practical and ethical challenges of marking student assessments, which I have written about at length before. Suffice to say, this is possibly the most frustrating aspect of the job.

I could go on, but I won’t. I don’t want this to turn into a long ‘woe is me’ memoir. Overall, I think the four preceding arguments provide a prima facie case for thinking that teaching is not a particularly meaningful job: it’s not clear that it serves a valuable purpose, or what its precise purpose should be; even if we could agree upon a purpose, it’s not clear that teaching actually helps to achieve that purpose, or that teachers play a significant role in helping students to achieve that purpose; and the kind of feedback you receive from students tends not to be a good indicator of whether you are doing an effective job and, in fact, may be inversely correlated with how effective your teaching is (although this presumes we have a measure for effective teaching). This is to say nothing about the other minor niggles and annoyances one experiences as a teacher.


2. Objections to the Case Against Teaching

I’ve front-loaded this article with the negative stuff. Is there any reason to think that teaching is more meaningful and fulfilling than the preceding arguments might suggest? Maybe. Here are some objections to what I’ve just argued, along with some replies.


O1 - Nothing lasts forever, so why expect teaching to buck this trend?

You could object to my case against teaching insofar as it expects too much. Nothing lasts forever. All humans degrade and die. All our cultural institutions and legacies will crumble to dust. Why expect so much from education? If you think you are going to transform a student’s understanding and ability over the long term, then you are expecting too much. The best you can hope for is short-term changes. If students need some bit of knowledge or some skill, then they will be forced to retain it by the pressures of work and life. A teacher cannot control for that.

This is fair. If we expect lasting change, then very little of what we do is meaningful. Also, it would be arrogant and coercive to expect students to love our subjects as much as we do. But shouldn’t we expect some medium-to-long-term change? And how short-term is short-term? Most students benefit from classes up to the point of assessment, and then quickly forget everything they have learned. I can’t deny this since it has been my own experience. That seems a bit too short, but maybe it is the best we can hope for.


O2 - Effective Teaching Cannot be Measured

You could object to my case against teaching on the grounds that the benefits of effective teaching cannot be measured, or at least cannot be measured easily. The assumption underlying the empirical work on effective teaching is that if you test students in the right way, you can determine whether teaching has been effective. But perhaps that’s not the right way to go about it. Maybe effective teaching has more nebulous or difficult-to-discern benefits?

I can see where this objection is coming from. Thinking back over my own education, there are some subtle benefits I received from it that probably would not show up on any test. For example, teachers often mentioned important thinkers or concepts in class that I then researched in more detail myself. I remember, in particular, one teacher who briefly ran through the prisoner’s dilemma in class. This caused me to read up on game theory myself. Game theoretical explanations of morality then became a major component of my PhD thesis. Maybe I would have come across the idea anyway without that teacher’s input, but their mentioning of it did open a door for me. It would be hard to test for that. Perhaps teachers have many such subtle influences over their students’ lives?

The problem with this argument is that, even if it is true, it isn’t particularly uplifting from a teacher’s perspective. Even if you are having such an influence on the students in your classes, you are unlikely to ever know about it — indeed, the students mightn’t be aware themselves. It also makes teaching something of a crapshoot — random things said or done can have a lasting impact. Students may even learn a lesson that is completely antithetical to the one you were trying to teach.

I have an example of this. The only lecture I remember from my undergraduate days (and I’m not kidding about this: it’s the only one I remember) was in Evidence Law. I remember it like it was yesterday. The teacher asked five students to leave the classroom while the rest of us watched a clip from a movie. The clip depicted a crime. The clip was a particularly notorious scene from the 1972 movie Last Tango in Paris. If you’ve seen the movie, you’ll probably know the one. It involved butter.** This was in the days before trigger warnings and sensitivity to student trauma. Anyway, we watched the scene and then the five students who had left the class returned and had to ask the rest of us about it. They were playing the role of investigating officers or lawyers. I can’t remember which. Now, I’m sure the point of this exercise was to highlight problems in witness testimony. Did everyone in the class agree on what they had just seen? Did they have different memories? Was it all a bit Rashomon-like? But that’s not what I remember about it. What I remember is that the students who watched the clip thought it was their job to make it as difficult as possible for the students who had left the room to figure out what had happened. It was like a guessing game. Eventually, the lecturer abandoned the exercise once they realised that the students weren’t doing it right. The lesson I took from this is that students are oddly competitive, and if you don’t explain the purpose of an exercise to them they will subvert it for their own ends.

So did this lecture have an effect on me? It did. As I say, it’s the only one I still remember. But it wasn’t the effect the lecturer intended. It’s possible that lots of the things I do in class could be having a similar, unintentional, effect. I’m not sure that I should be happy about that.


O3 - It’s Not About Outcomes 

You could object to my case against teaching on the grounds that it is too outcome-oriented. Maybe that’s the wrong way to think about it. Since we cannot control the outcomes, and since the outcomes are hard to measure in practice, maybe we should focus more on the day-to-day experiences and the ongoing relationship we have with students? Maybe the goal of teaching should be to create enjoyable and entertaining in-class experiences, no matter what the long-term consequences of this might be? Maybe teachers should dedicate energy to ensuring that students are having fun and being treated with respect, nothing more than that?

I think there is a lot to be said for this. On a previous occasion, I wrote a critique of outcome-oriented approaches to parenting. I suggested that parents who think the goal of parenting is to raise an optimal child are barking up the wrong tree. We don’t really know what an optimal child is or how to go about raising one. What parents can do is avoid obvious harms (like malnutrition, abuse or neglect), create enjoyable experiences for their children, and forge meaningful ongoing relationships with them. Now, I am not going to fall into the trap of claiming that raising a child is like teaching a student. They are very different processes in most respects, but perhaps they are similar in this one respect. Perhaps we should drop the commitment to significant learning outcomes in teaching and focus on the ongoing experiences and relationships instead?

I like this proposal, but there are some problems with it. First, it’s worth noting that it would be quite a transformative reorientation in how most people think about teaching. It would also go against most best practice guidelines. All university lecturers are now encouraged to plan their curricula around ‘learning outcomes’, and all the guidebooks and empirical research focus on finding the methods that are best able to achieve those outcomes. Much of this ‘best practice’ guidance would have to be abandoned, or reimagined, if we cared less about outcomes. Also, perhaps ironically, shifting to this approach would mean that student surveys are, in fact, a good guide to what works in teaching. Students may not be able to tell you whether they are achieving significant learning outcomes, but they can tell you whether they are having a good time and whether you are treating them with respect.

Second, I would be wary of any claim that teaching is about ongoing relationships and not outcomes. It depends on what is meant by ‘ongoing relationship’, but I have previously explained my views on the ethics of teacher-student relationships. To briefly summarise, I don’t think it is desirable or wise for teachers to have meaningful relationships with students. Intimate relationships are obviously a no-no, but even friendship is, in my view, problematic. I think teachers should be respectful, collegial and obliging, but anything more than that is ethically fraught. In any event, it is practically difficult in the era of mass higher education. You cannot possibly have meaningful relationships with over 500 students, and selecting a handful of them (because they are more vocal or pushy or you happen to like them?) smacks of arbitrariness and favouritism. This doesn’t mean that we cannot create enjoyable learning experiences — maybe that should be the focus — but assuming that meaningful ongoing relationships should emerge from this doesn’t seem right to me.


O4 - What do you know? You are just a bad teacher

People might object to my case against teaching on the grounds that it stems from some bitterness or incompetence on my own part. Perhaps I am a really bad teacher and I am just rationalising my own incompetence?

I understand the tendency to seek biographical explanations for pessimism. I have read Schopenhauer’s essay on women. It’s hard not to imagine that something so misogynistic and hateful has its origins in his own life story. His troubled relationship with his mother, maybe? Ultimately, it’s for others to judge my competence, but I’m not sure that this essay stems from incompetence.

For one thing, one of the arguments I am trying to make is that I have no idea whether I am competent or not. I am not sure what the standard for being a good teacher is. If we assume that it is having some lasting impact on student knowledge and skills, then the evidence seems to suggest that most teachers are not particularly good at doing that. But this is irrelevant since I don’t collect that kind of evidence for students taking my classes. So even if this were the right standard, neither I nor most teachers would know whether we were hitting it.

What I do have to go on are the results of the student feedback surveys in my classes and other, more informal, types of feedback I receive from students and colleagues. By these metrics, my teaching does not appear to be particularly bad. I tried to review my feedback results from previous years before writing this article to make sure I was not distorting the truth. I quickly discovered that I am not a good record keeper. I only have records from 2018 and 2020 (I was on sabbatical in 2019). In those years, my student feedback was generally positive. For example, I taught a module on Banking Law to two separate cohorts of students in 2018 (both over 150 in number). In both cases, more than 90% of respondents to the survey agreed that I was either ‘good’ or ‘very good’ at explaining key concepts and that my lectures were well prepared. Over 75% of students rated me as ‘very good’ on both questions. Furthermore, I got lots of positive comments too, such as:


I find John is brilliant at teaching this subject, his passion and level of knowledge really helps me to understand this module.
…the lectures are very well prepared and the topic matter is explained and demonstrated extremely well.
John you are an amazing professor who explains everything clearly and accurately.

 

Similarly, in my 2020 Contract Law module (which was taught entirely online and at a time when most students seemed to be really hating the learning experience), over 90% of respondents in two separate cohorts agreed that my lectures were well prepared and that I was effective in explaining difficult concepts. I also got lots of positive qualitative comments, such as:


The module is very well organised. I have access to everything I need to achieve the learning outcomes. The podcasts and supplemental materials are in depth and easily accessible. I have enough resources to fully understand the materials and concepts.
It is probably my best organized module. All the podcasts are very helpful and explain everything well. The lectures are well organized too
John was a really good lecturer and the material was very interesting.

 

I am not citing this to blow my own trumpet. Frankly, I find some of it embarrassing. And I receive negative feedback too. Some students find me boring, few find me likeable, and I already mentioned my experiment with the critical thinking module that appeared to backfire. My point is that I have no reason to think I am particularly bad at teaching. All the indicators are essentially positive or neutral. In addition to the feedback surveys, I have been nominated for teaching awards by students on two occasions over the past five years (though I have never submitted an application for such an award)*** and I receive emails from current and former students thanking me for my classes. The latter are nice but I tend to discount their importance. They are few and far between — perhaps a dozen at most over the years — and all the students that really hated my classes are unlikely to contact me.

All that said, I will concede that my frustration with teaching may be linked to my biography in one important way. I’ve read on several occasions that most people that end up in academia have a ‘teacher story’. Somewhere along the line they had a teacher that inspired them and welcomed them into the world of ideas. I don’t have such a story. The one subject area that I now specialise in (philosophy and ethics) is one in which I have never taken a class. I’m entirely self-taught. My sense is that most of the valuable things I have learned I have learned through my own reading and research. In this respect I sympathise with one of the things Jay Parini says in his memoir on teaching:


I often felt that a teacher was someone who got between me and my reading. I used to believe that teachers unfairly attempted to control the nature and pace of my work, my rate and quality of retention, the ultimate direction of my thoughts….If a book was listed on a syllabus, I naturally veered away from it, not toward it. 
(Parini 2005, 9-10)

 

I had the same attitude to my own teachers and my suspicion is that this is the way it is for most people who are really excited by ideas. Teachers play a limited role in their lives. They do the important stuff themselves. But I must be wrong since I hear so much testimony to the contrary.


O5 - Surely there is something meaningful about teaching?

I have been finding the dark cloud attached to the silver lining throughout this article. What about the silver lining? Is there nothing positive to say about teaching? Sure there is. As a teacher you get to enhance your knowledge and understanding of many interesting things; you sometimes get to facilitate enjoyable discussions and debates among students; and you nearly always learn something yourself from the process. Furthermore, despite creeping managerialism, teaching remains (for me, at least) a relatively autonomous job. Without teaching, I wouldn’t be able to do the research work I do, which remains enjoyable and fulfilling.

There is plenty to like about teaching. It’s just not as noble or inspiring as some people suppose. It’s a job and often a frustrating one.


* Of course some teachers are genuinely funny and may get many laughs. That’s not the point. The point is that there is no equivalent of the laugh when it comes to informative feedback for effective teaching.

** I don’t know why I am being so coy. The scene depicts an anal rape. The actress involved (Maria Schneider) has complained about it in the years since, saying that it was not in the script and that she found it traumatic.

*** I don’t know how teaching awards work in all universities, but at my current one nominated lecturers have to submit a five page application explaining why they are ‘excellent’ teachers. I just can’t bring myself to argue that I am an excellent lecturer. 


Monday, May 24, 2021

Flipping the Script: When do technologies disrupt morality?




Answer: when they flip the social script.

Technologies change how humans perform tasks. Consider what I am doing right now. I’m typing words onto a screen using word processing software. Later, I plan to publish these words on a website where they can be accessed by all and sundry. This is a very different way of writing and sharing one’s thoughts than was the historical norm. If I was living in Europe in, say, the 1600s, I would probably first write out these words by hand using paper and ink, then, if I was lucky and wealthy enough, I might pay to have them printed up as a pamphlet. I would then hand out that pamphlet at street corners and public meetings.

But just because technologies change how humans perform tasks, it does not follow that they will be morally or socially disruptive. Some changes in what we do don’t have substantive ripple effects on our social relations and social organisation. For that to happen, technologies have to do more than simply change what we do; they have to change how we relate to one another.

That, at any rate, is one of the arguments developed by Stephen Barley in his research on technological change in the workplace. Barley argues that it is only when technologies disrupt our ‘role relations’ that they have substantial impacts on the normative and bureaucratic frameworks in which we live out our lives. Barley’s empirical research focuses almost entirely on technology in the workplace, but I think it has broader lessons. In particular, I think it can help us to distinguish technology that merely changes some day-to-day behaviours from technology that is truly morally disruptive, i.e. capable of changing our social-moral beliefs and practices.

I will develop this argument in the remainder of this article. I do so, first, by outlining the explanatory framework that Barley uses. I will then consider a practical illustration of this explanatory framework drawn from Barley’s research. I will conclude by considering the broader lessons that can be learned from this framework when it comes to understanding technology-induced moral disruption.


1. The Explanatory Framework: All the World’s a Stage…

Let’s consider the explanatory framework. One of my favourite bits of Shakespeare is Jaques’s “All the world’s a stage…” speech from As You Like It. The speech suggests that human life is a bit like a drama played out in seven acts. We play different roles in each act (the infant, the school-boy, the soldier, the lover etc) and hence our life can be said to follow a script. Of course, Shakespeare’s particular conception of the roles we play is somewhat limited, and the main focus of the speech is on the ageing process rather than the complexity of human social interaction. Still, the speech is memorable because it captures something true about the human condition: human life has a dramaturgical aspect to it.

It’s no surprise then to learn that social psychologists and sociologists have developed a dramaturgical theory of human social life. Barley draws from this in his research, taking particular inspiration from the work of Erving Goffman. The essence of the dramaturgical theory is quite straightforward. Humans encounter each other in different contexts in social life — the school, the restaurant, the workplace and so on. In these different contexts we play different roles — the pupil, the waiter, the boss. When doing so, we tend to follow a social script that tells us how we ought to behave. This is not a literal script, handed to us so that we can learn our part. It is, rather, something that we learn through imitation and observation. We see that there is a structured pattern to each social encounter. If we disrupt the script, and try to play a different part, then this can cause anxiety and unease, even if sometimes the disruption is warranted.

One of the classic examples of this dramaturgical theory is the interaction between a waiter and a customer at a restaurant. When you enter a restaurant, you expect your interaction with the waiter to play out in a certain way. You expect to be shown to your table. You expect to be handed the menu. You expect to be asked if you would like anything to drink before you order your food. And so on. If a waiter disrupted the script and asked you what you would like for dessert before you sat down, you would find this very strange.

The dramaturgical theory can be pushed quite far. Each social encounter can be said to play out on a stage. This stage is the physical and material environment in which the actors meet (e.g. the restaurant). The actors sometimes use props in their encounter (e.g. menus, notebooks to record orders and so on). There are also other supporting actors that can influence the interaction (your dinner companions; the chefs in the kitchen).

How does this relate to technology and social disruption? Barley’s research is about technology in the workplace. Drawing from the dramaturgical theory, he argues that workplaces are usually organised around roles and scripts. When you take up a particular job, you are given a role within an organisation. This organisation will occupy a physical stage of some kind (this is true even if it is a digital or remote workplace — more on this in a moment). It will consist of many supporting actors playing other roles. Each of these actors will follow scripts set down by organisational rules and habits.

Technology can have a profound effect on all of this. When you are playing out your role, you may have to use or interact with some new bit of technology. This could be part of the new material environment of the workplace or a prop that you rely upon to play your part. This can change how you play your part. Sometimes the effect might be minimal, only changing what you do but not how you interact with others. Sometimes the change can be more significant, affecting how you interact with other roles and how they interact with you. When this happens, the roles may need to be redefined and the script altered.

Barley’s main contention is that it is only when technology affects role relations (i.e. interactions between the different social roles) that we see the more disruptive changes to workplace norms and organisational rules. Indeed, some of the most disruptive changes arise when technology alters the entire stage upon which the social interaction plays out. When this happens, the actors scramble to figure out new roles and new scripts that fit the new stage.


2. The Impact of the Internet on Car Dealerships

Barley has studied the organisational impact of technology on a range of workplaces over the years. His typical mode of inquiry is ethnographic in nature, i.e. detailed on-site shadowing and observations, coupled with interviews. I’m just going to consider one of his case studies here: the impact of internet sales on car dealerships. I find this case study to be informative, in part because it shows how a technology can completely disrupt the social script associated with a workplace activity.

The focus of Barley’s study is car sales in the US, specifically California. The traditional script — the one that long predated internet sales — is baked into the American popular consciousness. Barley argues that there are three ‘acts’ to this script. In the first act, the customer would arrive at a car dealership and start to look around. They would be greeted by a salesperson (all of the salespeople in Barley’s study were male). The salesperson would engage in lots of smalltalk, trying to build rapport with the customer, sometimes even lying in the process. As Barley puts it:


… if the salesman noted a car seat in the customer’s car, he would ask if the customer had a child and then inquire about its age. The salesman would then either profess to have a child of roughly the same age or reminisce about when his children were that age (sometimes even if he was childless) 
(Barley 2020, 56)


The goal of this first act was to ‘land’ a customer on a car and get them to agree to a test drive. Some customers would bow out at this point. If not, things would proceed to the second act: the test drive itself. This was a short act, typically lasting about 15 minutes, during which the salesperson would accompany the customer, point out all the features of the car, and answer any questions.

Upon return to the car dealership, the third act would begin. The customer would be invited to a back office to ‘complete the paperwork’. Again, some customers would bow out at this point. If not, the customer and the salesperson would haggle over the price of the vehicle. This act tended to be the most adversarial. The salesperson would insist there was a price below which they could not go. If the customer insisted on a lower price, the salesperson would sometimes leave the office to ‘consult’ with the sales manager. There was often an extended delay as a result, with the explicit goal of building suspense and anxiety for the customer. The salesperson would sometimes return with the manager, who would put additional pressure on the customer to purchase the car. Oftentimes, the salesperson would up the ante by suggesting that they could not guarantee the negotiated price beyond that day. The customer, for their part, could also engage in various negotiating tactics, threatening to take their business to another dealer or even disparaging the salesperson to their face. Overall, the tenor of these interactions could be quite unpleasant and tense:


In many cases, the interaction between the customers and salesmen became strained. It was not uncommon for one party to insult the other. Many negotiations, therefore, never reached an agreed-upon price and, hence, a deal. However, if a deal was struck, the atmosphere became less tense… 
(Barley 2020, 57)


What is noticeable about this traditional script is how formulaic it often was (standard talking points and negotiating tricks) and how negative it seemed to be from the customer’s perspective. Customers often saw salespeople as sleazy and dishonest. Many brought negotiating partners with them (family, friends) to counterbalance the onslaught from the dealers.

The internet changed this. By the early 2000s, most dealers had extensive web catalogues of the cars they sold, as well as back-office internet sales teams. An entirely new stage was set for the process of buying a car. Customers would first browse through the online catalogue, looking at various options, oftentimes armed with knowledge from other websites about makes and models. If they liked something, the website would encourage them to send an email enquiry, which would be followed up with a sales call from the dealership (many online sales processes follow this model). Once they did this, a new script, this time with two acts, would play out.

The first act took place entirely over the phone. The salesperson would talk to the customer about their preferred make and model and give them a price quotation (sometimes they would just leave voice messages that might or might not be followed up by the customer). The price quotation during this phase of the discussion was remarkably honest. The salesperson would tell the customer how much the dealer had paid for the vehicle and how much profit they wished to make on the sale. The purchase price quoted was, in Barley’s study, ‘always accurate’, and the profit was relatively minimal, often no more than a few hundred dollars per vehicle. If the customer disputed the price and suggested that another dealer was offering the same make and model at a cheaper price, the salesperson would do one of two things: (i) point out that the customer was mistaken (because the make and model were not in fact the same) or (ii) tell the customer to purchase the vehicle from the other dealer. There was never any haggling over price, and none of the standard negotiating tactics were used by the internet salespeople.

If the customer was still interested in the car, they would be invited to the dealership to look at it, take a test drive and, if they wished, 'complete the paperwork'. This phase of the interaction was often straightforward. Customers who showed up at the dealership typically wanted to make a purchase. If they changed their mind after seeing the vehicle or taking it for a test drive, they would leave amicably. Overall, the atmosphere of the interactions was much more pleasant and much less tense. Customers, indeed, seemed to prefer internet sales in Barley’s study, finding the internet salespeople less ‘pushy’.

Why did this happen? The internet changed the stage for the social interaction and hence required a new script. It equalised the power differential between the salespeople and the customers. Customers were given the power to start the process and could easily terminate it whenever they wished. Customers typically had more information at their fingertips (or at the end of an online search), and salespeople couldn’t get away with the same pressure tactics that they employed during in-person negotiations:


…Internet salesmen [could not] avail themselves of supporting actors to create pressure on the customer to buy. Instead, the Internet salesmen had to work entirely with information contained in databases. Under these conditions, it would be disadvantageous for a salesman to misrepresent the data, because doing so would eventually undermine the sale… the Internet pushed the salesman to be highly factual and to forgo the stance of a negotiator to sell vehicles successfully. 
(Barley 2020, 63)


To put it another way: the internet transformed car sales from a margin business, in which the goal was to maximise profit on each sale, to a volume business, in which the goal was to maximise the number of sales. The customer benefitted from this technologically-mediated disruption (at least in the dealerships that Barley studied).


3. Lessons for Moral Disruption

As should be obvious from the preceding description, the technological disruption caused by the internet to car sales changed social-moral beliefs, attitudes and behaviours. Car sales no longer depended on perceived dishonesty, hard bargaining and inequality of power. Instead, honesty and relative equality ruled the day.

This is a welcome form of moral disruption. The traditional process was unpleasant, possibly harmful from the customer’s perspective, and arguably corrosive to the virtue of the salespeople. No one, I think, would view the traditional salesperson as a paragon of virtue, at least in their professional life (they may have been wonderfully virtuous in other respects).

But this is just one case study. Are there any general lessons to be learned? Is Barley right to say that technology is most disruptive when it affects role relations? Could we take Barley’s explanatory framework, apply it to other contexts, and, perhaps, predict the possible direction of technologically-mediated moral disruption? Let me conclude by trying to answer some of these questions.

First, is it true that technology is most disruptive when it affects role relations and not simply tasks? I think this is true, at least to some extent. In previous writings, I have endorsed Michael Tomasello’s theory of the origins of human social morality. In brief, Tomasello (following the philosopher Stephen Darwall) argues that human social morality is characterised by a ‘second personal’ psychology. We don’t just view the world from our own perspective; we can switch perspective to that of other people with whom we interact. In Tomasello’s recounting, this second personal psychology is a role-based psychology. We see other people as occupying certain social roles and we expect them to behave in a manner that fits those roles. This generates concepts of duty and obligation — ‘If you occupy role X, then you ought to behave in manner Y’. If someone fails to live up to their role-related duties, then we develop reactive attitudes toward them. We get angry, upset, jealous, disappointed. This, in turn, can generate moral blame and condemnation.

If Tomasello’s theory is correct, then human social morality is a role-based morality. Our moral beliefs and attitudes centre on the roles that we and others perform. If those social roles get disrupted, and if the expected performances associated with them change, then it stands to reason that there will be greater disruption to social morality. This doesn’t mean that disruptions to role relations are the only thing that matters from a moral perspective, but they are one of the more significant forms of moral disruption.

That said, the concept of a role relation is a little fuzzy, and figuring out whether a technology disrupts role relations can be tricky. In obvious cases — like the internet’s disruption of car sales — there may be little disagreement, but other cases leave more room for dispute. Barley, for instance, insists that some technologies can change task performance without changing role relations, at least not in any significant way.

One of his go-to examples is the relationship between academics (professors, lecturers etc) and administrative assistants in universities (Barley 2020, 30). He points out that in the 1980s, administrative assistants used to type letters and documents for academics, in addition to performing many student-facing roles (answering queries etc). Nowadays, due to computerisation in the workplace, academics tend to do all their own typing and word processing. Administrative assistants have had to learn to work with new software programs to manage many of their day-to-day tasks, developing a new skills profile in the process. Yet, according to Barley, this has not had much of an impact on the relations between academics and administrative assistants:


... administrative assistants and faculty continue to have roughly the same relationship as they had in the past. There is no doubt about who has the greater status and who works for whom. 
(Barley 2020, 30)


This doesn’t ring true for me. I’ve worked in universities for over a decade now and have interacted with many administrative assistants, but I have never thought that I had a higher status than them or that they worked for me or on my behalf. I see us as involved in a common endeavour. Furthermore, while I do not depend on them for most of my day-to-day tasks, administrative assistants provide essential background support for the smooth functioning of the department in which I work. I don’t wish to learn how to use all the complicated web-based apps for managing finances and timetabling; they must do so as part of their jobs. As a result, if anything, I would suggest that the status of administrative assistants has grown.

But this comment from Barley may reveal an assumption that underlies some of his research, namely that role relations are primarily power relations and that ‘significant’ disruptions to them involve some change in the balance of power (this is based on my reading of two of his case studies; I have not read them all). I don’t see things the same way. Technology can also result in significant moral disruption if it changes what people expect of one another. It seems to me that this has clearly happened in the case of the relationship between academics and administrative assistants. I don’t think they have a standing obligation or duty to do my typing; it would be insulting if I asked them to do so. I can, however, expect them to help with timetabling and room bookings, since they have the skills to manage the online platforms for these services. So, even if the power differential hasn’t changed, the moral expectations have.

Could we take Barley’s framework and apply it to other contexts? Of course we could. He and his colleagues have done so on several occasions. Other interesting applications (beyond the workplace) might include how technology has disrupted the relationship between politicians and constituents. Instead of relying on door-to-door canvassing and in-person clinics, politicians increasingly rely on social media broadcasts and web-based interactions. It seems obvious that this has changed the content and civility of those interactions to some extent. Likewise, the impact of technology on various human relationships (friendships and intimate relationships) is something I have considered in my own research. Technology can completely change the social script when it comes to those relationships. For instance, it can change how we find friends (online first instead of in person first), how we interact with them (Zoom calls, texts and messaging groups instead of in-person meetups), and even who our friends might be (long-distance friends, machine ‘friends’). I don’t believe that these technological changes to human relationships are necessarily good or bad, but it seems to me that they are quite disruptive of the previous social scripts.

Can we use this framework to predict the course of future moral disruptions? This is challenging. It seems unlikely that we could make precise predictions about future changes. A lot will depend on (a) the existing social script and role relations and (b) the nature of the technological disruption. Still, we might be able to predict some general patterns. If we go back to the power question, some technological disruptions can have an equalising effect by removing advantages that one role has over another. Contrariwise, some disruptions may reinforce and compound existing inequalities. It is possible that we could predict these changes by carefully mapping the existing power relationships and the likely effect of certain technologies on the existing power differentials.


4. Conclusion

This brings me to the end of this article. To briefly recap, I have been looking at Stephen Barley’s explanatory framework for understanding how technology can lead to disruptive social change. Barley’s framework focuses on social scripts and social roles. His claim is that technology is at its most disruptive when it changes the social script and hence how different roles relate to one another. Although he applies this framework to the impact of technology on the workplace, I have argued that it can apply to the impact of technology on social morality. Why? Because social morality is, in large part, dependent on a role-based moral psychology. If we disrupt the roles, we disrupt our expectations of what we owe one another.


Monday, May 17, 2021

What Matters for Moral Status: Behavioural or Cognitive Equivalence?


Here's a new paper. This one is forthcoming in the July issue of the Cambridge Quarterly of Healthcare Ethics. It's part of a special edition dedicated to the topic of other minds. The paper deals with the standards for determining whether an artificial being has moral status. Contrary to Henry Shevlin, I argue that behavioural equivalence matters more than cognitive equivalence. This paper gives me the opportunity to refine some of my previously expressed thoughts on 'ethical behaviourism' and to reply to some recent criticisms of that view. You can access a preprint copy at the links below.


Title: What Matters for Moral Status: Behavioural or Cognitive Equivalence?

Links: Official (to be added); Philpapers; Researchgate; Academia

Abstract: Henry Shevlin’s paper—“How could we know when a robot was a moral patient?”—argues that we should recognize robots and artificial intelligence (AI) as psychological moral patients if they are cognitively equivalent to other beings that we already recognize as psychological moral patients (i.e., humans and, at least some, animals). In defending this cognitive equivalence strategy, Shevlin draws inspiration from the “behavioral equivalence” strategy that I have defended in previous work but argues that it is flawed in crucial respects. Unfortunately—and I guess this is hardly surprising—I cannot bring myself to agree that the cognitive equivalence strategy is the superior one. In this article, I try to explain why in three steps. First, I clarify the nature of the question that I take both myself and Shevlin to be answering. Second, I clear up some potential confusions about the behavioral equivalence strategy, addressing some other recent criticisms of it. Third, I will explain why I still favor the behavioral equivalence strategy over the cognitive equivalence one.