There is a famous Seinfeld joke about public speaking. It's based on an old opinion poll result that reported that people fear public speaking more than death. Seinfeld used this to make the wry observation that the next time you are at a funeral you should reflect on the fact that the person giving the eulogy would rather be in the coffin.
Suffice to say, I don't feel that way about public speaking. I have many social anxieties but speaking in front of a large (or small) audience is not one of them.1 That's not to say I'm any good at it, of course. But I have at least done a lot of it and grown accustomed to its rhythms and its demands. Furthermore, I have learned from the mistakes that I have made over the years, so that even if I'm not particularly good at it, I am at least better than I used to be.
This is all by way of justifying what you are about to read. I get asked quite often for advice on giving talks (by students) and I am frustrated that I have still not got around to formalising my thoughts on the matter. What follows is my first attempt to do so. If you are in a hurry and are just interested in reading my 'tips' on how to give a talk, then you can find them summarised in the poster that accompanies the text. If you have more time, and are willing to tolerate the occasional diversion, then I hope you will read the full thing, because I'm not just going to explain the methods I follow when giving talks; I'm also going to reflect on the things I love and hate about the process, rant a little about academic conferences, and consider the larger purpose and philosophy behind the practice of giving talks.
As always, what follows is my own take on things. I am not claiming that the things I find useful when giving talks will be useful to others, or that I have undertaken a detailed survey of the evidence concerning what works and what doesn't. I'm just distilling the lessons I have learned from my own experiences. This means, inevitably, that my reflections are geared toward giving academic-style presentations. I have some experience giving other kinds of talks too, so I hope what I say is of more general interest. I'll include links to examples of talks I have given along the way and I will also include several as an appendix at the end.
For ease of analysis, I am going to structure the discussion around a timeline that corresponds to the major steps involved in preparing and delivering a talk. The timeline is illustrated below along with the 'tips' that correspond to each step in the timeline. As you can see, it starts at the point in time at which you accept an invite to give a presentation, then proceeds through to preparation, delivery and follow-up. The preparation step is, proportionally, the longest and this is because I think it is the most important.
1. The Invite or Acceptance
The journey to a talk starts when you accept an invite to give one. This will either be because you have deliberately sought an invitation (e.g. by submitting a paper to a conference) or because someone contacts you out of the blue asking you if you would be interested in giving one. The former is common early in an academic career; the latter common later in a career.
When I was young and eager to establish myself, I accepted all invitations to give talks without hesitation (assuming I could afford the travel or someone else was going to pay for it). Nowadays, I take a bit more time reflecting on whether it is something I really want to do. There are several reasons for this. The most obvious is that preparing a talk takes a lot of time (or, at least, it should take a lot of time) and I need to figure out whether I have that time to spare. But that's only part of the picture. There are other, less practical and more existential, reasons that loom larger for me now.
I have developed quite a cynical attitude toward academic conferences and gatherings over the years. Academic conferences are strange affairs. They are made up of hordes of earnest scholars gathering together in brightly-lit meeting rooms and poorly-catered conference suites, to speak at each other in 10-20 minute timeslots. Most of the talks are poorly attended and poorly delivered. The speakers assume that their audiences are interested in what they are saying. The attendees repay this assumption by appearing bored and listless, busily scrolling through their phones or checking email from their real jobs back home.
Having attended dozens of these events over the years, I have turned into something of a 'conference nihilist', at least when it comes to the talks delivered at them (I'll say more about the social aspects of conferences later on). I think conference talks generate a lot of sound and fury but ultimately signify nothing. I see them as a holdover from a bygone era. At one point in time, attending conferences and listening to papers may have been the only way to 'keep in touch' with what was happening in your field. It also may have been the only way to contribute and get attention for the work that you do. I read nowadays of the lore surrounding the Solvay conferences on quantum physics in the early part of the 20th century and they sound like exciting affairs. Groundbreaking work was presented and debated, and the frontier of human knowledge was expanded.
I have never attended a conference like that. It seems clear to me that attending conferences is no longer essential to academic work. I can access more working papers and preprints than I have time to read at the click of a button, and I can interact with and solicit feedback from academics all over the world from the comfort of my home. Indeed, the experience of reading, writing and deliberating over ideas from the comfort of my home is usually (though not always) superior to the experience I get at a conference. So I have really started to question the value of attending and participating.
My commitment to conference nihilism tends to vary depending on the size of the event. Very large conferences tend to generate the most profound sense of nihilism. I'm talking here about conferences with hundreds (maybe even thousands) of attendees where your talk takes place in one of half a dozen parallel streams. At such an event, your contribution will feel like a small drop in a large ocean: you'll be lucky if anyone notices a ripple. Smaller events generate less nihilistic feelings. My sweet spot is the 'workshop' with 15-20 participants, each of whom is given a decent amount of time to talk, and all of whom are curious and interested in what the others have to say. But sometimes those events are lacklustre too because they are poorly organised and poorly run. An event where I am the sole speaker (e.g. a guest seminar or lecture) can seem quite attractive and less nihilistic on paper, but my experience of these is mixed too. Guest seminars and guest lectures are often poorly attended (maybe it's just me?) and, having organised a few myself, I know that there is sometimes a desperate, last-minute attempt to get 'bodies in the room'. This means attendees are less engaged and interested than you initially suspect and the talk can generate less useful discussion as a result.
All of this might make it sound as though I hate giving talks and blame others for the nihilistic nature of academic conferences - as though the problems all stem from the organisation, format and attendees, and not from the speaker and their inability to say anything interesting or valuable. That's not the case. As you'll see below, I do think you can enjoy the process of giving talks, and I do think the speaker has a heavy burden to discharge: they have to try to make their talk as good as possible. My point here is simply that before you accept an invitation to give a talk you have to know what you are getting yourself into. You have to realise that most talks, at most events, are relatively pointless. You have to embrace that pointlessness.
In this sense, conference nihilism can be quite liberating. Once you acknowledge that most conferences are nihilistic affairs, you are freed from the ordinary expectations and obligations associated with attending and delivering talks at them. You are free to shape your own conference destiny, at least to some extent, by being a little more picky and selective in what you are doing.
In this respect, I have three 'tips' when it comes to accepting invitations to give talks:
Don't over-leverage yourself: Don't accept too many invitations to give too many talks. Only agree to do as many as you feel able to do to the best of your ability. This is a lesson I have learned the hard way. Realising that most talks I give won't change the intellectual landscape, or be life-changing or career-shaping, gives me the courage to be more selective.
Limit expectations: Don't expect too much from the process or experience of giving a talk. Don't be surprised if no one attends, or seems to care about what you have said. Make sure you are comfortable with that possibility before accepting.
Focus on the process not the outcomes: Before accepting ask yourself whether you will enjoy the process of preparing and delivering the talk. Is it going to be on a topic that you want to talk about? Will you enjoy the challenge of preparing and refining the talk? If so, do it. If not, and if you see the talk as a stepping stone to future success, maybe reconsider.
There are other reasons to accept an invitation too. Sometimes I accept invitations because it allows me to visit a place I have never visited, or meet people I would like to meet, but these reasons have become less compelling as I have aged. I find that the obligation of preparing and delivering a talk tends to suck energy away from any wider enjoyment of the trip or the destination, and if I'm at an event with other participants and speakers, I feel an obligation to attend those talks too (more on the reason why a bit later on). Finally, when you travel a lot for specific events, it all tends to get a bit monotonous. You see the world through airports and hotel rooms. Sometimes these are nice places, but they can be a bit same-y.
2. Preparing a Talk
The common cliché is that preparation is paramount. I try to avoid clichés, but when it comes to giving talks this is one that I wholeheartedly endorse. Most talks I attend (and give!) are bad. I can't say for sure why this is the case, but my guess is that the majority of the time the problem is that the speaker hasn't prepared properly. They haven't thought about the audience and their expectations; they haven't rehearsed and refined what they want to say; they haven't given due consideration to the time constraints of the talk; and they haven't put a proper structure on what it is they want to say.
I understand why this is. Proper preparation takes a lot of time and, given the low stakes of most talks, it's hard to justify that temporal investment. Other deadlines intervene and, before you know it, it is the night before your talk and you are frantically pulling together some slides and jotting down some bullet points so that you will have enough content to fill your time-slot.
I've been there.
The problem is that this under-investment of time and frantic last-minute preparation just feeds the cycle of nihilism: you don't expect much from your talk, so you don't put much effort into it and, sure enough, your talk is a flop and this confirms your worst suspicions about the process. This is another reason not to over-leverage yourself and commit to giving too many; and another reason to only agree to give talks when you are willing to invest the time and effort required to make the talk as good as possible.
There is an odd paradox to this. I am aware of it. Once I embraced conference nihilism, I found that I was able to take the process of preparing and giving talks seriously once again. This was because I was free to reject invitations that I might otherwise have accepted out of some sense of professional obligation or personal ambition, and free to focus on accepting the few to which I was willing to dedicate myself. This has enabled me to enjoy the preparation process once again, to see it as something that can be intrinsically rewarding and fascinating, not just an unwelcome chore. The net result seems to be that I live the opposite of conference nihilism, while still being committed to conference nihilism in the abstract. I am happy to live with that paradox.
But what of the preparation process itself? Through trial and error, I have hit upon the following method that I try to follow when giving a talk. I don't always succeed, but when I do, I find that the end result is better.
Write it out and learn your speed limits
The first thing I like to do is write out the content of my talk in full. I don't aim for perfection. I try to come up with a rough first draft that I will subsequently refine. I do this for two reasons. The first, and most important, is that it allows me to control the length of my talk. Over the years, I have learned how many words I can say in a given period of time. I find this to be a powerful tool when preparing talks. For example, I know that if I have to give a ten minute talk, I will need to produce approximately 1200 words of text; if I have to give a twenty minute talk, I will need to produce about 2500; and so on. Writing it out gives me a clear sense of whether I am within those limits and whether something needs to be cut out or included to make it work (this is what I mean by learning your speed limits).
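This 'speed limit' arithmetic can be sketched in a few lines of code. The pace below (~125 words per minute) is my extrapolation from the figures above (1200 words for 10 minutes, 2500 for 20); it is not a universal constant, and the `word_budget` helper is just an illustrative name. Measure your own pace and substitute it in.

```python
# A rough word-budget calculator for a scripted talk.
# Assumption: a speaking pace of ~125 words per minute, derived from
# the figures in the text (1200 words / 10 min, 2500 words / 20 min).
WORDS_PER_MINUTE = 125  # replace with your own measured pace


def word_budget(minutes: float, wpm: float = WORDS_PER_MINUTE) -> int:
    """Return the approximate number of words a talk script should contain."""
    return round(minutes * wpm)


if __name__ == "__main__":
    for mins in (10, 20, 30):
        print(f"{mins}-minute talk -> roughly {word_budget(mins)} words")
```

Once you know your draft's word count, comparing it against this budget tells you immediately whether you need to cut material or can afford to add more.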
The other reason why I write it out is that it helps me to remember the content of my talk. I very rarely 'read' a talk, though sometimes I do refer to notes. Indeed, I find talks that are read out to be quite dull (even if some people can do it very well). This was one reason why I resisted writing anything out for years. I thought talks should be more casual, spontaneous monologues, and I worried that writing them out would make me a slave to a script. But I now realise that this isn't true. If you have a written script, and you learn it off and rehearse it, you can still be quite natural in your delivery and include some spontaneous ad libs and remarks. You can, however, do this safe in the knowledge that you know what you want to say and how long you have to say it.
Build an Enticing and Transparent Structure
When writing out the talk, I try to ensure that it has an enticing and transparent structure. In other words, I try to ensure that it says something that the audience might want to hear and that it is clear about its aims and objectives. I appreciate that this is a very generic 'tip', but it is hard to be more specific since the content of a good talk is highly variable. A dense, data-rich talk might go down well at a scientific conference, but not so much at a meeting of local politicians. My sense is that you should try to meet your audience's expectations as much as possible, but not at the expense of sacrificing your own values and competencies. So, for example, you might think it is important for the local politicians to hear the data-rich talk. That's fine. You just have to do more work to make them willing to hear it.
From my own perspective, there are three things I try to do when structuring a talk:
I try to build rapport and/or intrigue at the outset. In other words, I try to lead with an interesting story or example that sets up the problem I am going to discuss in the remainder of the talk. I also then try to explain where I want to bring the audience by the end of the talk. What proposition or thesis am I going to defend? Do I expect them to agree with it? Are they likely to be resistant to what I say? What assumptions will I make that they might not share? I see talks as an attempt to bridge the gap between different minds. My working hypothesis is that the gap between my mind and the minds of others is quite large and so I have to do a lot of work to bridge it. I have adopted this working hypothesis based on my own experiences when listening to other people talk. I find they assume that I know more than I do about the topic they are talking about, that I will find it just as interesting as they do, and that we share similar methodological or theoretical assumptions. I try not to make those assumptions (though we all have our blindspots).
I try to include 'memorable moments' within the structure of the talk. These memorable moments could be interesting stories, thought experiments, visuals, statistics and so on. I like to pepper these throughout the talk and have at least one towards the start and one towards the end. How many I have in the middle depends on the length of the talk. The basic rationale behind this is that including such moments draws in the attention of those whose minds may have wandered away from the talk. Trying to have some audience participation can be a good way of doing this too. But I'm often too cowardly to do this.
I try to be provocative/interesting and not comprehensive. My academic instinct is to be comprehensive. When I'm writing something, I want to address every objection I can think of, to identify all the gaps in what I am saying, and acknowledge all the complexity and nuance. The problem is that I cannot do that in a talk, particularly a short talk of ten to fifteen minutes. I have to resist the urge to hedge every argument and highlight every nuance. I have to get the core idea across and I have to make that core idea (at least somewhat) provocative and interesting. This is because I want the audience to engage with it. If they have objections, great -- we can discuss them in the Q&A -- but I have to get them excited enough to even bother raising those objections. That said, I will admit that this is a bit of a balancing act. You don't want to be needlessly provocative and you don't want to come across as being naive or cavalier about the complexities of the issues you are talking about. So it is a judgment call, but my judgment is that academics tend to lean too far in favour of hedging and complexity when giving talks. This means they never get to the interesting idea within their allotted time.
Remember Less is More, Particularly with Visuals
A good heuristic for preparing a talk is to cut about a third from your initial draft. At least, that's always been a good heuristic for me. I try to stuff too much into my talks. This might be okay if I stuck to the script but I like to ad lib and wander when it seems appropriate to do so. This usually results in a rushed presentation and I end up sacrificing some of the interesting ideas I wanted to include anyway. I find it's better to nip this problem in the bud by murdering some of my darlings before I finalise the script, even if this is hard to do.
I've found that preparing for formal debates has been a good training ground for this. I'm not a huge fan of the debate format, but I've participated in a few (you can see one of my debate contributions on YouTube, here, listen to the audio of one here, and read another here), and the one thing they do have going for them is that they force you to condense your key arguments and ideas into a short timeframe. You typically get 10 minutes in a formal debate (sometimes more; sometimes less). If you want to present a robust argument for a proposition in that space of time, you have to cut out a lot of the fluff and nuance. I find this strict time limit to be creatively liberating as it counteracts my natural tendency to prolixity (he says 3,500 words into this article).
Less is more is definitely true when it comes to visual accompaniments to talks. I know it's a cliché to talk about 'death by powerpoint' but it amazes me that people still produce horrendous powerpoint presentations to accompany their talks. You know the type: densely packed slides, with 12-point font that the speaker proceeds to read out with their back facing the audience. I've always tried to avoid that, but I have gone through different phases on how best to design a powerpoint.
For years, I adopted a powerpoint style that was similar to the one Lawrence Lessig employed (the so-called 'Lessig Method'). You can see what this is like in this video. The classic Lessig method is to have lots of slides, often consisting of single words or sentences, that serve as visual emphasis to the spoken words. The slides and the speaker thus perfectly complement one another, like different instruments in an orchestra. I never quite approached the staccato-esque style of Lessig, but I aimed for something similar.
I now look on this as a mistake. I now think that you shouldn't use slides or visuals if they don't genuinely complement and accentuate what you are saying. This belief is born out of my experiences at conferences where the speakers have learned the 'less is more' lesson but have taken it too far. For example, I was recently at a conference where one speaker produced a slide deck that contained four slides, each of which consisted solely of a numbered heading for the sections of their talk, and another speaker produced a slide deck that consisted solely of photographs that served as obscure (and often unexplained) visual metaphors for what they were saying. I went away with the sense that both talks would have been better if the speakers had dropped the slides altogether and made themselves the focus of attention.
Nowadays, when I produce slides to accompany a talk, I try to have just a few images that help to explain the key concepts or ideas and I cut out all the other fluff. I've embedded two example slide decks from talks I have given in the past 12 months that hopefully illustrate my approach. The first is from a talk I gave about robotics and discrimination and the second is from a talk about algorithmic domination (written versions of both talks were previously published as blog posts here and here). Both talks were about 25-30 minutes in length.
I should also add that I am a big fan of using handouts to accompany a talk. Indeed, early in my academic career I used to produce elaborate handouts to accompany every talk. I got out of the habit over the years because it was so labour-intensive, but I recently started to get back into it. So, for example, for the talk on algorithmic domination that I just mentioned, I produced a 2-page handout that summarised the key arguments. You can download it here (the blank spaces on the back page with the objections are there by design: the idea is that audience members can add their own objections and take note of my replies).
Rehearse and Refine
Once I have written out my talk, and prepared some accompanying visuals, I start to rehearse it and refine it. I do this in iterative phases. I'll start just by reading it out loud a few times to get a sense of where the 'beats' and points of emphasis need to be. Doing this will help me to identify clunky or awkward turns of phrase. It will also help to confirm how long the talk really takes. Oftentimes, I will do a reading first, before creating visuals. Remember, I want the visuals to complement what I actually say, and don't want to create visuals for elements of the talk that I may end up omitting (editing before creating visuals helps to avoid the problem of attachment).
Once I've read through the talk a good few times, I'll see if I can deliver it without reading it. I'll do this repeatedly until I seem to have it memorised. I'll then record myself and listen back. Listening back is useful because it allows me to experience the talk as a listener and this helps with further edits and refinements.
Ideally, I would do all this over an extended period of time. In other words, I would leave some space between the rehearsals and the listening back just to make sure I refine it to the best of my ability and have it thoroughly learned off. But I have to be honest and say that I rarely end up meeting this ideal. I usually only rehearse the day before the talk because, despite all my best efforts, I'm invariably still time-constrained when preparing, and everything tends to come down to the wire.
Still, I want to emphasise something: I think rehearsing and refining the talk is the single most important thing you can do to make your talk better. You might feel awkward reading aloud or rehearsing in your hotel room (or wherever) but doing this is, in my experience, the best way to prepare. It's only by performing the talk that you really get a feel for whether it works. And I use the term 'performing' deliberately. It's not enough to just memorise the words. It's about genuinely performing what you are saying, multiple times.
Despite the fact that I think this is the single most important thing you can do, my general impression is that very few people do it. I know this partly because I suggest it every year to students who have to prepare presentations for my classes and when I ask them if they did it, most say they did not. I also have a sneaking suspicion that the majority of the time when I attend talks, the speakers are saying the words for the first time and surprising themselves in the process.
3. Delivering the Talk
If you have done a thorough job preparing the talk, the actual delivery should be a doddle. This will be particularly true if you have rehearsed it several times. Like a trained musician, you will just slip into autopilot and become absorbed by your performance. You won't need to worry about nerves or distractions because you will be so well drilled.
But, of course, it's never that easy. For one thing, even with the best will in the world, you are unlikely to do a thorough job in the preparation phase. You'll be stressed and probably nervous when it comes to delivery. How can you ensure that things go smoothly? I have no panaceas. My personal experience tells me that things rarely go smoothly. In fact, I cannot honestly say that I have ever given a talk where things went completely smoothly. Still, there are some things to bear in mind to mitigate the damage.
First, regarding nerves, the only thing I can say is that it gets easier with experience. I used to get quite nervous when giving talks; I no longer do. The sheer volume of public speaking I have to do as an academic (the hundreds of lectures and classes per year, plus other occasional talks) is an antidote to that. This doesn't mean that I'm nonchalant and completely relaxed. I'm pretty sure I'm in a state of mild anxiety (or physiological stress) whenever I give a talk, but this is a catalyst rather than an inhibitor.
Of course, that won't be very reassuring to someone who is experiencing a lot of nerves and doesn't have much experience. Beyond medication, the one bit of advice I remember from my early days, that I found helpful, was this: you are a natural talker. You talk to people every day and you do so quite comfortably. If you approach the talk with the same attitude, you should be fine. Easier said than done, perhaps, but I found it to be a useful mental re-framing.
Second, don't forget the importance of stage presence. How you physically occupy the space from which you give your talk is nearly as important as what you say. If you aren't relying heavily on slides and visuals, then the audience's attention will be primarily on you, so you have to make sure you are comfortable to look at. If you are a concatenation of nervous tics and awkward habits, then the audience will disengage from what you are saying. They will look away, probably reaching for their phones for some relief. I tend to think that standing in one place, planting your feet, with some occasional movement or walking, is the best option. Hand gestures are good, but try not to have too many.
This is, again, one of those things that gets easier with repetition. I have to admit that I am pretty awful when it comes to nervous habits and stage presence. I'm often hunched and awkward when speaking. I breathe too heavily. I say 'kind of' too many times. I gesture wildly and sometimes do a Donald Trump-esque air pinch. I also have a tendency to pace back-and-forth from one side of the stage to the other (or, worse, rock back-and-forth on one foot), much to the annoyance of everyone. I never noticed any of this until someone pointed it out to me. Now I'm more self conscious and try to wean myself off these bad habits.
Third, you should commit to what you are saying, no matter how large or small the audience is. When you arrive at the venue and you are told that 2 people (or 200) have shown up, you might start to second-guess yourself. You might think that the joke you have used in your opening won't go down well with only two people; or you might want to leave out the more controversial stuff for the larger audience. You should try to nix those thoughts. Trust in your preparation. You have put a lot of thought into the content and you should commit to what you are saying. There is nothing worse from an audience's perspective than someone who nervously qualifies or apologises for everything they say. This doesn't mean you should be arrogant or aggressive. It just means you shouldn't doubt yourself at the point of delivery.
Finally, stick to the time limit. The biggest complaint I hear at academic events (and at talks in general) is that (a) people speak for far too long and (b) the moderators don't stop them from doing this. This leads to conference chaos and lots of ill will. If you are on a panel with other speakers, then don't cut into their time with your own ramblings. If you are speaking on your own, then resist the temptation to exceed your pre-agreed time limit. Contrary to what you might think, you are not holding the audience captive. We all slip up in this regard, but sticking within the time limits should be seen as the primary ethical duty of the speaker.
I don't want to go on a long rant about this, but the inability to keep time is one of the things that frustrates me most about talks and conferences. I was once at a conference that started at 8:30 am and, at 11:30 am, we were told by the organisers that we were running over 40 mins behind schedule and were asked (told!) that the lunch break would be reduced to make up for this. There is no need for this. If everyone prepared properly, they would know exactly how long their talk was going to be, and they would rarely, if ever, run over time. If everyone did this, the (conference) world would be a better place.
4. The Aftermath
What happens when it is all over? If you are lucky, people will want to ask questions or chat to you about what you have said. This is a good thing. It means they were engaged. Even if they seem annoyed or they disagree with you, you should view this as a compliment. The worst thing is to finish a talk and be greeted by a sea of soporific faces, all of them eager to get out of the room in which you have imprisoned them for the last half hour. So I think politeness is key. You should always thank people for their questions and engage with them in good faith. Obviously there are limits to this and sometimes you may feel physically threatened by a questioner, but outside of those extremes I believe the default mode should be politeness, not aggression or sneering. I'm not always good at doing this, but again it's an ideal towards which I strive.
Politeness extends to your participation and attendance at other talks too. My view is that if you are invited to speak at a conference, and if there are other speakers and sessions, you should try to attend and participate in those other sessions. If you expect people to show up and listen to you, why shouldn't you return the favour? Surprisingly enough, there are many academics who don't do this. This is particularly true of senior academics. They get invited to give a keynote at a conference and then quickly leave when their session is done -- no doubt in a rush to catch the next flight to the next conference. I think this is a bad faith gesture2. I think such individuals have an obligation to hang around for a bit longer and engage with others. If they cannot do this, then they shouldn't accept the invitation.
In this respect, I always remember fondly a conference I attended where John Gardner (then the Professor of Jurisprudence at Oxford) was the keynote speaker. The conference was for postgraduate students and, with the sole exception of his keynote, all the other speakers were either PhD or master's students. Nevertheless, Gardner hung around all day, spoke to several students at lunch about their research, and attended panel sessions in the afternoon. It says something about the nature of academic conferences that this event sticks out in my memory.
Finally, what about the long-term follow-up? Should you review your past talks and try to figure out what went wrong and how to improve in the future? Yes, I think you should. If a talk I have given has been recorded, I like to listen back or watch it and take notes on what I think worked and what didn't. I don't like to overdo this, but I think it is a useful exercise. I will also sometimes write out some reflective notes for talks that weren't recorded, usually the day after I gave them.
One problem I do, however, have with this long-term review and follow-up is that, with one or perhaps two limited exceptions, I have never given the same talk twice. I don't mean this in the trivial sense that every talk is somewhat different. I mean I have never delivered a paper or talk that had the same title or was on the same core topic/argument. This makes it hard for me to learn something specific for a future presentation. It's not like I have an 'act' that I am constantly refining. Recently, I've been reading about stand-up comics and learning about how they develop and hone their acts. They do this by developing, refining and then rehearsing their material into an hour (or half-hour) act that they can repeat over and over again. I wish I could do that. Maybe I will in the future. But right now, I don't seem to be built for that. I always want to move on to something new.
Symbols and their Consequences in the Sex Robot Debate (TEDx talk) - Video; Script; Slides
Are we ready for robot relationships? Debate - Video (from 22:11)
The Algorithmic Self in Love - Video (not one of my better efforts: ran a few minutes over time)
Exploitation, Commodification and Harm: Navigating the Ethical Debate about Commercial Surrogacy - Video
Artificial Intelligence and the Constitutions of the Future - Script; Slides
Technological Unemployment and the Search for Meaning - Script; Slides
Slaves to the Machine: Understanding the Paradox of Transhumanism - Script
I'm sure I know the reason for this: When speaking in front of an audience, I get to control what happens; when conversing with someone I don't know, I lose that sense of control. In other words, public speaking appeals to my inner narcissist and control freak. ↩︎
And one that I was recently guilty of committing. So I am, as I have noted before, a hypocrite. In my defence, I offered to pull out of the relevant conference when I knew this would be a problem, and agreed it with the organisers. I also tried my best to participate in other sessions while I was there. ↩︎
In this episode I talk to Roman Yampolskiy. Roman is a Tenured Associate Professor in the department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. He is the founding and current director of the Cyber Security Lab and an author of many books and papers on AI security and ethics, including Artificial Superintelligence: a Futuristic Approach. We talk about how you might test for machine consciousness and the first steps towards a science of AI welfare.
[This is a slightly expanded version of a talk I gave at the SIENNA workshop on the ethics of human enhancement in Uppsala, Sweden on the 13th June 2019. The talk was intended to be a provocation rather than a comprehensively reasoned argument.]
I've been asked to say a few words about the challenges that emerging enhancement technologies might pose for how we define human nature (with a nod towards how this might also interact with the 'dual use' nature of technology). I didn't say this to the organisers when they asked me, but this is a difficult topic for me to talk about. That's because I am a sceptic of human nature. I tend to agree with Allen Buchanan (2009; 2011) that discussions of 'human nature' in the enhancement debate tend to obscure more than they clarify. This is because the term 'human nature' usually functions as a proxy for something else that people care about. My feeling is that people should talk about that something else instead, and not about human nature.
That said, I'm clearly in a minority in taking this sceptical view. People are hungry for discussions of human nature. The library shelves groan under the weight of scholarly volumes dedicated to the topic. Just to illustrate, there was a book I read many years ago as a student by Leslie Stevenson called Seven Theories of Human Nature. It was first published in 1987. In 2017, they published the seventh edition of the book, now titled Thirteen Theories of Human Nature; apparently the number of theories of human nature had doubled in the intervening 30 years. At that rate of growth, the number of theories of human nature will exceed the total number of humans in just over 900 years. Clearly people are obsessed with this topic.
What is it that obsesses them? Obviously, I can't do justice to the diversity of thinking on this matter -- I'm just setting up a conversation -- but I can at least help to structure that conversation by considering three senses in which people use the term 'human nature' and by explaining what I find problematic and interesting about them.
The first sense is as a descriptive-explanatory theory, i.e. as a theory that describes some fundamental truth(s) about what it is to be a human being. The classic descriptive theories of human nature are essentialist in nature. They try to identify the characteristics that are both necessary and sufficient for belonging to the kind 'human being'. They usually do this by engaging in human exceptionalism: i.e. by focusing on characteristics that distinguish members of humankind from other animals. Typical examples of such characteristics include things like the capacity for self-consciousness, altruism, language, laughter, art, complex tool use and so on.
These essentialist theories are scientifically dubious. In this regard I find myself swayed by an old argument by the philosopher David Hull to the effect that modern evolutionary biology undermines essentialist theories of human nature. This is because modern evolutionary biology endorses the view that the world is filled with genetically varying individuals that occasionally form stable reproductive populations that we call 'species', but these 'species' are temporary and, at least in part, linguistic facts. As he put it:
[I]t is simply not true that all organisms that belong to Homo Sapiens as a biological species are essentially the same… periodically a biological species might be characterised by one or more characters which are both universally distributed among and limited to the organisms belonging to that species, but such states of affairs are temporary, contingent and relatively rare.
Even if you don't buy that argument, there are two other fatal flaws with the essentialist theory. Whatever characteristic you pick as being distinctive of humans (self-consciousness, altruism etc) you can (a) find animals that share primitive or proto-versions of those traits (with perhaps the exception of true language) and, more importantly, (b) find individuals (or groups of individuals) that we would like to call 'human' that lack them, either due to disability or disease or some other factor.
These problems with the essentialist theory have led some scientists and philosophers to endorse non-essentialist theories of human nature. These theories do not pretend to identify distinctively human characteristics but, rather, try to identify characteristics that tend (statistically) to be shared by humans in virtue of their evolutionary and developmental origins. Edouard Machery, for example, has defended a 'nomological' theory of human nature that focuses on traits that have their origins in our shared evolutionary history. Similarly, Michael Tomasello, in his recent trilogy A Natural History of Human Thinking, A Natural History of Human Morality, and Becoming Human: A Theory of Ontogeny, has defended a theory of human nature that focuses on characteristics that emerge from our shared evolution and ontogenetic development (although I find that Tomasello leans too far in favour of the human exceptionalism that is typical of essentialist theories of human nature). Related to this, it is also worth noting that some people argue that we should move away from theories of human nature that expect it to be a stable and unique 'thing' and should, instead, favour theories that view it as a 'process'. This is because an individual human being is not a stable thing but is, rather, a process that develops and changes over time (proponents of this view include John Dupré and Paul Griffiths).
These non-essentialist theories strike me as being much more plausible, but after reading about them I tend to wonder how useful they are, even as scientific theories. The problem is that they all tend to allow for a lot of individual and cultural variation in the traits that are supposed to define our natures. Furthermore, I often get the sense that their proponents pick and choose characteristics that they think are important and interesting and use those to define what it means to be human. In this sense, I worry that proponents of these theories are like dog breeders who measure each individual dog relative to an 'ideal breed type', which, as best I can tell, is an arbitrary construction. In other words, just as there is no ideal dalmatian or poodle, so too is there no ideal human. The problem is that even these non-essentialist theories of human nature tend to assume that there is.
This brings me to the second sense in which people use the term 'human nature', namely: as a normative theory of what is good/bad (and permissible/impermissible) for 'creatures like us'. This normative approach to human nature is probably the approach that we are most interested in here today. We are all presumably familiar with the way in which normative theories of human nature get weaponised in debates about the ethics of enhancement. Some people claim that enhancement is against human nature and so ought to be stopped; some people claim that it is expressing our most human traits and so ought to be celebrated. Neither side persuades the other.
Normative theories of human nature could be thought of as being entirely distinct from descriptive theories of human nature. If they were, I would probably find them unobjectionable, but that's only because they would then be indistinguishable from theories of human well-being and flourishing (which, though contested, do provide some genuine normative guidance with respect to enhancement). The problem is that many people try to ground their normative theories of human nature in descriptive theories, presumably to give them some extra normative 'oomph'. Suffice to say, I find this practice highly dubious because I find those grounding theories highly dubious. There is the same 'pick-and-mix' mentality at play: people select characteristics they happen to like and then reify them into this descriptive-normative theory of what it is to flourish as a human being.
A clear example of this mentality in action, at least based on my reading, is the theory of human nature that the conservative philosopher Roger Scruton puts forward in his book On Human Nature (Princeton University Press, 2017). Scruton, who has always been a controversial figure, is much-maligned recently due to his apparent sympathy for the right-wing governments in Poland and Hungary and the fact that he has been favoured by both governments. I mention this not to poison the well but because I expect people reading this would find it odd if I didn't make some allusion to this ongoing controversy. Anyway, in the book Scruton argues against reductionist/scientific theories of human nature and in favour of an emergentist/Kantian theory. Roughly, he claims that what is distinctive about humanity is that we understand ourselves and our fellow humans to be moral agents, who possess a unique first-person perspective on the world, and are capable of grasping and acting for moral reasons. A couple of quotes will give you a flavour of his approach:
I want to take seriously the suggestion that we must be understood through another order of explanation than that offered by genetics and that we belong to a kind that is not defined by the biological organization of its members.
We are animals certainly; but we are also incarnate persons, with cognitive capacities that are not shared by other animals and which endow us with an entirely distinctive emotional life--one dependent on self-conscious thought processes that are unique to our kind.
I don't want to dismiss these thoughts entirely. Clearly, there is a sense in which it is true that this mode of self-understanding and interpersonal relationality is central to the human experience1, but there is also a sense in which these are the properties that Scruton would like to associate with what it means to be human. Shining the spotlight on these characteristics obscures the fact that some humans don't express or exemplify these properties in the form that Scruton imagines, and that most (all?) humans are more than just these properties.
Despite my scepticism of theories like this, I do think that (non-essentialist) descriptive theories of human nature can provide some important normative heuristics to those of us interested in the enhancement project. They might help us to identify practical limits to what it is possible to change about most humans without doing harm. This might be useful when it comes to setting policies at a population level (while acknowledging exceptions at an individual level). Nick Bostrom and Anders Sandberg discussed this point several years ago in their paper 'The Wisdom of Nature: An Evolutionary Heuristic for Enhancement'. They accepted that there was some room for a form of Burkean conservatism in the enhancement debate: the human body was a complex, evolved system and we should be cautious about tinkering with it too much and too quickly -- though they certainly didn't rule out that tinkering entirely.
That said, one problem with the enhancement project is that, at its most speculative limits, it threatens to entirely destabilise any descriptive theory of human nature. What I mean here is that if we achieve near perfect technological control over every aspect of our biology, then there will be no practical constraints on what we can do to ourselves and hence nothing to provide even heuristic normative guidance to our policy-making. It will be entirely up to us to decide what form of life we want to live. Some people find this idea deeply disturbing. It is almost as if they want to cling to a mythical form of human nature in order to avoid the burden of choosing what kind of life they want to live for themselves (and, yes, there is something redolent of Sartrean existentialism in this, but I don't have time to explore it further in these remarks).
This brings me to the final sense in which people talk about 'human nature', namely: as a catch-all explanation (and maybe excuse) for the 'darker' things we do. This is human nature as an 'anti-normative' theory. What I have in mind here is people who say things like 'it is in our nature to be violent' or 'it is in our nature to be jealous/envious'. These sentences, which are common, all seem to be wistful lamentations about the dark side of what it means to be human. They are designed to caution us against ourselves.
This third sense of the term suffers from the same basic flaws as the second. To the extent that it is a theory of human evil or human badness, it is relatively unobjectionable; to the extent that it tries to ground itself in a descriptive-essentialist theory, it has some problems. That said, this third sense of human nature has obvious implications for debates about the dual-use nature of technology. If humans tend (statistically) to have a dark side, and if it is relatively fixed and stable, then it will pose regulatory and strategic challenges when it comes to the development of technologies that can be used for good or ill. I think one of the best recent expositors of these challenges is Phil Torres. Phil has written some thought-provoking and terrifying essays about the threat that 'omnicidal' or 'apocalyptic' agents pose to the future of humanity (I interviewed him about his work here). These are human beings whose dark side is, for whatever reason, turned up to eleven. Phil's point (echoed by Nick Bostrom in his paper 'The Vulnerable World Hypothesis' and by Ingmar Persson and Julian Savulescu in their series of books and papers on the 'unfit for the future' idea) is that as powerful technologies become more widely dispersed, the probability that one of these apocalyptic agents will misuse them starts to get unnervingly high.
Unless we can do something to identify, reform and/or neutralise these individuals, then human nature (whatever we take it to be) doesn't have much of a future. What is interesting to me is that Phil and the others who raise this point often suggest technological solutions to the problem. The idea seems to be that powerful and widely dispersed technologies, when combined with the dark side of human nature, could lead to our doom. The problem is that we cannot (or are highly unlikely to) stop the development and dispersal of powerful technologies. Therefore, we need some technological fix that will either (a) identify and neutralise potentially threatening humans (Phil's suggestion) or (b) correct for the dark side of humanity using some kind of moral enhancement technology (Persson and Savulescu's suggestion). As best I can tell, all proponents of this argument admit that their technological solutions are highly speculative and unlikely to work in practice. They are, in a sense, a 'hail Mary pass', a last desperate attempt to stop humanity from sliding over the cliff. But if that's the case, I'm not sure that anti-technological solutions (i.e. solutions focussed on preventing the development and dispersal of powerful technologies) can be dismissed so quickly2.
Either way, it seems that if you believe that human nature is dark and relatively fixed, you should be very worried about the future.
For what it is worth, these properties also feature strongly in Tomasello's scientific theory of human nature. ↩︎
Phil and I went back and forth on this point in the podcast I did with him. You can listen here, if you are interested. ↩︎
Here's a new preprint. It's a penultimate draft of a chapter I have contributed to the upcoming edited collection Algorithmic Regulation (edited by Karen Yeung and Martin Lodge), which is due to be published by Oxford University Press later this year. As per usual, more details and links are below.
Abstract: We live in a world in which ‘smart’ algorithmic tools are regularly used to structure and control our choice environments. They do so by affecting the options with which we are presented and the choices that we are encouraged or able to make. Many of us make use of these tools in our daily lives, using them to solve personal problems and fulfill goals and ambitions. What consequences does this have for individual autonomy and how should our legal and regulatory systems respond? This chapter defends three claims by way of response. First, it argues that autonomy is indeed under threat in some new and interesting ways. Second, it evaluates and disputes the claim that we shouldn’t overestimate these new threats because the technology is just an old wolf in a new sheep’s clothing. Third, and finally, it looks at responses to these threats at both the individual and societal level and argues that although we shouldn’t encourage an attitude of ‘helplessness’ among the users of algorithmic tools there is an important role for legal and regulatory responses to these threats that go beyond what are currently on offer.
This audio essay looks at the Epicurean philosophy of death, focusing specifically on how the Epicureans addressed the problem of premature death. They believed that premature death is not a tragedy, provided it occurs after a person has attained the right state of pleasure. If you enjoy listening to these audio essays, and the other podcast episodes, you might consider rating and/or reviewing them on your preferred podcasting service.