Tuesday, July 30, 2024

Generative AI and the Future of Equality Norms


The Romans in their Decadence - by Thomas Couture


I have a new paper in a special edition of the journal Cognition. It's about generative AI and its capacity to affect how we understand and pursue the value of equality. The paper is part of my ongoing work on how technology can affect social morality and involves some (hopefully informed) speculation on future scenarios involving generative AI. Check out the abstract below. The paper is available in Open Access format at the journal webpage.


Abstract: This article will consider the disruptive impact of generative AI on moral beliefs and practices associated with equality, particularly equality of opportunity. It will first outline a framework for understanding the mechanisms through which generative AI can alter moral beliefs and practices. It will argue that actual and perceived cognitive ability is one of the determinants of social outcomes in modern information economies, and that one of the potential impacts of generative AI is on the distribution of this ability. Emerging, tentative evidence suggests that generative AI currently displays an ‘inverse skills bias’, which favours those with less actual and perceived cognitive ability. This could have a disruptive impact on current norms of equality of opportunity, particularly with respect to the means and the purpose of such norms. The longer-term impact of generative AI on equality norms is less clear. Generative AI may shift the entire focus of equality norms or deprioritise the value of equality.


Check out the full paper here (it's not that long!).

This is my first paper in a psychology journal. Thanks to Jean Francois Bonnefon for inviting me to submit to the special issue.

Monday, July 15, 2024

The Ethics of Personalised Digital Duplicates: A Minimally Viable Permissibility Principle


It's now possible, with the right set of training data, for anyone to create a digital copy of anyone. Some people have already done this as part of research projects, and employers are proposing to do it for employees. What are the ethics of this practice? Should you ever consent to having a digital copy made? What are the benefits and harms of doing so? In a new paper with Sven Nyholm, we propose a minimally viable permissibility principle for the creation and use of digital duplicates. Overall, we think there are significant risks associated with the creation of digital duplicates and that it is hard to mitigate them appropriately. The full paper is available open access here.

Here's the abstract.


Abstract: With recent technological advances, it is possible to create personalised digital duplicates. These are partial, at least semi-autonomous, recreations of real people in digital form. Should such duplicates be created? When can they be used? This article develops a general framework for thinking about the ethics of digital duplicates. It starts by clarifying the object of inquiry -- digital duplicates themselves -- defining them, giving examples, and justifying the focus on them rather than other kinds of artificial being. It then identifies a set of generic harms and benefits associated with digital duplicates and uses this as the basis for formulating a minimally viable permissibility principle (MVPP) that stipulates widely agreeable conditions that should be met in order for the creation and use of digital duplicates to be ethically permissible. It concludes by assessing whether it is possible for those conditions to be met in practice, and whether it is possible for the use of digital duplicates to be more or less permissible.

 

And here's the minimally viable permissibility principle that we propose in the text:


Minimally viable permissibility principle (MVPP) = In any context in which there is informed consent to the creation and ongoing use of a digital duplicate, at least some minimal positive value realised by its creation and use, transparency in interactions between the duplicate and third parties, appropriate harm/risk mitigation, and no reason to think that real, authentic presence is required, the creation and use of the duplicate is permissible.


Read the rest. 


 



Wednesday, July 10, 2024

Mind the Anticipatory Gap: Genome Editing, Value Change and Governance




I was recently a co-author on a paper about anticipatory governance and genome editing. The lead author was Jon Rueda, and the others were Seppe Segers, Jeroen Hopster, BelĂ©n Liedo, and Samuela Marchiori. It's available open access here on the Journal of Medical Ethics website. There is a short (900 word) summary available on the JME blog. Here's a quick teaser for it: 


 "Transformative emerging technologies pose a governance challenge. Back in 1980, a little-known academic at the University of Aston in the UK, called David Collingridge, identified the dilemma that has come to define this challenge: the control dilemma (also known as the ‘Collingridge Dilemma’). The dilemma states that, for any emerging technology, we face a trade-off between our knowledge of its impact and our ability to control it. Early on, we know little about it, but it is relatively easy to control. Later, as we learn more, it becomes harder to control. This is because technologies tend to diffuse throughout society and become embedded in social processes and institutions. Think about our recent history with smartphones. When Steve Jobs announced the iPhone back in 2007, we didn’t know just how pervasive and all-consuming this device would become. Now we do, but it is hard to put the genie back in the bottle (as some would like to do). 

The field of anticipatory governance tries to address the control dilemma. It aims to carefully manage the rollout of an emerging technology so as to avoid the problem of losing control just as we learn more about the effects of the technology. Anticipatory governance has become popular in the world of responsible innovation and design. In the field of bioethics, approaches to anticipatory governance often try to anticipate future technical realities and ethical concerns, and to incorporate differing public opinion about a technology. But there is a ‘gap’ in current approaches to anticipatory governance.

They fail to factor in the mismatch between present and future moral views about a technology. We know, from our own social histories, that moral beliefs and practices can change over time. Things our grandparents thought were morally unexceptionable have become quite exceptionable. It is possible that future generations will have very different attitudes to genome editing than we do today. That’s something we need to consider when governing its rollout..."


More at this link

 

Tuesday, July 2, 2024

The Structure of Academic Writing: Lessons from John McPhee




In the world of literary non-fiction, John McPhee is a god. Through his New Yorker essays and prize-winning books, McPhee has mastered the art of narrative non-fiction. In fact, he pretty much invented the genre. He has many fans, many of whom are themselves well-known writers. They gush about his capacity to turn the most turgid-sounding topics -- oranges, boats, plate tectonics -- into fascinating explorations of people, culture, science and history.

Ironically, I have never warmed to him. I've tried. Honestly, I've tried. I have started reading several of his books, each time hoping I would find the hook that has lured in other readers. It doesn't seem to work for me. I usually give up after a few dozen pages. Perhaps his prose belongs to another era. A more thoughtful, more languid era. Perhaps I lack the patience to 'get it'.

There is, however, one aspect of McPhee's writing that I have warmed to: his writing about writing. In his essay collection, Draft No 4, he shares lessons from a lifetime of writing. Some of these essays contain insightful and useful advice. In this article, I want to reflect on one of his primary lessons: the importance of structure in non-fiction writing. I then want to see how that lesson can be applied to academic writing, using one of my own academic articles as a guinea pig.

As you shall see, McPhee has quite an elaborate and playful way of thinking about the structure of writing. A lot of academic writing is formulaic and routine. Rarely does anything break out of the conservative mould of traditional article structures. I think academics could benefit from adopting McPhee's elaborate and playful approach. If nothing else, they might have fun in the process.


1. McPhee on Structure

McPhee says that he learned about the importance of structure from his high school English teacher, Olive McKee. She taught him in the 1950s. She made him, and the rest of his class, do three writing assignments every week (which, wearing my teaching hat, sounds like an exercise in self-punishment from her perspective). They could write about anything, but they had to prepare a structural outline for each and every piece. This encouraged him to foreground structure in his approach to writing. He has since passed the lesson on to his own students, telling them:


"You can build a strong, sound, and artful structure. You can build a structure in such a way that it causes people to want to keep turning pages. A compelling structure in nonfiction can have an attracting effect analogous to a storyline in fiction." 
(McPhee 2018, 20)

 

But he adds some important nuance to this, noting that (a) structure should be relatively invisible to the reader and (b) structure must serve a purpose. In his own words:


"Readers are not supposed to notice the structure. It is meant to be about as visible as someone's bones...A piece of writing has to start somewhere, go somewhere, and sit down when it gets there." 
(McPhee 2018, 34)

 

Indeed. These aphorisms aside, the most interesting aspect of McPhee's exploration of structure is his attempt to bring the reader under the skin of some of his most famous works, and to explain how he came up with the structure of those pieces. Draft No. 4 is full of examples of this. I'll just summarise a few of them, including along the way some of McPhee's infamously bizarre structural diagrams, to illustrate his process.

I'll start with profile pieces. Early on in Draft No. 4, McPhee notes that most magazine profile pieces have the same basic structure. You interview a person and the people around that person, and thereby triangulate a perspective on that person. That gives each profile piece the following basic structure, where the X in the middle represents the person being profiled and the dots around the edge represent the people being interviewed about that person.




After about a decade of professional writing, McPhee grew tired of this structure and wondered if he could try something different. He came up with the idea of a dual-profile piece, in which two connected people could be profiled, incorporating the perspectives of somewhat overlapping circles of interviewees. The structure is illustrated below.



Of course, this was a structure in search of a subject. Eventually, McPhee hit upon the pair of profiles he could use to fill out the structure: Arthur Ashe and Clark Graebner, both of whom were well-known US tennis players in the 1960s. They had very different backgrounds. Graebner came from a privileged background; Ashe, an African-American, did not. Nevertheless, due to their talent, they had known and interacted with one another from an early age. They played each other in the semi-final of the US Open in 1968. Watching the game, McPhee realised they provided the perfect fodder for his experiment in the dual profile. The resulting book -- Levels of the Game -- is considered a classic of sports journalism. For added structural nuance, McPhee built the profiles around a description of that semi-final game.

McPhee felt that this experiment in structure was a success: that the dual profile had a depth and range that was lacking in the traditional single profile. This led him to consider further experiments in structure. One such experiment was based on a very simple structural diagram:




Translated into English, the idea was to write a series of three connected dual-profile pieces featuring a common protagonist (the common denominator in the diagram). In a sense, the common denominator would be the main character in the work, but the other three characters would be given a good share of the spotlight, and through their interactions they would shed a unique light on the main character. McPhee found a suitable subject for this piece. The end result was one of his most celebrated pieces of writing, Encounters with the Archdruid. This was about David Brower, a famous American climber, environmentalist and conservationist, who founded Friends of the Earth. The book was based on Brower's clashes with people who did not share his worldview. They were, respectively: (i) Charles Park, a mineral engineer and proponent of mining; (ii) Charles Fraser, a property developer; and (iii) Floyd Dominy, a federal bureaucrat and evangelist for hydroelectric power dams.

Admittedly, these two examples are not representative of McPhee's typical approach to structure. Both involve structures in search of subjects. Most of the time, McPhee had subjects in search of structure. In other words, he had researched a topic, accumulated an abundance of material, and then needed to reduce it all to some manageable, informative and insightful structure. One example of this, which is probably my favourite in his book, is an essay he wrote about a canoeing trip in Alaska. (The essay appears in his collection about Alaska, Coming into the Country.)

The trip took place over nine days, from the 13th to the 21st of a particular month (I don't know which, since I have not read the essay itself, only descriptions of it). The natural structure, and the one that most writers would probably follow, would be a chronological story of the trip, starting on day one and ending on day nine. McPhee decided not to do that. Why not? Because it didn't fit with the themes he wanted to explore in his writings about Alaska. Those themes included the hardship of nature, the struggle for existence and, perhaps most importantly, the cycles of time in the natural world (birth, life, death, seasonality, etc.).

So, instead of adopting a linear chronological structure, he adopted a circular structure. He started the narrative (using the present tense) on day five of the trip (the 17th of the month) and then continued it right to the end of the trip on day nine (the 21st). The narrative, however, didn't end there: there was a flashback to day one of the trip (the 13th) and the story continued, told in the past tense, back to day five (the 17th). In addition to replicating the cyclical nature of time in the natural world, this structure also created dramatic tension. On day one of the trip, the intrepid river explorers encountered a grizzly bear. This was, as you might imagine, a tense moment, encouraging some reflections on mortality and the mismatch between humans and bears. If he had adopted a linear chronological structure, the encounter with the bear would have come near the start of the narrative. By switching to the circular structure, the encounter came just after the half-way point. The diagram below illustrates the circular structure adopted.




There is a lot more in McPhee's essay on structure. Hopefully, this is enough to give you a sense of his method. Two things stand out for me. The first is the care and attention he pays to the structure of his writing. I probably haven't conveyed this effectively in my summary, but it is clear that McPhee agonises over structure, often taking weeks or months to figure out how best to structure his pieces. He uses props and diagrams to help him map them out. The illustrations I provided above were created by McPhee himself to explain his thought process to students. The second is his willingness to play and experiment with structures. He doesn't follow structural clichés. He alters structures in order to better serve the purpose of the writing.

Could academic writing benefit from taking a similar approach?


2. Experimenting with the Structure of Academic Writing

I think it can. As I already said, a lot of academic writing is formulaic. Nowhere is this more apparent than in the sciences, where the typical academic journal article, particularly if it is reporting on the results of an experiment, tends to follow the same basic structure: introduction/literature review, methods, results, discussion. This structure is so deeply engrained in the academic culture that people are often penalised (or simply ignored) for deviating from it.

Even in non-science disciplines, articles tend to follow the same routine structures. So, for example, in philosophy writing (which is what I am most familiar with), articles will usually set out some problem or puzzle, then introduce an argument that solves that problem, and then defend that argument from counterattacks. The sub-sections within a philosophy article may not be prescribed (as they are in the sciences), and there may be more experimentation with form, but usually those elements are there and they occur in the same sequence.

Most of the time it makes sense to follow the norms of your given academic discipline. I say this to students all the time. Go back to McPhee's comments on structure: writing should serve a purpose; it should bring the reader somewhere and then sit down. Most of the time, academic articles serve an argumentative or persuasive purpose. The goal is to convey an argument to a reader. The conclusion is the location to which you bring them. You want them to sit down once the conclusion is reached. If there are tried and tested structures for doing this, then most people, most of the time, should adopt those structures. Otherwise, they risk haphazard, hard-to-follow and nonfunctional writing. This risk is probably highest for beginning students, who haven't yet grasped the purpose of academic writing. At the very least, they should try to master the basic forms of academic writing before experimenting with structures.

This doesn't, however, mean that academic writing must be formulaic and boring. Nor does it mean that creativity with structure is off limits. One of the reasons McPhee's comments on structure appeal to me is that, without realising it, I think I have long adopted a similar approach to the structure of academic writing -- in theory, if not in practice.

As I just said, the purpose of most academic writing (and certainly most of the academic writing that I do) is argumentative. I think of arguments as having a natural structure. They are chained sequences of premises, leading to conclusions, responding to or rebutting objections and counterarguments. The many, many argument diagrams that I have prepared for this blog, and for my classes with students, illustrate these argumentative structures. Consider the diagram below, taken from a piece I wrote on this blog earlier this year (it was about Anselm's ontological argument).



Anselm's Ontological Argument with Objections


The purpose of writing is to get the structures of those arguments into the reader's head. That doesn't mean, however, that you have to follow the same predictable course through the argument. You don't have to start at premise 1 and work forward from there, and the reader doesn't have to follow the pathway you took through the argument yourself. It is possible to divide it up in different ways. When you translate the multidimensional (or, at the very least, two-dimensional) structure of an argument into a linear sequence of prose, you can make creative choices that could, if done right, accentuate the argumentative purpose of the piece.


3. A Worked Example: My Article on 'Tragic Choices and Responsibility Gaps'

Sadly, I don't always practice what I preach. When it comes to the majority of my academic writing, I tend not to experiment much with structure. Usually, I come up with an argument, I map it out -- sometimes on paper; sometimes just in my head -- and then I write it up, typically following the journey I took through the argumentative structure myself on the page. I don't agonise and second-guess myself in the same way that McPhee appears to do.

I'm not sure why this is the case. It may be due to my own character: I'm quite impatient and, once I come up with an idea, I like to just write it up and not overthink it. I've said in interviews before that, for me, the first draft is usually the last draft. I don't enjoy rewriting and editing. I am not one of those people who 'find' their writing in the edit. It may also be due to the pressures of academic work. The publish-or-perish incentive scheme doesn't lend itself to lengthy meditations on structure. McPhee, for instance, talks about spending weeks lying on his back, pondering the best structure for his writing. Most academics don't have that luxury (or, at least, don't feel like they have that luxury).

But that's a pity. Recently, as life has filled with other, more pleasant, distractions, and I have slowed down my rate of academic productivity, I've been thinking that I should play more with the structure of my academic writing. Rather than think about this in the abstract, I thought it might be fun, and instructive, to do a worked example with a piece of my own writing.

The piece in question is my article "Tragic Choices and the Virtue of Techno-Responsibility Gaps". To be very clear, when I wrote this piece, I did not think much about its structure. I just came up with the idea and I wrote it. Some of the structure was added later in response to critical reviewers of the piece. If I were to go back and reconsider its structure -- play with its form a bit more -- how might I do it?

It helps if I break the article down into its main component parts. I won't provide a detailed argument map. Instead, I will break the article down into a few main structural elements. Overall, the purpose of the article (as I conceived it) was to argue that, contrary to received opinion, the responsibility gaps created by autonomous machines could, in some cases at least, be a good thing. Something to be welcomed, not feared.

To argue for this conclusion, the article had the following structural elements:


The Problem: This is the bit of the article in which I outlined the received wisdom, i.e. the common view that responsibility gaps are a problem and something ought to be done to eliminate or minimise them. This required a review and analysis of the existing literature on the topic. Having stated the problem, I then introduced my alternative view, i.e. the proposition I wished to defend.

 

I then introduced three main claims which, when combined, led to my desired conclusion. These claims were numbered Claim 1, 2 and 3 in the article:


Claim 1: There are such things as tragic choices, i.e. moral decisions in which there is no clear 'right' answer and in which every choice seems to leave behind a moral 'taint' or 'remainder'. These tragic choices pose a problem for responsible moral agents.
Claim 2: There are three different strategies we can use to cope with the problem of tragic choices -- illusionism (i.e. pretend it's not a problem), responsibilisation (pretend there is a right answer and we bear responsibility for it), and delegation (make it someone else's problem) -- each of which has a unique blend of costs and benefits, none of which is ideal.
Claim 3: Autonomous machines could be a useful tool in addressing tragic choices because they allow for a reduced cost form of delegation, but in order to embrace this possibility we have to welcome, not reject, responsibility gaps.


My view is that Claim 1 + Claim 2 + Claim 3 supports my desired conclusion. But Claim 3, in particular, is controversial. So, to bulk up the argument, I responded to four objections to Claim 3. Some of these I came up with myself; some I added in response to critical feedback. They were:


Objection 1 (O1): The randomisation objection - why not use randomisers, not machine learning devices, to address the problem of tragic choice?
O2: The impossibility objection - you cannot delegate responsibility to a machine (or any other entity) in a way that reduces the moral costs of the delegated decision. This is because, in delegating, you retain moral control.
O3: The agency laundering objection - even if the argument is correct isn't there a danger that unscrupulous actors will use it as an excuse to hide their responsibility (launder their agency) for decisions they have made?
O4: The explicit tradeoff objection - because decisions need to be coded into algorithms, doesn't this make them more explicit, and the moral tradeoffs inherent in them, more salient, not less? In other words, doesn't delegating to machines heighten the moral costs associated with tragic choices, not lessen them?

 

I offered what I thought were appropriate responses to each of these objections, thus leading me back to the desired conclusion: we should embrace delegation to machines, and the associated responsibility gaps, at least in the case of some tragic choices.

Obviously, there is a lot more nuance in the original article, which, as always, I encourage you to read. If you wanted to find additional structural elements in it, you could. But this should give a clear enough outline of its main contents.

When I wrote it, I essentially wrote it in the sequence I just outlined to you. I started with the problem, I then introduced and justified the three claims, before responding to the four objections. The diagram below illustrates this structure.


Original structure - Tragic choices with focus on the problem of responsibility gaps


How could I have done it differently? Taking the diagram above as a starting point, it is easy to think about ways in which the material could have been rearranged and presented in a different order. Rearranged in this way, the article would convey the same argument, but with different emphasis and focus. Altering the structure could also affect the framing of the argument. In the original version, the article was clearly intended to be a contribution to the debate about responsibility gaps and autonomous AI (this was, to some extent, forced on me since the article was part of a special collection of articles on that topic). But shifting the initial entry point into the argument, and the point at which the argument ends, could have helped to reframe the argument as a contribution to a different debate.

Let's consider some alternative structures. I could, for instance, have started the article with Claim 1, the problem of tragic choices itself. I could have said to the reader "hey, there is this problem with moral decision-making and it poses a threat to responsible agency. How can we address this problem?" This would have made tragic choices, not responsibility gaps, the main focus/frame for the paper. I could then have proceeded to consider a potential solution to the problem, namely the randomisation solution (Objection 1 in the original draft). I could have argued that this was a partial solution at best and that a better alternative needed to be found. This could have led, naturally, to a discussion of Claim 2 and the different costs and benefits associated with the different solutions to the problem of tragic choice. From there, I could have introduced my preferred solution -- reduced cost delegation to machines -- which is, of course, Claim 3. This would have led to a discussion of the remaining objections to Claim 3 (O2, O3, O4). Then, by way of a general conclusion, I could have pointed out the ramifications of my argument for the responsibility gap debate. The diagram below illustrates this structure.

Alternative Structure 1 - Tragic choices with focus on tragic choices


Here's another possibility. In the middle of my original discussion of Claim 2, I talked a bit about the phenomenon of delegation and its importance in human social life. I used Joseph Raz's famous (in the legal philosophy world!) service conception of legal authority to illustrate this idea. Roughly, one of Raz's claims is that legal authorities sometimes mediate between humans and their moral reasons for action. When there are disputes between multiple agents about the right or desirable thing to do, a legal authority can resolve that dispute by doing the moral reasoning for us and giving us a decision to implement/enforce. We don't always second-guess or question the reasoning of the legal authority because that would defeat the point of having the authority in the first place. It performs a service for us, obviating the need for certain moral debates and disputes.

Properly expressed, I think this is an interesting idea and does highlight something important about the role of delegation in moral life. I could have started the argument there, saying to the reader "Hey look, delegation is a really important, and perhaps misunderstood, aspect of moral agency: sometimes, as moral agents, we need to delegate to others". But delegation is just one of several strategies we use to address the challenges of moral agency (Claim 2). I could then have started to talk about the particularly acute challenges associated with tragic choices (Claim 1), leading to my proposed solution (Claim 3). This would have entailed a discussion of reduced cost delegation as an important new form of delegation. This would have introduced a 'Delegation: Part 2' into the article. In other words, there would be a degree of circularity in the structure of the article, akin, perhaps, to McPhee's circular journey down the river. Albeit, in my case, I would need to defend this proposed form of delegation (Claim 3) from the four objections, leading finally to some mention, perhaps peripheral, of the responsibility gap literature.


Alternative Structure 2 - Tragic Choices with Focus on Delegation


That's two suggestions. No doubt more could be proposed. Hopefully, even with this limited discussion, you can see how restructuring is possible and the potential benefits of doing so. Each of the proposed restructurings I have presented changes, in important ways, the emphasis and focus of the article. All the elements are still present, but the rearrangement allows those elements to serve a new purpose, and appeal to a different audience.

Since I work in the ethics of technology, it made most sense for me to frame my article as a contribution to the debate about responsibility gaps. But perhaps I should not have been so conservative in my approach. I could have presented the article as a contribution to legal and political theories of legitimate authority and delegation. I could have presented it as a contribution to moral philosophy and the resolution of moral dilemmas.


4. Conclusion

This brings me to the end of this article. To briefly recap, I started out looking at John McPhee's reflections on the importance of structure to non-fiction writing. Structure ensures that the writing serves a purpose - that it brings the reader somewhere and then sits down. McPhee has developed his thoughts about structure to a high level, constantly playing and experimenting with form in order to improve the quality of his writing.

Structure is important in academic writing too. Most academic writing serves an argumentative purpose - it brings the reader to a conclusion and then sits down. But many academics are quite conservative and rigid in how they structure their writing. What I have tried to suggest in this article is that they should reconsider this conservatism. This doesn't require radical experimentation with form, per se. Once you have decided on the main elements of your argument, you can rearrange them to serve (subtly) different purposes. Doing so might allow you to see potentialities in your own writing that would be missed by following the well-trodden path.







Thursday, April 18, 2024

Automation, Utopia and Everything In Between


I've been quiet for a while. I know. But here's something to fill the gap: an interview I did for the Network Capital Podcast hosted by Utkarsh Amitabh. It covers a bit of everything: who I am; why I became an academic; whether academia is an ethical career choice; my views on effective altruism; themes from automation and utopia; and some thoughts on the ethics of sex robots. Video version is embedded above. If you prefer audio, check out the link below:

https://open.spotify.com/episode/36c0WFTFJDBckdA68z8PJv

Friday, January 26, 2024

Do Counterfeit Digital People Threaten the Cognitive Elite?




In May 2023, the well-known philosopher Daniel Dennett wrote an op-ed for The Atlantic decrying the creation of counterfeit digital people. In it, he called for a total ban on the creation of such artifacts, arguing that those responsible for their creation should be subject to the harshest morally permissible legal punishments (not death, to be clear, since Dennett does not see that as legitimate).

It's not entirely clear what prompted Dennett's concern, but based on his memoir (I've Been Thinking) it's possible that part of his unease stemmed from his own experiences with the DigiDan project by Anna Strasser and Eric Schwitzgebel. Very briefly, this project involved the creation of an AI chatbot (DigiDan), trained on the writings of Daniel Dennett. DigiDan could generate responses to philosophical questions in the style of the real Daniel Dennett (I'll call him RealDan). As part of a test to see how good the AI simulation was, Strasser and Schwitzgebel got DigiDan and RealDan to answer ten philosophical questions. They then asked Dennett experts to examine the answers and see if they could tell the difference between RealDan and DigiDan. While they were above chance at doing so, they were sometimes fooled by the simulation.

Developments since the DigiDan project, which was based on the GPT3 platform, suggest that it is now relatively easy to create digital simulations of real people. It is happening all the time. Popstars, academics and social media influencers (to name a few examples) have all created digital recreations of themselves. They do so for a variety of purposes. Sometimes it is just a fun experiment; sometimes a marketing gimmick; sometimes a desire to enhance productivity (and profitability). Since the technology underlying these platforms has undergone significant performance gains in the past couple of years, it is to be expected that digital simulations are likely to proliferate and become more convincing. And, of course, simulations of real people are just one example of the broader phenomenon: the ability to create fake people-like AI systems, whether they are based on real people or not. It is this broader class of systems that attracts Dennett's ire. He calls them 'counterfeit people' in light of the fact that they are not really people (in the philosophical sense) but merely fake versions of them.

In the remainder of this article, I want to critically analyse and evaluate Dennett's argument against counterfeit people. I do so not because I think the argument is particularly good -- as will become clear, I do not -- but because Dennett is a prominent and well-respected figure and his negative attitude towards this technology is noticeably trenchant. I will add that Dennett is someone that I personally respect and admire, and that his writings were a major influence on me when I was younger.

The remainder of the article is broken into two main sections. First, I critically analyse Dennett's argument, trying to figure out exactly what it is that Dennett is objecting to. Second, I offer an evaluation of that argument, focusing in particular on what I think might be the ulterior motive behind it. Not to bury the lede: I think that one plausible interpretation of Dennett's fear, which is similar to the fears of many well-educated people (myself included), is that the creation of counterfeit people undercuts a competitive advantage or privilege enjoyed by a cognitive elite (people with advanced degrees and the like, who have, in recent times, been well-positioned to reap the rewards of the information economy). Undercutting this privilege is threatening and destabilising to members of this elite and this can explain their staunch opposition to the technology, but whether such destabilisation is, all things considered, a bad thing is more open to debate. That said, I will not be presenting a dyed-in-the-wool optimistic perspective about the advent of counterfeit people. There are many legitimate reasons for concern and while the fears of a cognitive elite need to be put in perspective, they should not be entirely discounted.


1. What is Dennett's Argument?

The first thing to do is to try to figure out what Dennett's case against counterfeit people actually is. This is far from easy. The op-ed is short (possibly heavily edited down, given how these things work) and packs quite a large number of claims into a short space. It starts with an intriguing analogy between counterfeit currency and counterfeit people:


...from the outset counterfeiting (money) was recognized to be a very serious crime...because it undermines the trust on which society depends. Today, for the first time in history, thanks to artificial intelligence, it is possible for anybody to make counterfeit people...These counterfeit people are the most dangerous artifacts in human history, capable of destroying not just economies but human freedom itself.


This suggests that the underlying argument might be a simple analogical one:


  • (1) The creation of counterfeit currency ought to be punished severely because it undermines social trust.
  • (2) Counterfeit people are like counterfeit currency (in the important respects).
  • (3) Therefore, the creation of counterfeit people ought to be punished severely.

But this is not quite right. The analogy between counterfeit currency and counterfeit people is interesting, and I will consider it again in more detail when offering some critical reflections on the argument, but to make it the centrepiece of the argument doesn't do justice to what Dennett is saying. For one thing, you can see, even in the quoted passage, that Dennett slips from talking about the erosion of trust (in the case of money) to the erosion of freedom (in the case of people). For another thing, later in the article Dennett talks about counterfeit people being a threat not just to freedom but to civilisation more generally.

The key paragraph (in my mind) is the following one:


Creating counterfeit people risks destroying our civilization. Democracy depends on the informed (not misinformed) consent of the governed. By allowing the most economically and politically powerful people, corporations, and governments to control our attention, these systems will control us. Counterfeit people, by distracting and confusing us and by exploiting our most irresistible fears and anxieties, will lead us into temptation and, from there, into our own subjugation. The counterfeit people will talk us into adopting policies and convictions that will make us vulnerable to still more manipulation. Or we will simply turn off our attention and become passive ignorant pawns. This is a terrifying prospect.


There is a lot going on in this passage. What is the ultimate thing we should worry about losing and why is it that counterfeit people put us on a pathway to losing that thing? It's clear that Dennett is worried about civilisation in general, but he seems to initially define or characterise civilisation in terms of democracy (i.e. democratic civilisation), but then there are the additional concerns about loss of agency (manipulation, control, passivity), which hearken back to his earlier concerns about freedom. There is also a bit in the middle about the redistribution and entrenchment of power, which may be linked to democracy and freedom, but also may be thought of as a distinct concern.

It's not worth belabouring the interpretation of the article. Cutting through the noise, I think Dennett's argument can be boiled down to the following simple syllogism:


  • (1) If something risks destroying or undermining one of the foundational concepts/institutions of our civilisation (specifically, democracy or freedom), then it should be outlawed and those involved in creating that risk should be severely punished.
  • (2) The creation of counterfeit people risks destroying or undermining both democracy and freedom.
  • (3) Therefore, the creation of counterfeit people should be outlawed and those involved in their creation should be severely punished.

The first premise is convoluted, but does, I believe, capture the essence of what Dennett is worried about. The second premise, of course, is the empirical/predictive claim about the effect of counterfeit people in the real world. What does Dennett say in support of this? A lot of different things, but this is probably the most important:


  • (2.1) Counterfeit people exploit our natural inclination to trust anything that exhibits human-like properties or characteristics (they hijack our tendency to adopt the 'intentional stance')

The intentional stance is a concept long associated with Dennett's work. I will not get into its intricacies, but the gist of it is simply that, for some classes of system, we can best predict and understand that system by assuming that it has a mind and acts on the basis of beliefs, desires, and intentions. We are supported in doing so by certain externally observable characteristics of those agents/objects (behaviour, appearance, interactions, etc.). Counterfeit people can copy those external characteristics and hence hijack our tendency to adopt the intentional stance. This has a number of knock-on implications (I've structured this as a logical sequence of thoughts but not a valid deductive inference):


  • (2.2) The prevalence of counterfeit people sows the seeds of social mistrust because we can never simply take it for granted that we are interacting with a real person; we always have to check and, eventually, we may not be able to tell the difference.
  • (2.3) The means of creating counterfeit people is controlled by an economic and political elite (big tech) and they can exploit our tendency to trust counterfeit people to manipulate and misinform us to suit their own agendas.
  • (2.4) The challenge we face in separating real people from counterfeit people, and in protecting ourselves from manipulation and misinformation, may become so overwhelming that we simply switch off and become passive, thereby losing our freedom and agency.
  • (2.5) This is, in turn, problematic insofar as democratic governance depends on a well-informed and active citizenry that can meaningfully consent to its structures and rules.

That, in a nutshell, is Dennett's argument. Is it any good?


2. Evaluating Dennett's Argument: Who benefits from counterfeit people?

There have been several critical assessments of Dennett's argument. Eric Schliesser, for instance, wrote a long critical appraisal of it on the Crooked Timber blog, and there is an extended discussion of it over on the Daily Nous blog as well (in the comments section). Some have raised valid concerns about the argument; some have defended it. I will not repeat everything that has been said.

There is one point that I want to get out of the way at the outset. Some people have suggested that Dennett's staunch opposition to counterfeit people is hypocritical in some way, given his previous work on the intentional stance. The criticism runs something like this: Dennett views the intentional stance as a useful pragmatic tool for interpreting and understanding the behaviour of certain systems. But it is not just a pragmatic tool. Dennett also commits himself to a more radical view, namely, that if it is useful to act 'as if' a system has beliefs and desires, then, for all intents and purposes, that system does have beliefs and desires. This is a problem for his critique because he presumes there is some important metaphysical difference between counterfeit people and real people. But if he is right about the intentional stance, then if counterfeit people can be reliably and usefully explained from that stance, they are not really counterfeit people. They are just the same as real people and cannot be so easily dismissed or pejoratively labelled.

I think this is a bad critique of Dennett's argument, for three main reasons. First, even if Dennett is committed to that view of the intentional stance, it doesn't follow that current AI systems can, in fact, be usefully and reliably explained from that stance. It's fair to say that it is useful in some contexts to assume that current AI systems have beliefs and desires that are somehow similar to ours, but in other contexts this assumption breaks down. This may change in the future, of course, as AI gets better and better at approximating human-like intentionality, but in the meantime there is a meaningful distinction between person-like AI and actual human beings. Second, even if AI systems ought to be treated as intentional systems, it does not follow that they are the same as human persons. Personhood and intentionality are not equivalent. Intentionality may be a precondition of personhood, but it is not the only aspect of it. Other properties may be required, such as sentience, a sense of self as a continuing agent, and so on (Dennett has a theory of personhood too). To put the point another way, a theory of intentionality is not the same thing as a theory of moral standing or significance. AIs could be intentional without having moral standing, and this may be an important difference between them and actual humans. So, again, the concern about counterfeit people remains. Finally, and perhaps most importantly, even if AI people were equivalent in all important respects to human people, this would not invalidate all of Dennett's concerns. A large part of what worries him is that powerful actors can now create large armies of counterfeit people to manipulate and exploit others for their own ends. This is a fear we already have in relation to powerful actors and 'armies' of real human people. The problem is that AI allows for greater control and scalability. Similar points have been made by others before.
For instance, David Wallace on the Daily Nous blog has some perceptive comments about what Dennett's views on consciousness and intentionality do and do not entail.

Other criticisms of Dennett's argument are possible. Some may say he overstates the fears about social trust and agency. Perhaps there are technical workarounds that will allow us to distinguish real people from counterfeit people. Dennett himself floats the idea of digital watermarks on counterfeit people, though we can wonder how sustainable and effective they might be. Others might say that our agency and capacity for resilience in the face of this threat are greater than we might suppose, or that there are ways in which counterfeit people might enhance our agency and capacity, e.g. by enhancing our productivity or providing personalised tutoring or assistance to overcome challenges we might face. The technology can be used in agency-enhancing and agency-undermining ways. For Dennett's argument to work, we must assume the agency-undermining ways will swamp the agency-enhancing ways. Maybe we should not be so pessimistic? Still others (e.g. Eric Schliesser) might argue that Dennett has the wrong model of democracy in mind. It is not true that democracy depends on the informed consent of the governed. Quite the contrary: democracy just depends on the consent of the governed. The governed do not need to be well-informed. Critics of democracy sometimes raise this as an objection. John Stuart Mill, famously, lamented the ignorance of the masses and thought that educated people's votes should count for more. In recent times, Jason Brennan has written a book-length defence of epistocracy (rule by an epistemic elite) that is premised on a similar lament.

These are all criticisms worth pursuing in more depth. But I want to focus on a different line of criticism, one that engages less with the premises of Dennett's argument than with its possible ulterior motive. Why is Dennett so afraid? Why are many members of my peer group (college-educated people and fellow academics) so afraid? Of course, I don't know what really motivates them (maybe, in a Freudian sense, they don't know either) but I can speculate. One aid to this speculation is the analogy Dennett draws between counterfeit people and counterfeit money. There is more to this analogy than initially meets the eye, and more to the history of counterfeit currencies than Dennett lets on in his piece. Counterfeit currencies didn't always undermine social trust, and counterfeiters weren't always punished for that reason.

As Tim Worstall points out in a comment over on the Crooked Timber blog, with coined money, there were two main types of counterfeit:


  • Debased metal counterfeits: currency made with a cheaper base metal (or quantity of base metal) which, once discovered in circulation, changed perceptions as to the value of the currency, sowing seeds of suspicion, and undermining the trust needed for economic exchange.
  • Wrong source counterfeits: currency made by someone other than the sovereign, thereby disrupting the sovereign's control over the money supply in a given state. Such counterfeits did not always undermine social trust, but they would undermine the sovereign's power.


Historically, the main motivation for punishing counterfeiters was often not that they devalued the currency but that they threatened sovereign power. Indeed, this is underscored by the fact that sovereigns themselves often debased currencies for their own political reasons (to fund wars and personal expenditures, etc.).

Worstall goes on to suggest that it might be useful to distinguish AI that fakes real people (and thereby undermines social trust) from AI that simply comes from the wrong source. He doesn't do much more with this comment except offer it as a suggestion. But I find it intriguing. Could it be that the ulterior concern is not about counterfeit people but about AI that comes from the wrong source?

Maybe, but I don't think the 'wrong source' is the right way of framing it. In the case of counterfeit currency, the sovereign's concern was with power, control and benefit. They didn't like that they were being disempowered to the benefit of others. It's possible that something like this may be happening with the rise of AI, particularly recent iterations of generative AI.

To explain what I mean, it is worth noting that there have been several studies in the past 18 months examining the productivity gains associated with the use of generative AI. Many of these studies, though not all, have found some meaningful productivity gain among workers in the knowledge economy. What's interesting about some of these studies, however, is that these productivity gains are not always equally distributed. One finding, which has cropped up in three different studies of three different kinds of work (here, here and here), suggests that lower-skilled workers (those with less education and less experience) benefit most. Indeed, a couple of studies suggest that higher-skilled workers don't benefit much at all.

On the one hand, these are encouraging findings. They provide tantalising evidence to suggest that generative AI might assist with equality of opportunity in the workplace. In other words, that it can work to negate some of the competitive advantage gained by those with elite educations or problem-solving ability (what I am calling, for want of a better term, the 'cognitive elite'). From a general social justice perspective, this looks like a good thing. Who wouldn't want more equality of opportunity? Who wouldn't want to suppress the unfairly won gains of an elite? But, of course, members of the cognitive elite may not see it the same way. They might be threatened by this development because it reduces an advantage they were enjoying.

It could be that fears about this loss of status and privilege motivate fears about counterfeit people. Cynically, we might even suppose that talk of counterfeit people is a distraction. It shifts focus to the sexier or more philosophically contentious concept of 'personhood', and away from the material and economic effects of the technology.


3. Conclusion: Let's Not Get Ahead of Ourselves

The preceding argument might give the impression of being naively optimistic. I would hope that I am not naively optimistic (see my article on Techno-Optimism for more). So let me offer some final and important caveats to what I have just said.

First, the equalising effects of generative AI may not hold up in practice. The studies I have cited are early and restricted to certain tasks and contexts. Whether the effect replicates and holds up across broad sectors of the knowledge economy remains to be seen. It may just be a temporary blip. As AI systems grow in capability, they may, as others (myself included) have suggested, eventually replace all workers. Everyone loses out equally, but no one really gains. At least not in the long run.

Second, in commenting on these studies I have focused on the way in which generative AI empowers lower-skilled workers in some settings. This ignores the elephant lurking in the background. Unless these workers are designing and creating their own generative AI systems (which is not impossible), they are relying on systems created by others, often powerful big tech corporations. While the lower-skilled workers may experience some modest gain in their bargaining power in the labour market, the people that really gain from this technology are those that own and control the means of AI production. So, ironically, this technology may have the same effect on the power of the cognitive elite that early waves of computerisation had on middle-skill, middle-income workers. The cognitive elite lose their power and influence. There is a modest redistribution to the lower-skilled and a big redistribution to the owners of the relevant capital. (A lot of people hated it, but I still think my earlier article on AI and cognitive inflation has some light to shed on this problem.)

Third, there is no reason to think that the cognitive elite will take all this lying down. There could be a significant backlash, perhaps including attempts to shut down the use of AI in certain industries (strikes in the entertainment industry have already, partially, touched upon this). As social theorists like Peter Turchin have long argued, competition among elites and elite overproduction may be responsible for many historical revolutions and upheavals. AI might be the crucial prompt for our generation's elite to revolt.

Fourth, and finally, my comments about who benefits from AI, and the threat it poses to the cognitive elite, do not undermine or call into doubt Dennett's other fears about counterfeit people. The technology can still be used to manipulate and exploit. It can still pose a threat to our freedom and agency. However, I don't think this is a threat that is primarily associated with the person-like properties of AI. Many manifestations of AI can pose a threat to freedom and agency.


Tuesday, January 9, 2024

Technology and the Dematerialisation of Sex



The 'sex scene' from Demolition Man

(This article was originally commissioned for the Wired Ideas column, but due to delays on my part, and the subsequent discontinuation of that column (as I understand it) it never appeared. Rather than consign it to the dustbin of history, I have decided to publish it here. Obviously, given the intended audience for the original piece, it is a bit shorter and snappier than most of the things I write).

As ever, science fiction got there first. In the largely forgettable 1993 action movie, Demolition Man, two characters from the 1990s, a hard-hitting cop played by Sylvester Stallone and a psychopathic criminal played by Wesley Snipes, are cryogenically frozen for their misdeeds. They are resuscitated in the year 2032. The future, they quickly learn, is very different. A good-natured, pacifist ethic that eschews violence and confrontation has become widely adopted. Physical sex is disfavoured. This is comically revealed to Stallone's character when he enthusiastically welcomes an invitation to have sex from the female lead (played by Sandra Bullock). Sex, for her, involves donning a neurostimulator helmet that allows for a 'digital transference of sexual energies' between two people. When Stallone suggests they do it 'the old-fashioned way', she reacts with disgust.

I don't suppose we will ever fully embrace the Demolition Man-style ethics of virtual sex, but we could end up in a world in which virtual sex is the ethical preference for most casual or first-time sexual encounters, with the 'old fashioned' method being reserved for special intimate relationships and procreation. 

It is important to be clear about the nature of this claim. An extended definitional analysis of what it means to 'have sex' or what counts as 'sexual activity' would take more time than it is worth. Suffice to say these concepts are contentious and open to interpretation. For the remainder of this article, I presume that sexual activity is any activity involving sexual stimulation and gratification. Although masturbation is an important form of sexual activity, I presume that most people, when they talk about 'having' sex, have a partnered or interactive form of sex in mind. I then draw a distinction between physical, in-person, sex and digital or virtual sex. The crucial point about the latter is that it does not involve direct, physical contact, between sexual partners. It involves an interaction through a digital/virtual medium and via a digital/virtual avatar (I use the terms 'digital' and 'virtual' interchangeably). What I am suggesting is that this latter form of sexual activity might become the ethical default. In other words, it will be presumed to be the primary form of permissible sex and it is only if special conditions are met that physical, in-person, sex will be deemed ethically permissible.

Three factors point toward this outcome. The first is that there is already some evidence to suggest that people are avoiding, or reducing, the amount of in-person sex they have. For example, in 2021, the US Centers for Disease Control and Prevention published a study indicating that only 30% of teenagers reported ever having had sex, down from over 50% in 1990. The ensuing suggestion of a "sex recession" among Gen Z may be overblown—for example, some commentators have counter-argued that although younger people may not be having as much penetrative sex as previous generations, they are engaging in other kinds of sexual activity, and perhaps their sex lives are overall better and more satisfying—but the CDC finding is not an outlier. Studies in Japan, Australia, the UK, Sweden and Finland all indicate that people are having fewer sexual encounters than in previous generations. This is true both within long-term committed relationships and in more casual sexual encounters.

There are many potential explanations for the great 21st-century sex famine, from technology to the modern workplace. The Finnish study provides one intriguing hypothesis. Every few years since the 1970s, an ongoing study called Finsex has collected data on the sexual behaviours of Finnish adults. In its 2015 iteration, it found that both male and female respondents had masturbated significantly more in recent decades, and that the more people masturbated, the less partnered sex they had. This was particularly prevalent among younger generations. The suggestion from the study's authors was that perhaps people were using masturbation as an alternative to partnered sex. To put it another way: a substitution effect was at play. People were swapping in-person sex for a more convenient, and almost as good, alternative.

Is it really that surprising that masturbation is on the up, and partnered sex on the decline, given the pervasive, always-at-the-tip-of-your-finger, availability of internet pornography? In general, people want to do things that help them promote or pursue their values. If they can access a cheaper, almost as good version of sexual pleasure, through other means that don’t require navigating the complex social dynamics of dating and casual hookups, then they might be enticed to do so via digital or virtual forums. According to one 2019 study, there is evidence to suggest that people do substitute pornography for interpersonal affection.  

This leads to the second factor supporting the move to virtual sex. Internet pornography, at least right now, may do it for some people, some of the time, but it is not so close to the real thing that we are likely to see it as the ethical default or norm for sex. But developments in sextech, both ongoing and future, will make it likely that more people will see virtual sex as a meaningful substitute for the real thing. Developments in generative AI, for instance, already allow people to create realistic and emotionally satisfying AI companions. The emotional turmoil experienced by users of the Replika AI chatbots, when changes were made to that platform in early 2023 -- changes that effectively resulted in the 'deletion' of prior companions -- provides clear evidence of this. It seems likely that people will be able to generate realistic 3D virtual sex partners, with emotionally satisfying 'personalities', in the near future. When this possibility is coupled with advances in immersive VR, and haptic teledildonics (the ability to transmit sexual touch via a digital medium), it is not hard to imagine virtual sex becoming a more plausible and desirable alternative to physical sex. And virtual sex with an AI partner is just one of the new sexual options added by technological innovation. Advances in VR and haptics, in and of themselves, will allow humans to see the virtual medium as an 'almost as good' way to interact with one another.

You may be wondering, however, how we get from this to the idea that virtual sex will become an ethical default. You could accept the argument that people are turning their backs on physical sex in favor of digital sex without supposing that the substitution of virtual sex for in-person sex will become moralized in any way. How could the moralization happen?

This is where a third factor becomes important. If the perceived cost of in-person sex—not just financial costs, but emotional, social, and health-related costs—increases to the point that people are presumed to be taking a significant ethical risk if they opt for it over the virtual equivalent, then this could precipitate a change in social moral attitudes. Variations in the perceived cost of an action are already known to play a role in changing social moral beliefs. One of the best-studied examples of changing social moral attitudes concerns how non-marital and casual sex became more and more permissible over the course of the 20th century. A commonly cited cause of this is that the availability of effective forms of contraception reduced the negative costs associated with casual sex, particularly for women. This meant more people were willing to engage in sex outside of marriage, which made it more socially acceptable and, eventually, altered social moral attitudes. Casual sex lost some of the moral stigma it once had.

The same thing can happen in reverse. If the perceived costs of an activity go up, then it can acquire a moral stigma that it didn't previously have. This is something that may be slowly happening with respect to the use of fossil-fuel based automobiles and the consumption of meat. It’s not much of a stretch to suppose that something similar may happen with in-person sex. Sex undoubtedly has significant benefits, but  it also has significant costs. Not all sex is pleasurable or satisfying. Some sex is coerced and morally unacceptable. As a society, we are becoming increasingly aware of both the prevalence of non-consensual, unwanted sexual contact and the harms that it can cause. Victims of sexual assault and violence are speaking out and calling out their attackers, and their attackers are facing both social and legal reprimands as a result. This is all well motivated: there are strong moral reasons to favour this increased moralization of sex. But this could, in turn, have an impact on the perceived permissibility of in-person sex: if it carries the risk of significant interpersonal harms, unwanted trauma and social ostracisation, then we should be very cautious about its pursuit. If this happens, substituting in-person sex for a more convenient, almost as good, and less costly form of virtual sex, could become the social norm.

Admittedly, this presumes that there is an important moral difference between in-person sex and virtual sex. Some people might dispute this, arguing that the potential costs are equivalent: one can also be harmed by unwanted virtual sex, and one can be morally chastised for perpetrating virtual sexual assault. (Indeed, I have argued for something like this view in several academic papers over the past decade.) But even I would concede that there are some differences between the two kinds of sex that can reduce the perceived moral costs of virtual sex, such as the increased physical distance between participants and the greater flexibility in withdrawing from unwanted or unpleasant contact. In addition, costs arising from healthcare risks and unwanted pregnancy are also reduced in the virtual environment.

This does not mean that in-person sex will disappear. There are strong emotional and biological reasons why people will still be drawn to it. It just means that the moral barriers to in-person sex will be raised and that it may become less frequent and less socially acceptable as a result.