Friday, January 26, 2024

Do Counterfeit Digital People Threaten the Cognitive Elite?




In May 2023, the well-known philosopher Daniel Dennett wrote an op-ed for The Atlantic decrying the creation of counterfeit digital people. In it, he called for a total ban on the creation of such artifacts, arguing that those responsible for their creation should be subject to the harshest morally permissible legal punishments (not death, to be clear, since Dennett does not see that as legitimate).

It's not entirely clear what prompted Dennett's concern, but based on his memoir (I've Been Thinking) it's possible that part of his unease stemmed from his own experiences with the DigiDan project by Anna Strasser and Eric Schwitzgebel. Very briefly, this project involved the creation of an AI chatbot (DigiDan), trained on the writings of Daniel Dennett. DigiDan could generate responses to philosophical questions in the style of the real Daniel Dennett (I'll call him RealDan). As a test of how good the AI simulation was, Strasser and Schwitzgebel got DigiDan and RealDan to answer ten philosophical questions. They then asked Dennett experts to examine the answers and see if they could tell the difference between RealDan and DigiDan. While the experts performed above chance, they were sometimes fooled by the simulation.
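For readers curious about the mechanics: projects like DigiDan typically work by fine-tuning a large language model on an author's corpus and then querying the resulting model. The sketch below shows the general recipe using the current OpenAI Python SDK. To be clear, this is not the actual DigiDan code (which was built on the original GPT-3 API); the corpus file name, base model, and data format shown are illustrative assumptions only.

```python
# A minimal sketch of the general recipe behind a project like DigiDan:
# fine-tune a base model on an author's corpus, then query it in their style.
# NOT the actual DigiDan pipeline (which used the original GPT-3 API);
# the file name and base model below are hypothetical.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# 1. Upload a JSONL training file. Each line pairs a prompt with a passage
#    of the author's published prose, in chat format:
#    {"messages": [{"role": "user", "content": "<question>"},
#                  {"role": "assistant", "content": "<author's answer>"}]}
training_file = client.files.create(
    file=open("dennett_corpus.jsonl", "rb"),  # hypothetical corpus file
    purpose="fine-tune",
)

# 2. Start a fine-tuning job on a base chat model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)

# 3. When the job completes, its `fine_tuned_model` field is populated and
#    can be queried like any other model.
print(job.id, job.status)
```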

Developments since the DigiDan project, which was built on GPT-3, suggest that it is now relatively easy to create digital simulations of real people. It is happening all the time. Popstars, academics and social media influencers (to name a few examples) have all created digital recreations of themselves. They do so for a variety of purposes. Sometimes it is just a fun experiment; sometimes a marketing gimmick; sometimes a desire to enhance productivity (and profitability). Since the technology underlying these platforms has undergone significant performance gains in the past couple of years, we should expect digital simulations to proliferate and become more convincing. And, of course, simulations of real people are just one example of the broader phenomenon: the ability to create fake, person-like AI systems, whether they are based on real people or not. It is this broader class of systems that attracts Dennett's ire. He calls them 'counterfeit people' in light of the fact that they are not really people (in the philosophical sense) but merely fake versions of them.

In the remainder of this article, I want to critically analyse and evaluate Dennett's argument against counterfeit people. I do so not because I think the argument is particularly good -- as will become clear, I do not -- but because Dennett is a prominent and well-respected figure and his negative attitude towards this technology is noticeably trenchant. I will add that Dennett is someone that I personally respect and admire, and that his writings were a major influence on me when I was younger.

The remainder of the article is broken into two main sections. First, I critically analyse Dennett's argument, trying to figure out exactly what it is that Dennett is objecting to. Second, I offer an evaluation of that argument, focusing in particular on what I think might be the ulterior motive behind it. Not to bury the lede: I think that one plausible interpretation of Dennett's fear, which is similar to the fears of many well-educated people (myself included), is that the creation of counterfeit people undercuts a competitive advantage or privilege enjoyed by a cognitive elite (people with advanced degrees and the like, who have, in recent times, been well-positioned to reap the rewards of the information economy). Undercutting this privilege is threatening and destabilising to members of this elite, and this can explain their staunch opposition to the technology; whether such destabilisation is, all things considered, a bad thing is more open to debate. That said, I will not be presenting a dyed-in-the-wool optimistic perspective on the advent of counterfeit people. There are many legitimate reasons for concern, and while the fears of a cognitive elite need to be put in perspective, they should not be entirely discounted.


1. What is Dennett's Argument?

The first thing to do is to try to figure out what Dennett's case against counterfeit people actually is. This is far from easy. The op-ed is short (possibly heavily edited down, given how these things work) and packs quite a large number of claims into a short space. It starts with an intriguing analogy between counterfeit currency and counterfeit people:


...from the outset counterfeiting (money) was recognized to be a very serious crime...because it undermines the trust on which society depends. Today, for the first time in history, thanks to artificial intelligence, it is possible for anybody to make counterfeit people...These counterfeit people are the most dangerous artifacts in human history, capable of destroying not just economies but human freedom itself.


This suggests that the underlying argument might be a simple analogical one:


  • (1) The creation of counterfeit currency ought to be punished severely because it undermines social trust.
  • (2) Counterfeit people are like counterfeit currency (in the important respects).
  • (3) Therefore, the creation of counterfeit people ought to be punished severely.

But this is not quite right. The analogy between counterfeit currency and counterfeit people is interesting, and I will consider it again in more detail when offering some critical reflections on the argument, but making it the centrepiece of the argument doesn't do justice to what Dennett is saying. For one thing, you can see, even in the quoted passage, that Dennett slips from talking about the erosion of trust (in the case of money) to the erosion of freedom (in the case of people). For another thing, later in the article Dennett talks about counterfeit people being a threat not just to freedom but to civilisation more generally.

The key paragraph (in my mind) is the following one:


Creating counterfeit people risks destroying our civilization. Democracy depends on the informed (not misinformed) consent of the governed. By allowing the most economically and politically powerful people, corporations, and governments to control our attention, these systems will control us. Counterfeit people, by distracting and confusing us and by exploiting our most irresistible fears and anxieties, will lead us into temptation and, from there, into our own subjugation. The counterfeit people will talk us into adopting policies and convictions that will make us vulnerable to still more manipulation. Or we will simply turn off our attention and become passive ignorant pawns. This is a terrifying prospect.


There is a lot going on in this passage. What is the ultimate thing we should worry about losing, and why do counterfeit people put us on a pathway to losing it? It's clear that Dennett is worried about civilisation in general. He seems initially to define or characterise civilisation in terms of democracy (i.e. democratic civilisation), but then there are the additional concerns about loss of agency (manipulation, control, passivity), which hearken back to his earlier concerns about freedom. There is also a bit in the middle about the redistribution and entrenchment of power, which may be linked to democracy and freedom, but may also be thought of as a distinct concern.

It's not worth belabouring the interpretation of the article. Cutting through the noise, I think Dennett's argument can be boiled down to the following simple syllogism:


  • (1) If something risks destroying or undermining one of the foundational concepts/institutions of our civilisation (specifically, democracy or freedom), then it should be outlawed and those involved in creating that risk should be severely punished.
  • (2) The creation of counterfeit people risks destroying or undermining both democracy and freedom.
  • (3) Therefore, the creation of counterfeit people should be outlawed and those involved in their creation should be severely punished.

The first premise is convoluted, but does, I believe, capture the essence of what Dennett is worried about. The second premise, of course, is the empirical/predictive claim about the effect of counterfeit people in the real world. What does Dennett say in support of this? A lot of different things, but this is probably the most important:


  • (2.1) Counterfeit people exploit our natural inclination to trust anything that exhibits human-like properties or characteristics (they hijack our tendency to adopt the 'intentional stance')

The intentional stance is a concept long associated with Dennett's work. I will not get into its intricacies, but the gist of it is that, for some classes of system, we can best predict and understand the system by assuming that it has a mind and acts on the basis of beliefs, desires, and intentions. We are supported in doing so by certain externally observable characteristics of those agents/objects (behaviour, appearance, interactions, etc.). Counterfeit people can copy those external characteristics and hence hijack our tendency to adopt the intentional stance. This has a number of knock-on implications (I've structured these as a logical sequence of thoughts, not a valid deductive inference):


  • (2.2) The prevalence of counterfeit people sows the seeds of social mistrust because we can never simply take it for granted that we are interacting with a real person; we always have to check and, eventually, we may not be able to tell the difference.
  • (2.3) The means of creating counterfeit people is controlled by an economic and political elite (big tech) and they can exploit our tendency to trust counterfeit people to manipulate and misinform us to suit their own agendas.
  • (2.4) The challenge we face in separating real people from counterfeit people, and in protecting ourselves from manipulation and misinformation, may become so overwhelming that we simply switch off and become passive, thereby losing our freedom and agency.
  • (2.5) This is, in turn, problematic insofar as democratic governance depends on a well-informed and active citizenry that can meaningfully consent to its structures and rules.

That, in a nutshell, is Dennett's argument. Is it any good?


2. Evaluating Dennett's Argument: Who benefits from counterfeit people?

There have been several critical assessments of Dennett's argument. Eric Schliesser, for instance, wrote a long critical appraisal of it on the Crooked Timber blog, and there is an extended discussion of it over on the Daily Nous blog as well (in the comments section). Some have raised valid concerns about the argument; some have defended it. I will not repeat everything that has been said.

There is one point that I want to get out of the way at the outset. Some people have suggested that Dennett's staunch opposition to counterfeit people is hypocritical in some way, given his previous work on the intentional stance. The criticism runs something like this: Dennett views the intentional stance as a useful pragmatic tool for interpreting and understanding the behaviour of certain systems. But it is not just a pragmatic tool. Dennett also commits himself to a more radical view, namely, that if it is useful to act 'as if' a system has beliefs and desires, then, for all intents and purposes, that system does have beliefs and desires. This is a problem for his critique because he presumes there is some important metaphysical difference between counterfeit people and real people. But if he is right about the intentional stance, then if counterfeit people can be reliably and usefully explained from that stance, they are not really counterfeit people. They are just the same as real people and cannot be so easily dismissed or pejoratively labelled.

I think this is a bad critique of Dennett's argument, for three main reasons. First, even if Dennett is committed to that view of the intentional stance, it doesn't follow that current AI systems can, actually, be usefully and reliably explained from that stance. It's fair to say that it is useful in some contexts to assume that current AI systems have beliefs and desires that are somehow similar to ours, but in other contexts this assumption breaks down. This may change in the future, of course, as AI gets better and better at approximating human-like intentionality, but in the meantime there is a meaningful distinction between person-like AI and actual human beings. Second, even if AI systems ought to be treated as intentional systems, it does not follow that they are the same as human persons. Personhood and intentionality are not equivalent. Intentionality may be a precondition of personhood, but it is not the only aspect of it. Other properties may be required, such as sentience, a sense of self as a continuing agent, and so on (Dennett has a theory of personhood too). To put the point another way, a theory of intentionality is not the same thing as a theory of moral standing or significance. AIs could be intentional without having moral standing, and this may be an important difference between them and actual humans. So, again, the concern about counterfeit people remains. Finally, and perhaps most importantly, even if AI people were equivalent in all important respects to human people, this would not invalidate all of Dennett's concerns. A large part of what worries him is that powerful actors can now create large armies of counterfeit people to manipulate and exploit others for their own ends. This is a fear we already have in relation to powerful actors and 'armies' of real human people. The problem is that AI allows for greater control and scalability. Similar points have been made by others before. For instance, David Wallace on the Daily Nous blog has some perceptive comments about what Dennett's views on consciousness and intentionality do and do not entail.

Other criticisms of Dennett's argument are possible. Some may say he overstates the fears about social trust and agency. Perhaps there are technical workarounds that will allow us to distinguish real people from counterfeit people. Dennett himself floats the idea of digital watermarks on counterfeit people, though we can wonder how sustainable and effective these might be. Others might say that our agency and capacity for resilience in the face of this threat are greater than we might suppose, or that there are ways in which counterfeit people might enhance our agency and capacity, e.g. by enhancing our productivity or providing personalised tutoring or assistance to overcome challenges we might face. The technology can be used in agency-enhancing and agency-undermining ways. For Dennett's argument to work, we must assume the agency-undermining ways will swamp the agency-enhancing ways. Maybe we should not be so pessimistic? Still others (e.g. Eric Schliesser) might argue that Dennett has the wrong model of democracy in mind. It is not true that democracy depends on the informed consent of the governed. Quite the contrary: democracy just depends on the consent of the governed. The governed do not need to be well-informed. Critics of democracy sometimes raise this as an objection. John Stuart Mill, famously, lamented the ignorance of the masses and thought that educated people's votes should count for more. In recent times, Jason Brennan has written a book-length defence of epistocracy (rule by an epistemic elite) premised on a similar lament.
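On the watermarking point, it may help to see how such a scheme could work in principle. The toy sketch below illustrates one proposal from the research literature (a 'green list' statistical watermark, in the spirit of Kirchenbauer et al. 2023): a generator pseudo-randomly favours a context-dependent subset of tokens, and a detector checks whether that subset is over-represented. Everything here, from the hash scheme to the word-level tokenisation, is a simplifying assumption for exposition, not Dennett's proposal or any production system.

```python
# Toy illustration of a "green list" statistical watermark. Illustrative
# only: real schemes operate on model vocabularies and logits, not words.
import hashlib
import math

def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-randomly assign `token` to the 'green' half of the vocabulary,
    seeded on the previous token. A watermarking generator would bias its
    sampling toward green tokens at each step."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0  # roughly half of all tokens count as green

def detection_z_score(tokens: list[str]) -> float:
    """z-score of the observed green fraction against the 50% expected from
    unwatermarked text. A large positive z suggests a watermark."""
    pairs = list(zip(tokens, tokens[1:]))
    greens = sum(is_green(prev, tok) for prev, tok in pairs)
    n = len(pairs)
    return (greens - 0.5 * n) / math.sqrt(0.25 * n)

# Ordinary text should score near zero; text generated with a green-token
# bias would score several standard deviations higher.
sample = "the quick brown fox jumps over the lazy dog".split()
print(f"z = {detection_z_score(sample):.2f}")
```

Even in this toy form, the sustainability worry is visible: detection works only if generators cooperate in biasing their output, which is exactly what a motivated creator of counterfeit people would decline to do.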

These are all criticisms worth pursuing in more depth. But I want to focus on a different line of criticism, one that engages less with the premises of Dennett's argument than with its possible ulterior motive. Why is Dennett so afraid? Why are many members of my peer group (college-educated people and fellow academics) so afraid? Of course, I don't know what really motivates them (maybe, in a Freudian sense, they don't know either) but I can speculate. One aid to this speculation is the analogy Dennett draws between counterfeit people and counterfeit money. There is more to this analogy than initially meets the eye, and more to the history of counterfeit currencies than Dennett lets on in his piece. Counterfeit currencies didn't always undermine social trust, and counterfeiters weren't always punished for that reason.

As Tim Worstall points out in a comment over on the Crooked Timber blog, with coined money, there were two main types of counterfeit:


Debased metal counterfeits: this was currency made with a cheaper base metal (or a greater quantity of base metal) which, once discovered in circulation, changed perceptions of the value of the currency, sowing seeds of suspicion and undermining the trust needed for economic exchange.


Wrong source counterfeits: this was currency made by someone other than the sovereign, thereby disrupting the sovereign's control over the money supply in a given state. Such counterfeits did not always undermine social trust, but they would undermine the sovereign's power.


Oftentimes, historically, the main motivation for punishing counterfeiters was not because they devalued the currency but because they threatened sovereign power. Indeed, this is underscored by the fact that sovereigns themselves often debased currencies for their own political reasons (to fund wars and personal expenditures etc).

Worstall goes on to suggest that it might be useful to distinguish AI that fakes real people (and thereby undermines social trust) from AI that simply comes from the wrong source. He doesn't do much more with this comment except offer it as a suggestion. But I find it intriguing. Could it be that the ulterior concern is not about counterfeit people but about AI that comes from the wrong source?

Maybe, but I don't think the 'wrong source' is the right way of framing it. In the case of counterfeit currency, the sovereign's concern was with power, control and benefit. They didn't like that they were being disempowered to the benefit of others. It's possible that something like this may be happening with the rise of AI, particularly recent iterations of generative AI.

To explain what I mean, it is worth noting that there have been several studies in the past 18 months examining the productivity gains associated with the use of generative AI. Many of these studies, though not all, have found some meaningful productivity gain among workers in the knowledge economy. What's interesting about some of these studies, however, is that these productivity gains are not always equally distributed. One finding, which has cropped up in three different studies of three different kinds of work (here, here and here), suggests that lower-skilled workers (those with less education and less experience) benefit most. Indeed, a couple of studies suggest that higher-skilled workers don't benefit much at all.
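To make the 'equalising' shape of this finding concrete, here is a purely hypothetical illustration. The numbers are invented for exposition and are not drawn from any of the cited studies; only the pattern (a larger uplift at the bottom of the skill distribution) reflects what those studies report.

```python
# Hypothetical numbers, invented purely to illustrate the *shape* of the
# finding (bigger AI uplift for lower-skilled workers); they are not taken
# from the cited studies.
baseline_output = {"low-skill": 60, "mid-skill": 80, "high-skill": 100}
ai_uplift = {"low-skill": 0.35, "mid-skill": 0.15, "high-skill": 0.02}

for group, output in baseline_output.items():
    with_ai = output * (1 + ai_uplift[group])
    print(f"{group}: {output} -> {with_ai:.0f}")

# The low/high gap shrinks from 40 units to roughly 21: the distribution
# compresses, which is the sense in which the technology is "equalising".
```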

On the one hand, these are encouraging findings. They provide tantalising evidence to suggest that generative AI might assist with equality of opportunity in the workplace. In other words, that it can work to negate some of the competitive advantage gained by those with elite educations or problem-solving ability (what I am calling, for want of a better term, the 'cognitive elite'). From a general social justice perspective, this looks like a good thing. Who wouldn't want more equality of opportunity? Who wouldn't want to suppress the unfairly won gains of an elite? But, of course, members of the cognitive elite may not see it the same way. They might be threatened by this development because it reduces an advantage they were enjoying.

It could be that fears about this loss of status and privilege motivate fears about counterfeit people. Cynically, we might even suppose that talk of counterfeit people is a distraction. It shifts focus to the sexier or more philosophically contentious concept of 'personhood', and away from the material and economic effects of the technology.


3. Conclusion: Let's Not Get Ahead of Ourselves

The preceding argument might give the impression of being naively optimistic. I would hope that I am not naively optimistic (see my article on Techno-Optimism for more). So let me offer some final and important caveats to what I have just said.

First, the equalising effects of generative AI may not hold up in practice. The studies I have cited are early and restricted to certain tasks and contexts. Whether the effect replicates and holds up across broad sectors of the knowledge economy remains to be seen. It may just be a temporary blip. As AI systems grow in capability they may, finally, and as others (myself included) have suggested, effectively replace all workers. Everyone loses out, equally, but no one really gains. At least not in the long run.

Second, in commenting on these studies I have focused on the way in which the technology empowers lower-skilled workers in some settings. This ignores the elephant lurking in the background. Unless these workers are designing and creating their own generative AI systems (which is not impossible), they are relying on systems created by others, often powerful big tech corporations. While the lower-skilled workers may experience some modest gain in their bargaining power in the labour market, the people who really gain from this technology are those who own and control the means of AI production. So, ironically, this technology may have the same effect on the power of the cognitive elite that early waves of computerisation had on middle-skill, middle-income workers. The cognitive elite lose their power and influence. There is a modest redistribution to the lower-skilled and a big redistribution to the owners of the relevant capital. (A lot of people hated it, but I still think my earlier article on AI and cognitive inflation has some light to shed on this problem.)

Third, there is no reason to think that the cognitive elite will take all this lying down. There could be a significant backlash, perhaps including attempts to shut down the use of AI in certain industries (strikes in the entertainment industry have already, partially, touched upon this). As social theorists like Peter Turchin have long argued, competition among elites and elite overproduction may be responsible for many historical revolutions and upheavals. AI might be the crucial prompt for our generation's elite to revolt.

Fourth, and finally, my comments about who benefits from AI, and about the threat it poses to the cognitive elite, do not undermine or call into doubt Dennett's other fears about counterfeit people. The technology can still be used to manipulate and exploit. It can still pose a threat to our freedom and agency. However, I don't think this threat is primarily associated with the person-like properties of AI. Many manifestations of AI can pose a threat to freedom and agency.


3 comments:

  1. Upon seeing this post title, I formed a knee-jerk judgment, as I imagine others might have done. The criminal aspect and expression of counterfeiting is front and center here, as it always has been. Remembering Dennett's disdain for phoniness, I was reminded of the parable of the *wandering two-bitser*, from his collection on intuition pumps, etc. If, and if only, for this reason he is unnerved, I think concern is legitimate. It is easy enough, also, to understand why entertainers or other celebrities could go for this sort of chicanery. Possibly, even doctors: they never have enough time to return calls and answer inquiries about ill patients. Imagine their delight, if counterfeits of themselves could accurately and effectively field calls? Counterfeiting, put towards a humanitarian and legitimate cause! I doubt that Professor Dennett's scope of interest goes there. Interests, motives and preferences are more personal than that...

  2. Nah, couldn't happen, right? No altruism behind AI. DD spots phony --- many miles away.

  3. *Wrong source counterfeits* misses the boat. Counterfeit is bogus, from the get. How can counterfeit be other than wrong source? Does anyone understand fake, bogus, or phony? What are you missing here?
