Tuesday, November 3, 2020

Technology and the End of Reality: Is the infocalypse imminent?



It is now common to hear people fret about the power of technology to distort our perception of reality. With the advent of deepfakes, cheapfakes, fake news, and filter bubbles, it seems that technological forces are aligning to make it harder for us to sort fact from fiction. Some take this fear to an extreme. They worry that advances in technology will bring about the end of a shared sense of reality. No longer will people debate a common core of shared facts and assumptions about the world. Instead, everyone will live inside their own bubbles and dismiss the views of outsiders. We are living, according to these critics, in the shadow of the “infocalypse”.

Will this happen? What are the mechanisms that might bring it about? And does it really matter if it does? In this article, I want to try to explore and tentatively answer some of these questions. Although the claim about the imminent end of reality is common, I’m not sure that it is always well defended. It is easy to point to an emerging technology such as deepfakes and say something provocative about its power to bring about the end of reality; it’s harder to prove that this will actually happen.

I don’t claim to provide definitive proof in what follows, but I do hope to provide some clarity on how it might happen. In brief, I want to suggest that the mechanisms that could bring about an end to a shared sense of reality are more complex and pervasive than is commonly assumed. This should be a cause for concern, though there are some obvious historical parallels to our current situation that should not be overlooked. Furthermore, bringing an end to a shared sense of reality has some attractions, although there is a paradox associated with this.


1. Why is a shared sense of reality important?

Let me start by considering the practical importance of this inquiry. It may seem obvious to you why having a shared sense of reality — by which I mean a belief that you and your peers live in the same world, share some common beliefs and assumptions about that world, and are trying to understand it in good faith — is important, but it is always worth asking the obvious question. The answer might be surprising.

The classic philosophical view of humans is that we are reasoning creatures. We have been endowed with senses that allow us to perceive the world around us and an intelligence that allows us to bring some order to these perceptions. We use our intelligence — our ‘reason’, to use the more traditional term — in two distinct ways: to understand (theoretical reason) and to plan and act (practical reason). Furthermore, we don’t just exercise our reason independently and individually. We are social creatures. We reason together, working towards a common understanding of the world and figuring out what we ought to do with the time we spend in it. Having a shared sense of reality is important for all of this.

It is important from the perspective of theoretical reason because it allows us to develop theories and gain insights into the world around us. In order to do this, both individually and collectively, we need that world to be reasonably stable, and capable of being probed and experimented upon from multiple perspectives. To put it another way, we need it to be not entirely contingent on our perception or thoughts. Some degree of dynamism and instability is, of course, inevitable, but as long as it is orderly and pattern-like we can still hope to bring it within the scope of theoretical reason. Some element of perceptual distortion is inevitable too. Our minds are not always good at gaining an accurate impression of the world around us, but through effort we can collectively limit the distortions and make reality sensible. None of this works if we all assume we are working with different realities, or if the world around us is, in fact, chaotic, disorderly or highly mind-dependent.

A shared sense of reality is also important from the perspective of practical reason. In order to make plans and implement them, we need a world that is reasonably predictable and stable. If the world is not stable and predictable, plans go out the window. What’s more, this can also have an impact on the moral dimension of practical reason. We need a shared sense of reality in order to develop and follow moral rules. There are several reasons for this. First, most moral norms assume certain stable facts about the world we live in. Consequentialist and utilitarian moral norms, for example, assume that certain actions have predictable effects on human (or sentient) suffering and pleasure. Kantian moral norms assume that we live in a world inhabited by other agents that experience and act in the world in a similar manner to ourselves. Without assuming these external facts, most moral theories dissipate into nihilism. Similarly, when we wish to speak truth to power and address moral injustices or outrages, we rely on the ability to get others to ‘see’ some aspect of the moral reality that they were previously ignoring or overlooking. Ending slavery, for example, relied on getting others to see that we shared a common humanity and hence suffered and experienced the world in a similar way. Seeing this shared reality was key. Again, some dynamism and probabilistic uncertainty are inevitable, and do not completely scupper our moral projects. We just need a reasonable degree of stability and order.

More reasons could be adduced, but this should suffice for now. It can be added, as well, that I don’t think any of the claims just made rests on particularly controversial philosophical foundations. This is largely because I’m focused on our shared sense of reality and not on the more controversial claim that there is, in fact, a single shared reality that we are capable of knowing. Maybe there is; maybe there isn’t. Perhaps we can only ever access imperfect representations — shadows on the cave wall — of reality. Perhaps the Buddhists are right and reality is ultimately formless and chaotic. I don’t think this affects the claims I just made. It would still be the case that having a shared sense of reality would be valuable for theoretical and practical reasons.


2. Is Technology Undermining a Shared Sense of Reality?

Assume what I have just argued is correct. The next question is whether technology really is undermining the shared sense of reality. The answer, I think, is that it certainly has the power to do so, but it doesn't necessarily do so, and if it does, it does so in tandem with other causal factors, in particular human psychology and social institutions. In other words, it is the combination of these three forces — technology, psychology, and society — that poses the threat to the shared sense of reality, not technology alone. Allow me to elaborate.

Consider the technological mechanisms that might undermine a shared sense of reality. Three seem particularly important:


The algorithmic curation of information: It’s a banal observation, but it bears repeating: we live in the information age. Never before in human history has so much information been collected, stored, organised and broadcast to human beings. So much so, in fact, that it is impossible for us to make sense of it without considerable technological assistance. We now rely, daily, on algorithmic platforms (Google, Facebook, Netflix, Amazon, Twitter, etc.) to curate and make sense of this information. These platforms often personalise and adapt the information that they present to us and our peers, creating filter bubbles and echo chambers to cater to our informational preferences. The end result is that we increasingly live inside informational hubs that reflect our preferred perception of reality and not necessarily the general and shared perception of reality.

 

Deepfakes and other easy forms of media manipulation: Media has always been manipulated. From the first tablets and scrolls to the printing press and beyond, people have always sought to create fake or forged documents, photos and movies. What’s happened more recently is that, with the rise of deepfakes and other easy-to-use forms of media manipulation, the power to create hyperrealistic fake media has become more widely distributed. This degrades our informational commons. As the philosopher Regina Rini has argued, the rise of deepfakes (and their cousins ‘cheapfakes’) removes an honesty check we have on the truth or falsity of testimony: we cannot use video to speak truth to power if video is easily faked. Similarly, and more generally, Don Fallis argues that the rise of deepfakes reduces the amount of information carried by audiovisual signals by increasing the likelihood that any given signal is a false positive (see the toy calculation after this list). The end result is a world in which it is easy both to believe the media you prefer to believe and to dismiss the media you don’t wish to believe, on the grounds that it is likely to be fake.

 

The emergence of immersive and realistic forms of virtual and augmented reality: This is, in a sense, the apotheosis of the filter bubble. Instead of just relying on highly curated, possibly faked stories and audiovisual records, we now also have the power to immerse ourselves in absorbing (if not yet hyperrealistic) virtual worlds and to overlay computer-generated images and other content onto our perception of the real world. These technologies can often trick the senses into thinking (if only for a moment) that the virtually constructed reality is the same as the real thing, and they give people an escape valve from the drudgery of the real world. The end result is that people can escape into their own preferred worlds when they desire to, and need not experience the same reality as others even when they don’t.
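To make Fallis’s point concrete, here is a minimal Bayesian sketch. The prior and the conditional probabilities are illustrative assumptions of mine, not figures from Fallis; the point is simply that as fakes become cheaper and more common, the probability that an event occurred, given that you have seen a video of it, collapses.

```python
# A toy illustration of Fallis's claim that deepfakes reduce the
# information carried by video. All numbers are made-up assumptions.

def posterior_event_given_video(prior, p_video_if_event, p_fake_if_no_event):
    """P(event | video) by Bayes' rule."""
    p_video = (p_video_if_event * prior
               + p_fake_if_no_event * (1 - prior))
    return (p_video_if_event * prior) / p_video

# Suppose an event has a 1% prior probability and genuine events are
# captured on video 90% of the time.
# When fakes are rare, a video is strong evidence:
print(posterior_event_given_video(0.01, 0.9, 0.001))  # ~0.90
# When fakes are cheap and common, the same video proves very little:
print(posterior_event_given_video(0.01, 0.9, 0.5))    # ~0.018
```

In other words, as false positives proliferate, the likelihood ratio of video evidence drifts towards 1, and seeing a video tells you less and less about the world.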

 

Many discussions of the 'infocalypse' or the end of a shared sense of reality tend to focus on one of these technological developments to the exclusion of the others, but it is the combination of them that is the problem. Furthermore, these technological forces are complemented by the other mechanisms that I will now discuss.

Consider the psychological mechanisms at play. As Henrik Skaug Saetra points out in his article ‘The Tyranny of Perceived Opinion’, the algorithmic curation of information plays off certain psychological biases that most people share. In particular, it plays off the related phenomena of selective exposure and confirmation bias. These are well-evidenced psychological biases. The former describes people’s tendency to notice or seek out only information that supports their pre-existing values and perceptions of the world; the latter describes the tendency to interpret any dissonant or incongruous information in a manner that confirms pre-existing biases. It is these psychological tendencies that push both users and platform creators into filter bubbles and echo chambers.

This is connected to how people use emotions to evaluate and express their perception of the world. As Steffen Steinert argues in relation to the phenomenon of emotional contagion on online platforms, we all use emotions to attach values to our perceptions of the world. We also share these emotions with others and feed off other people’s emotional responses. If you are afraid of something, and clearly express that fear, my own fear response is more likely to be triggered. Emotions are important. Without them we couldn’t select and sort information in a useful way, but emotions can also distort our perception of reality. The problem with technology, particularly social media, is that it promotes ‘hot’ emotional responses to the world. Expressions of outrage and anger are more likely to be seen by more people. When people are angry and outraged they are more likely to adopt a conservative, closed-minded view: suspicious of others and protective of themselves. This further promotes filter bubbles and echo chambers.
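Saetra’s point about the interaction between curation algorithms and psychological biases can be made vivid with a toy simulation. This is my own construction with made-up parameters, not anything from his article: a platform that exploits a user’s click history, paired with a user whose clicks deepen their affinity for whatever they click on, quickly converges on a single-topic bubble.

```python
import random

# A toy model of curation + selective exposure (illustrative assumptions
# only): 5 topics, a user who is likelier to click topics they already
# favour, and a platform that mostly recommends whatever has earned the
# most clicks from this user so far.

random.seed(0)
TOPICS = ["A", "B", "C", "D", "E"]
affinity = {t: 0.5 for t in TOPICS}   # user's initial openness to each topic
clicks = {t: 0 for t in TOPICS}       # platform's per-topic click history

for _ in range(1000):
    # Platform: exploit click history, with a little random exploration.
    if random.random() < 0.1:
        shown = random.choice(TOPICS)
    else:
        shown = max(TOPICS, key=lambda t: clicks[t])
    # User: selective exposure -- click probability tracks current affinity.
    if random.random() < affinity[shown]:
        clicks[shown] += 1
        # Confirmation bias: engaging with a topic deepens affinity for it.
        affinity[shown] = min(1.0, affinity[shown] + 0.01)

print(clicks)  # typically one topic ends up dominating: a filter bubble
```

Neither side needs to intend the outcome: the platform is just maximising engagement and the user is just following their preferences, but the feedback loop between the two produces the bubble.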

Finally, consider the institutional mechanisms at play. By ‘institutional mechanisms’ I mean to refer, primarily, to political and economic institutions. The economic actors that create many of the informational tools we currently rely upon — and that are undermining our sense of a shared reality — rely on a particular business model that encourages them both to appeal to and to promote a polarised and fragmented view of the world. Multiple news organisations now exist to ‘niche’ themselves to particular segments of popular opinion. The same is, increasingly, true of political institutions too. Political parties niche themselves by appealing to particular fragments of popular opinion. Ezra Klein has discussed the net effects of both institutional forces in his book Why We’re Polarized. Though this book is US-centric in its focus, some of the key insights apply more generally. At the heart of Klein’s argument is the claim that increased political polarisation is being driven by a feedback cycle: the population is becoming more fragmented (for a variety of reasons, including those having to do with the fragmented and personalised media landscape), and political institutions are responding to this by appealing to the fragments. This has a snowball effect: when one party is in power, they appeal to their fragment of the population, and this amplifies the sense of ‘identity threat’ among rival fragments, who respond in kind with increased attachment to their preferred values. The process cycles on, resulting in an increasingly polarised political climate. The polarised communities see outsiders as belonging to a different world and as a major threat to their own existence. It is worth noting, as well, that different factions can nefariously manipulate the polarised informational environment to suit their own ends. Many governments, most notably Russia, do this to foreign countries in order to support their own power and influence.



These three mechanisms feed off one another and have two main effects:


E1: It is now much easier for individuals to slide into their own preferred construction or perception of reality (and not encounter or be disrupted by dissonant information).

 

E2: It is now much easier for people to dismiss or doubt the reality presented to them by others (on the grounds that it might be faked or distorted in some way).

 

Both of these effects undermine a shared sense of reality.


3. Is It Really So Bad?

The preceding paints a bleak picture. We might wonder whether things are really so bad. I close with three observations.

First, as noted above, media manipulation is nothing new. Humans have always manipulated media to various ends. Furthermore, the psychology of selective exposure and the problems of political polarisation are hardly unprecedented in human history. They have always been with us. What is usually earmarked as being different this time round is the volume of information with which we must contend, the relative ease with which it can be manipulated, and the realistic nature of the end product. But part of me wonders whether things really are so different from what they were once like. One of my hobbies is biblical history, particularly the history of early Christianity (I’ve written about aspects of this before on this blog). If you approach that era from a secular perspective, one thing that often strikes you about it is how epistemically chaotic it seems to have been. Different religious sects with different worldviews were commonplace. People within those sects listened to stories told to them by friends of friends of friends who might have witnessed the events that originated the sect. They often adopted life-changing worldviews on the basis of this imperfect testimony. What’s more, when these stories were written down they were often presented in forged documents (in the sense that the authors pretended to be people they obviously were not) and occasionally mistranscribed or misedited, distorting some crucial element or lesson. Lots of rival, distorted and dubious stories fought with one another for attention. People believed on thin evidential foundations. I’m sure much of the ancient world was similarly epistemically chaotic. What might be happening now is that technology is just returning us to the epistemic chaos from which we briefly escaped. This doesn’t make it a good thing: a step back is a step back. But it may not make it a radically new thing either.

Second, as Henrik Skaug Saetra pointed out to me, the mechanisms I have described don't necessarily undermine a shared sense of reality altogether. It could well be the case that technology causes greater fragmentation of the human population into different bubbles. But unless those bubbles are completely individualised, it's likely that we will share a sense of reality with the other occupants of our respective bubbles. For example, people living in, say, the left-leaning liberal bubble on Twitter are likely to share a lot of common beliefs about the nature of the world they inhabit. Indeed, it may even be the case that they share more beliefs than would previously have been the case, since they primarily associate with one another. The problem arises more at the inter-group level. The techno-social infrastructure is now such that there is greater disparity and dissonance across the bubbles than within them. All of this sounds right to me, and it may prevent the slide towards complete epistemic chaos, but I nevertheless think the result still undermines a shared sense of reality, since there may be many more fragmented bubbles than before: instead of having to figure out some common modus vivendi with our neighbours and fellow citizens, technology affords us the opportunity to retreat into a world of our own making.

Third, and finally, there is one potential argument for thinking that the end of a shared sense of reality is not that bad. One of the arguments I experimented with in my book Automation and Utopia was Robert Nozick’s defence of the idea of a meta-utopia. In short, Nozick argued that people will never agree on what the best world or the best life is, and that we should stop trying to get them to do so. Instead, we should create a meta-utopia: a world in which people are allowed to create and join mini-worlds that suit their own values and preferences. In Automation and Utopia, I argued that one of the advantages of technology — including technologies like AR and VR — is that it may allow us to create a functional meta-utopia: people can use technological tools to build worlds that match their preferences. However, there is, as I pointed out, a bit of a paradox in this. The meta-utopia is only desirable if people respect the boundaries between all the mini-worlds — i.e. they don’t seek to undermine or ‘colonise’ another person’s preferred world. How can we guarantee this? The only conceivable way is if we agree on some general constitutional order that polices the boundaries between all the mini-worlds. So, ironically, we all have to share enough of the same sense of reality to create that constitutional order before we can happily escape into our own realities.

 

2 comments:

  1. My problem with Nozick’s concept of meta-utopias coexisting on Earth is that underneath the veils of illusion there is our common reality of a shared ecology. Moving to a planetary system of shared resources seems like a sane way forward.
