Friday, June 6, 2014

Is Modern Technology Creating a Borg-like Society?


[Image caption: “Sounds Swedish...”]

The Borg are the true villains of the Star Trek universe. True, the Klingons are warlike and jingoistic, the Romulans are devious and isolationist, and the Cardassians are just plain devious, but their methods and motivations are, for want of a better word, all too human. The Borg are truly alien: a hive-like superorganism, bent upon assimilating every living thing into their collective mind. To hardy individualists, this is the epitome of evil.

In a somewhat alarmist article entitled “We are the Borg! Human Assimilation into Cellular Society”, Ronnie Lipschutz and Rebecca Hester wonder whether we are at risk of creating a Borg-like society of our own. After all, we are on the verge of creating a world in which every object or event is monitored for data, which is then uploaded to the internet (the so-called “internet of things”). The pace at which we are doing so is staggering. As Jeremy Rifkin highlights in his recent book, in 2007 there were an estimated 10 million sensor devices connecting human artifacts to the internet; by 2013 this was believed to have risen to 3.5 billion; and by 2030 it is estimated to reach 100 trillion. With the increasing ubiquity of these sensors, and with brain sensor devices being included among their ranks, is it really that far-fetched to say that we are creating a Borg-like society?
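To appreciate just how fast that growth is, here is a quick back-of-the-envelope calculation (a sketch of my own in Python; the figures are simply Rifkin’s estimates as cited above, and the growth rates are what those figures imply):

```python
# Rough arithmetic on the sensor-growth figures cited above (Rifkin's estimates).
# This is purely illustrative back-of-the-envelope maths, not data from the article.

figures = {
    2007: 10_000_000,           # ~10 million connected sensor devices
    2013: 3_500_000_000,        # ~3.5 billion
    2030: 100_000_000_000_000,  # ~100 trillion (projected)
}

years = sorted(figures)
for start, end in zip(years, years[1:]):
    factor = figures[end] / figures[start]
    annual_rate = factor ** (1 / (end - start)) - 1  # implied compound annual growth
    print(f"{start} -> {end}: x{factor:,.0f} overall, "
          f"roughly {annual_rate:.0%} growth per year")

# Approximate output:
# 2007 -> 2013: x350 overall, roughly 165% growth per year
# 2013 -> 2030: x28,571 overall, roughly 83% growth per year
```

In other words, even the more conservative projected phase implies the number of connected sensors nearly doubling every year for close to two decades.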

Lipschutz and Hester argue that it isn’t, and that even if true Borg-likeness is a distant possibility, there are aspects of our current technological infrastructure that are pushing us in that direction. Their article performs a useful service for those of us who are interested in technology and the future of human society. It draws attention to the aspects of our prevailing political ideologies that pull us toward a Borg-like future; it highlights the technologies that may make this possible; and it suggests why this might be problematic (though it doesn’t spend nearly enough time on this last issue).

In this post, I want to take a more detailed look at their arguments, focusing, in particular, on the ideological and evaluative aspects. In doing so, I hope to clarify, expand upon and critically engage with what they have to say. I want to stress at the outset that I think there are many potential benefits to the kinds of technology mentioned below. It is those very benefits, however, that make this such an important and interesting debate.

In what follows, I focus on three questions. First, what kinds of technology might make the Borg-like society a reality? Second, what kinds of ideology might drive us to use those technologies in a Borg-like fashion? And third, is this worrying and if so why?


1. What kinds of technology might make this possible?
Before thinking about the technologies that would make a Borg-like society possible, it is worth pausing to consider what a “Borg-like” society would actually look like. In the world of Star Trek, the Borg are a superorganism, much like an ant or termite colony, with an underclass of workers/drones headed up by a “queen”. The colony works by “assimilating” new individuals, races and species into a collective mind. Every newly assimilated drone has their mind and identity completely fused into the colony’s collective consciousness. They consequently lose any sense of individuality and autonomy: their thoughts are no longer their own; they think and act solely for the benefit of the group. The queen may be the one exception to this.

Lipschutz and Hester do not think that we are literally on the verge of creating a collective mind that would rival what we see in the Trek TV series. Instead, they think that some of our technologies are making our societies more Borg-like, even if this falls short of what is depicted on screen. To be precise, they think that a variety of technological and social changes “point toward a ‘cellular society’, in which individual identities and autonomy are submerged in a greater whole” (Lipschutz and Hester, p. 2 of the online version).

With this in mind, we can craft a rough-and-ready definition of Borg-likeness:

Borg-likeness: A society can be said to become more Borg-like to the extent that it minimises the diversity of individual identities and reduces the scope of individual autonomy by subsuming those individuals into a greater whole.

The problem with this, of course, is that all societies are Borg-like to some extent. There would be no society without some submerging of individuality into a greater whole. After all, societies work by policing individual behaviour through the use of norms (social, legal, moral etc.). Nevertheless, there are obviously degrees of Borg-likeness. It is those degrees that are important in this debate. Some societies are incredibly authoritarian, and work hard to minimise individuality. Others are much looser in the restrictions they place on individuality. The question is what degree of Borg-likeness is acceptable.

There are a variety of mechanisms through which society can become more Borg-like. Technological changes are merely one part of the picture. Nevertheless, Lipschutz and Hester suggest — as do others — that they are an important part of the picture. Modern technologies, in particular technologies relating to the internet of things, can greatly facilitate the creation of Borg-like power structures.

So what are these technologies? The most obvious, and most prevalent, are the technologies of surveillance. This includes anything that records what we do, where we go, what we say, who we say it to, and so on. Such technologies are becoming more and more prevalent, from the CCTV cameras that dominate our urban environments, to the metal detectors and X-ray machines that protect our public buildings, to the smartphones we carry in our pockets. Every one of these technologies helps to facilitate the control and constriction of individual behaviour.

These technologies of surveillance are greatly assisted by the internet and by modern data-processing algorithms. The internet allows for the information recorded by surveillance technologies to be uploaded, shared and stored across a global network. This allows governments and other members of society to police individual behaviour. The data-processing algorithms add an additional layer to this. They help to “tame” the overwhelming volume of data that is recorded by these technologies. They spot patterns and draw connections. If they are integrated within an artificial intelligence control system, they can even automate the control of individual behaviour, warning us when we violate norms, and perhaps even issuing punishments and commencing court proceedings against us.
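To make the idea of automated norm-enforcement a little more concrete, here is a minimal, purely hypothetical sketch in Python. Nothing like this is described in Lipschutz and Hester’s article; the rule names, thresholds and identifiers are all my own invention, and a real system would be vastly more complex. The point is only to show how easily “norms” can be turned into code that flags violators automatically:

```python
# A deliberately simplified, hypothetical sketch of automated "norm policing":
# incoming sensor events are checked against coded rules, and violations
# trigger an automated response. All rules, thresholds and identifiers here
# are invented for illustration only.

from dataclasses import dataclass
from typing import Optional

@dataclass
class SensorEvent:
    person_id: str
    kind: str      # e.g. "vehicle_speed" (km/h) or "curfew_hour" (0-23)
    value: float

# Coded "norms": each rule is (event kind, violation test, automated response).
RULES = [
    ("vehicle_speed", lambda v: v > 120.0, "issue automated speeding warning"),
    ("curfew_hour",   lambda v: v >= 23 or v < 6, "flag curfew violation for review"),
]

def police(event: SensorEvent) -> Optional[str]:
    """Return an automated response if the event violates a coded norm."""
    for kind, violates, response in RULES:
        if event.kind == kind and violates(event.value):
            return f"{response}: {event.person_id} (value={event.value})"
    return None

if __name__ == "__main__":
    events = [
        SensorEvent("citizen-42", "vehicle_speed", 134.0),
        SensorEvent("citizen-42", "curfew_hour", 23.5),
        SensorEvent("citizen-7", "vehicle_speed", 95.0),
    ]
    for e in events:
        action = police(e)
        if action:
            print(action)
```

Trivial as it is, the sketch illustrates the structural point made above: once behaviour is captured as data, the step from “spotting patterns” to automatically warning or sanctioning individuals is a short one.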

In addition to all this, there are specific forms of surveillance and control that make the possibility of a Borg-like society even more tangible. Lipschutz and Hester draw particular attention to technologies for the mobile monitoring of health-related data. We already have some primitive forms of this (with things like the Fitbit), but there are technologies in the pipeline that would be more impressive. Similarly, there are technologies for reading brain-based data and inferring mental states from that data, as well as technologies that allow for direct brain-to-brain or brain-to-computer communication.

I have to say that I’m wary of the claims made in relation to brain-reading technologies. I know a fair bit about brain-based lie detectors and the like (I’ve published a few bits and pieces on the topic already, and should have a new article out in the next couple of months), and I don’t think anything we currently have allows for pervasive “mind-reading”. I worry about overstating the effectiveness of these technologies, and about the attendant hype and panic-mongering. Still, not even I can dismiss the long-term possibilities.


2. What are the ideological pressures that might encourage this?
As I said above, technology is just part of the picture. Technology doesn’t simply pop into existence. There are ideological, cultural and economic pressures behind every technological development. Indeed, by themselves, technologies of surveillance and control do not create a more Borg-like society. They need ideological assistance to do that. One of the real strengths of Lipschutz and Hester’s article is their attempt to draw attention to the ideological mechanisms that might hasten the creation of such a society.

Chief among these ideological mechanisms is the prevalence of risk-based thinking. We live in risk societies. These are societies that are deeply concerned with identifying and pre-empting social and personal risks. These include things like economic meltdowns, natural disasters, the spread of infectious diseases, terrorism and other threats to national security, public health crises (e.g. cancer, obesity), and other possible environmental and technological disasters. These threats are constantly discussed in the public space, and governments are repeatedly tasked with addressing and resolving these incipient risks.

This preoccupation with risk creates the ideological impetus for the Borg-like society. The concern for risk alters the subjective worldviews of virtually every actor within a society. They now tend to be on the lookout for potential risks, and to seek control of individual behaviour in risk-minimising ways. The kinds of technologies discussed in the previous section are ideal assistants in this. They allow us to constantly monitor and intervene in individual behaviour.

Lipschutz and Hester highlight two further, closely-related, mechanisms that might push us toward the Borg-like society. These mechanisms operate in the shadow of risk-based thinking. That is to say, they cannot really be understood apart from that background ideological framework. They are:

The “Double-Down” Mentality: Whenever a risk materialises, governments and other social actors tend to double down on that risk. In other words, they dedicate more resources to policing and controlling that risk. We see this pattern repeatedly in recent social history. A good example would be the doubling down on the threat of terrorism post-9/11, which brought with it a host of new laws and technologies for monitoring and controlling individual behaviour.

The Willing Consent Mentality: Although the policing of risk can be coercively enforced by governments from the top down, it is also willingly consented to by citizens from the bottom up. The fact is, most people want to avoid risks to their personal well-being. They will happily accept the technological infrastructure that makes this possible, even if it means sacrificing a degree of autonomy and independence.

This is not to suggest that people are simply willing slaves to governmental policy. There is still some resistance to these technologies, and that resistance may not be entirely futile. The important point is that the technologies that allow for the creation of a Borg-like society have a seductive ideological appeal, and this appeal is felt by governments and individual citizens alike. Lipschutz and Hester use terrorism and national security as an example of how this has already happened (they say nothing about the Snowden leaks and the ensuing debate, which suggests that their article was written before those came to light). I might use healthcare as another example. A lot of people are enthusiastic about health-tracking hardware and software. They use it to improve their fitness, enhance their well-being, increase their productivity and reduce their waistlines. This can be autonomy-enhancing. But the technology that makes this possible can — unless we are careful — also be used to police and control individual behaviour, as governments and health insurers try to reduce risks, and as members of our peer group encourage us to be healthier, partly to reduce their own exposure to health risks. There may be good rationales for all this intervention and control, ones that we buy into, but it still increases the Borg-likeness of our society.


3. Should we be worried about this?
Grant for now that the technologies and ideologies identified above are increasing (and will continue to increase) the Borg-likeness of our societies. Is this something we should be worried about? This is an evaluative question. It is asking us: would a more Borg-like society be worse, all things considered, than a less Borg-like one? This depends on one’s evaluative commitments. Unfortunately, Lipschutz and Hester are not particularly strong on this matter in their article. They (somewhat ironically) highlight the general risks associated with it, and point to questions that need to be asked in the future.

I want to be a little more systematic in my analysis. I would say that there are three general classes of evaluative concern one might have about creating a more Borg-like society. The first two are present at minimal degrees of Borg-likeness. Indeed, they are present right now. The last would only be present at high degrees of Borg-likeness, but is probably worth considering anyway. I won’t analyse these concerns in any great depth; I will merely sketch them.

The first concern has to do with the further opportunities for risk that are created by these technologies. This concern is somewhat ironic, since we are assuming that one of the rationales for introducing and deploying these technologies is their ability to minimise risk. But even if this is the case, we must acknowledge that these technologies bring with them fresh opportunities for risk creation. Technologies of surveillance and control can be co-opted by those with nefarious goals, be they governments, corporations or other groups of citizens. For example, devices that automatically administer insulin to diabetics could be hacked and used to lethal effect. The same goes for any semi-automated medical device with wireless connectivity. And this is just one subset of technology. Additional risks and harms will be made possible by other technologies. Identity theft, for instance, is now more common thanks to the huge amount of personal data that is entered and stored online.

The second concern has to do with harms to privacy. This is, in many ways, the classic concern when it comes to technologies of surveillance and control. I am one of those people who think that privacy is not an overwhelmingly important good. I certainly don’t think it is an intrinsic good. There is a line in one of Seneca’s Letters from a Stoic, where he suggests that the best way to live one’s life is to have nothing to fear if its details are exposed to one’s worst enemy. I tend to think that is the right ideal. I would certainly like to think that I have done nothing in my life that I am ashamed to share with others.

Nevertheless, I don’t wish to be naive in this. I think that privacy is an important bulwark against the moral imperfection of others. What do I mean by this? Well, I think that ideally I would have nothing to fear from disclosure of personal information to others, but I realise that this only works if those others are morally enlightened. The problem is that other people have morally imperfect attitudes. They can be intolerant of alternative lifestyle choices, even when those choices are perfectly acceptable. They can perceive those choices as a threat, and they can inflict various forms of harm on those who make those choices. Consider the way in which homosexuals have been treated throughout history. Privacy should be protected in order to guard against the moral imperfection of others. Otherwise, personal data could be used to persecute and oppress minorities and perpetuate inaccurate stereotypes.

What does this mean for the technologies that increase the Borg-likeness of society? The answer is complicated by the fact that not only do these technologies provide opportunities for oppression, they also provide opportunities for empowerment. To give an example, on the day that I write this post a video has gone viral in which a man has recorded a racist verbal attack on him by a woman. This video is, arguably, being used to positive effect by bringing racism into the light and holding those with racist attitudes to public account. Similarly, Tal Zarsky has argued that algorithmic control systems could be less prone to implicit bias than human control systems (I covered this argument previously). So I’m not sure what the answer is here. There are important values that are protected by the right to privacy and threatened by the technologies in question; but at the same time, those values can also be protected and enhanced by the same technologies. It all really depends on who controls the technological infrastructure.

The final concern has to do with threats to individual identity, autonomy and responsibility. This is the concern that dominates when it comes to the fictional Borg. The reason why the Borg are so villainous, and so disturbing to the Federation, is that they ride roughshod over these values, callously assimilating individuals into their collective. Is this a serious concern in the real world? I think it might be, certainly in the long term.

I, for one, value my individuality. I think my identity is important, and my ability to choose my own course in life is something worth protecting (as long as it does not inflict moral harm on others). I think this is true for other individuals too, of course. There is a real concern that technologies of surveillance and control could impact negatively on these values. By constantly monitoring and policing our behaviour (and maybe even our thoughts) these technologies could reduce diversity and create an increasingly homogenised set of social actors. Such actors might become little more than moral patients — i.e. passive and controlled recipients of the benefits of technology — with no real sense of their own agency and responsibility. With direct brain-to-brain communication and control, this could be further exacerbated, leading to the creation of something very close to the fictional world of the Borg.

Not everyone is worried about this. Some people are far more communitarian in their moral outlook, and some even positively embrace the idea of collective minds in which our mentality is submerged. Although I don’t think he says so explicitly, Ray Kurzweil’s dream of creating a universe that is saturated by our intelligence seems to imply something like this (at least, I can’t see how this could be realised without the creation of something like a massive groupmind). I find nothing attractive in this vision. I like the idea of harnessing technology to increase the scope of individual autonomy; I don’t like the idea of submerging the individual in a collective mind. Perhaps this is just an irrational prejudice on my part.

Anyway, that brings us to the end of this post. To recap, the central thesis of Lipschutz and Hester’s article is that modern technologies of surveillance and control, coupled with the ideological superstructure of the risk society, make the creation of Borg-like societies a live possibility. I have tried to clarify their reasons for thinking this and to identify the evaluative concerns that such societies would raise. No doubt I have missed many important issues. Feel free to offer your thoughts in the comments section.

5 comments:

  1. You've got something here. The main thing turning us into the Borg is social media, and if the media are bought by oligarchs friendly to the government in power, they churn out the same messages even if they're untrue or biased. Both greatly influence the political and social sphere in societies. You can see now in many countries how this influences the masses in their voting. Recently, see Hungary, which voted again for autocracy and kleptocracy because people don't realize they're being robbed. See the Philippines, which voted for individuals with little positive political experience, a past steeped in government fund robbery and tax fraud, and who come from families who committed crimes and are still at large. Individuals may be reasoned with, but as a crowd everyone becomes an idiot. You can say: well, this is what the people wanted. But no: if people do not get unbiased, factually checked information over and over, do they really know what they're choosing? So yes, your concept of 'the Borg' is quite a reality already.

  2. I agree, cellphones have imprisoned our women. I wish they would all go off for a month; only then will we ever be free. Our people are held hostage by technology!

  3. This blog offers a thought-provoking perspective on the impact of modern technology on our society. The exploration of the Borg-like analogy is insightful and raises important questions about individuality and connectivity in today's digital age. Great read!

  4. This is an insightful exploration of modern technology's influence on society. The comparison to a Borg-like existence raises important questions about individuality versus collective connectivity. It’s thought-provoking to consider how technology can both enhance our lives and challenge our autonomy. Excellent read, thank you for sharing! Sidney De Queiroz Pedrosa

  5. This is a thought-provoking article! It raises an interesting point about how modern technology may lead to a more interconnected, hive-mind society, much like the Borg. It’s important to balance technological advancement with individuality and freedom to ensure we don’t lose our humanity in the process. Beatriz Barata
