In a handful of posts published earlier this year, I considered several arguments for thinking that humans ought to explore space and become an interstellar species. I looked at three in particular:
The Utopian Argument: We ought to explore space because it provides a utopian vision for the future of humanity - roughly: humanity expanding into an endless horizon of possibility.
The Moral Argument: We ought to explore space because we have a duty to ensure the future survival of ourselves (and possibly other earthbound life) and this is only going to be possible (in the long term) if we explore space.
The Intellectual Argument: We ought to explore space because doing so will expand our knowledge and understanding, both scientifically (by enabling new forms of inquiry and gathering new data) and culturally (by forcing us to interpret and manage new environments and social circumstances).
Collectively, these seem to provide a strong(ish) case for space exploration. Space barons like Elon Musk and Jeff Bezos might even approve. They, along with countless others, are trying to build the infrastructure that will enable full human exploration of the cosmos. Indeed, Bezos, in some of his public interviews, seems to directly echo the utopian and intellectual arguments in making the ‘pitch’ for space.
But what if space exploration has a darker side? Like, a much darker side? What if the exploration of space is likely to hasten the end of humanity and vastly increase the amount of suffering in the universe? Maybe then all the arguments in favour of space exploration ring hollow? Maybe then we ought to stop Elon Musk, Jeff Bezos and all the other space enthusiasts from realising their ambitions?
That’s effectively what Phil Torres argues in his recent article ‘Space Exploration and Suffering Risks: Reassessing the “Maxipok” Rule’. Using some ideas from international relations theory and the study of war, he argues that the prospect of a catastrophic intergalactic conflict is sufficiently serious to warrant extreme caution when it comes to space exploration. As he puts it in the article’s conclusion, “every second of delayed colonization [of space] should be seen as immensely desirable, and the longer the delay, the better” (Torres 2018, 84).
Is he right to be so pessimistic? I won’t offer a definitive verdict here. But I will explore the logic of his arguments in some detail.
1. The Context: Responding to the Maxipok Rule
Torres presents his argument in a particular context. The context is, in many ways, incidental (the argument can be understood without it), but since it shapes some of the language Torres uses when setting the argument out, it will help to sketch that context first.
Torres presents his argument as a response to an earlier paper by Nick Bostrom entitled ‘Astronomical Waste: The Opportunity Cost of Delayed Technological Development’. In that paper, Bostrom argued that we should hasten the exploration of space. He defended this conclusion by following a simple impartial utilitarian logic. According to this logic, we ought to maximise the number of sentient beings that can live ‘okay’ lives (i.e. lives that are, on balance, worth living). Doing this will maximise the total amount of utility in the universe. When you then realise that the potential number of future sentient beings is vast (our universe has a long time left on the clock), and that we could ensure that more of them exist by colonising space (because we could then escape the carrying capacity limitations of the Earth), you reach the conclusion that we ought to start colonising space as soon as possible. Every second of delay is an astronomically wasted opportunity.
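The arithmetic behind ‘astronomical waste’ can be sketched in a few lines. The numbers below are purely illustrative placeholders, not Bostrom’s own estimates; the point is only that any non-trivial rate of supportable lives, multiplied over even a short delay, yields a staggering total:

```python
# Toy sketch of the 'astronomical waste' arithmetic.
# Both figures are invented placeholders for illustration only.

SUPPORTABLE_LIVES_PER_SECOND = 10**14  # hypothetical: extra 'okay' lives a colonised region could support per second
DELAY_SECONDS = 60 * 60 * 24 * 365     # one year of delayed colonisation

# On impartial utilitarian logic, every second of delay forgoes this much potential value.
forgone_lives = SUPPORTABLE_LIVES_PER_SECOND * DELAY_SECONDS
print(f"Potential lives forgone by a one-year delay: {forgone_lives:.2e}")
```

Whatever placeholder values you choose, the structure of the argument is the same: the multiplier is so large that delay looks catastrophically wasteful.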
Bostrom admitted that this argument hinged on a number of assumptions. Three of them are important here. The first is that you accept impartial utilitarian principles, which are not everyone’s cup of tea; the second is that you don’t massively discount the value of future lives in your utilitarian calculus, which is something we are usually wont to do; the third is that you assume future populations can avoid existential catastrophe, i.e. some event that would wipe them out or result in significant suffering or torment. These are big assumptions, but if you accept them, Bostrom argues that you would see the wisdom in the ‘maxipok’ rule:
Maxipok Rule: We ought to act so as to maximise the possibility of an okay outcome for ourselves and future civilisations.
Torres’s paper is sceptical of this rule. At a minimum, he thinks it doesn’t speak in favour of space exploration. As an alternative, he thinks we should favour a ‘maximin’ rule, which states that we ought to try to achieve the best worst-case outcome. When this is applied to the question of space colonisation, he thinks it speaks decisively against the idea. The reason for this is that a significant existential catastrophe — specifically a catastrophic intergalactic war — awaits us in space.
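The difference between the two decision rules is easy to make concrete. In this toy sketch (the actions, probabilities and utilities are all invented for illustration), an expected-value rule favours colonisation because of its huge upside, while maximin favours staying put because of its milder worst case:

```python
# Toy contrast between an expected-value rule and the maximin rule.
# All outcomes, probabilities and utilities are invented for illustration.

# Each action maps to a list of (probability, utility) outcomes.
actions = {
    "colonise space": [(0.99, 100), (0.01, -1000)],  # huge upside, small chance of catastrophe
    "stay on Earth":  [(0.99, 10),  (0.01, -50)],    # modest upside, milder worst case
}

def expected_value(outcomes):
    return sum(p * u for p, u in outcomes)

def worst_case(outcomes):
    return min(u for _, u in outcomes)

best_by_ev = max(actions, key=lambda a: expected_value(actions[a]))
best_by_maximin = max(actions, key=lambda a: worst_case(actions[a]))

print(best_by_ev)       # colonise space
print(best_by_maximin)  # stay on Earth
```

The rules diverge exactly where Torres wants them to: maximin ignores how unlikely the catastrophe is and attends only to how bad it would be.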
2. The Intergalactic ‘Warre’ of All Against All
Torres’s pessimism about space colonisation has its roots in Thomas Hobbes’s theory of violence. In his famous work of political philosophy — Leviathan — Hobbes argued that in a state of nature (i.e. in the absence of institutions of government to keep the peace), humans would get trapped in a cycle of violence — a war of all against all. Why so? Well, Hobbes argued that there were three basic motivations for violence:
Competition: People have to compete for scarce resources and will fight each other to gain access to them.
Diffidence: People want to protect themselves and will act violently in order to ensure their own safety (and possibly the safety of others who belong to their families/tribes).
Glory: People want to develop reputations for violence in order to gain access to scarce resources and to discourage people from attacking them. This leads them to launch pre-emptive strikes in order to cultivate a reputation.
In certain environments, these motivations provide the basis for a positive feedback loop: one act of violence begets another and, before you know it, the whole thing spirals out of control.
Hobbes argued that the only way out of this cycle of violence was for everybody to lay down their arms and agree to the creation of a powerful state (a ‘Leviathan’) with a monopoly on the use of force. The state could then prevent outbreaks of violence, enforce a common rulebook of standards, and ensure productive cooperation and coordination among citizens of the state. Hobbes himself favoured an extremely authoritarian style of government — an all-powerful monarch. Most people disagree with him on this, but agree that strong institutions are needed to prevent societal breakdown. These institutions need not come in the form of the state as we now conceive it. In certain small-scale societies, social norms and informal power hierarchies could do the trick, and in some instances markets could maintain order and keep the peace (but they usually require some background set of quasi-legal norms to function well). There are nuances and complexities here, but the gist of Hobbes’s idea — that some sort of ‘Leviathan’ is needed to maintain order — seems fairly robust, particularly in larger societies.
The problem, as Torres sees it, is that there is no way we can create an interstellar Leviathan. Consequently, there is every reason to suspect that a colonised space will descend into a Hobbesian ‘warre’ of all against all. Indeed, Torres goes further and argues that a colonised space will provide conditions that are ripe for a truly apocalyptic war. Not just some minor skirmishes in the outer colonies. The argument, in its basic outline, is as follows:
- (1) If we are to keep the peace in space (and avoid a catastrophic interstellar war), we need to have some sort of interstellar Leviathan (i.e. some set of institutions that can maintain order and prevent us slipping into the Hobbesian trap)
- (2) It will not be possible to construct an interstellar Leviathan of the required sort.
- (3) Therefore, if we colonise space, we cannot avoid the possibility of a catastrophic interstellar war.
I’ve tried to formulate this argument in a way that respects what Torres has to say. You’ll note that the conclusion is somewhat modest: it doesn’t claim that a catastrophic war is definitely going to happen, just that we cannot stop it. I think this is the best way to understand Torres’s argument — particularly in light of his opening discussion of Bostrom and the maxipok rule — but I could be wrong about this. Sometimes it seems that Torres wants to make a stronger claim, viz. that a catastrophic interstellar war is pretty likely if we colonise space. If that’s what he is arguing, it could make his argument less persuasive. I’ll return to this later.
In the meantime, I want to consider Torres’s case for premise (2). Why will an interstellar Leviathan prove so elusive? Although he does not state this explicitly, I think Torres presents three main reasons for thinking this is likely to be the case:
- (4) Colonial Speciation: As different human colonies spread out into space, populating several geographically isolated regions (etc), and facing different adaptive challenges and selection pressures, they will have a propensity to speciate. This will be exacerbated if we have technological control over our biology or if there is increased cyborgisation. These new human (or post-human) species and sub-species could have radically different emotional repertoires and ways of understanding and interacting with the world. This increases the potential for conflict and interspecies tension/suspicion.
- (5) Distance and communication breakdown: The distances between space colonies will be vast. This will make it exceptionally difficult (if not impossible) to create some common institutional structure and rulebook for maintaining the intergalactic peace. What’s more, communication breakdown between the different colonies will be possible (even likely) and this could potentially stir up conflict or tension.
- (6) Future weapons: The space colonies could create advanced weaponry that would allow them to wreak havoc at an intergalactic scale — ‘Weapons of Total Destruction’. Examples include weaponised planets, heliobeams or ‘sun guns’, weaponised particle colliders that create galaxy-swallowing blackholes, and more that we haven’t even been able to think about. The scale and reach of these weapons, combined with the problems of speciation and communication breakdown, will make it difficult to maintain a policy of deterrence or mutually assured destruction.
Torres goes into more detail on each of these three reasons and discusses some elaborate intergalactic conflict scenarios. Many of these are both interesting and provocative. For example, Torres talks about the possibility that some space colonies might, through speciation and cultural change, become gripped by extreme negative utilitarianism and want to eliminate all suffering in the universe by eliminating all suffering creatures. Their advanced, intergalactic weaponry will enable them to do so. If you want to read about more such scenarios, I encourage you to read the article in full.
Torres also discusses some counterarguments to his view. Perhaps, for example, we don’t need a ‘Leviathan’ to keep the peace. Some scholars in international relations have argued that there is no global Leviathan here on planet Earth. The international order of sovereign states is, according to one influential theory, anarchical in nature. And yet, despite this, there are ways to keep the peace. For example, some argue that democratic countries tend not to fight with each other (‘Democratic Peace Theory’) or that countries that engage in mutually beneficial trade tend not to fight with each other (‘Capitalist Peace Theory’). But Torres argues that these mechanisms are unlikely to work at an intergalactic scale because there is no guarantee that space colonies will all be democracies or that they will engage in mutually beneficial trade with one another.
If this is right, we should be wary of space colonisation.
3. Evaluating the Argument
But is it right? I certainly think it is interesting and worth taking seriously. Torres is to be commended for sketching out the possible scenarios in some detail and for thinking about the topic in a rigorous and illuminating way. Like many arguments in the ‘existential risk’ debate, however, it depends on stringing together a number of propositions about cause-and-effect, many of which are relatively far-fetched but each of which is ‘possible’ and, perhaps, minimally plausible, and then showing how they could lead to some truly catastrophic outcome. The idea is that even if this truly catastrophic outcome is highly unlikely, it would be sufficiently bad if it happened that we should do whatever we can to stop it. This is Pascalian/precautionary reasoning at its finest.
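The Pascalian structure can be made explicit with a toy calculation (all figures invented for illustration): once you admit an astronomically bad outcome into the sum, even a vanishingly small probability attached to it swamps everything else.

```python
# Toy illustration of Pascalian/precautionary reasoning: a tiny probability
# attached to an astronomically bad outcome dominates the expected-value sum.
# All figures are invented for illustration only.

p_catastrophe = 1e-9    # 'far-fetched but minimally plausible'
disutility = -1e18      # a catastrophic intergalactic war
benefit_if_fine = 1e6   # value of colonisation if nothing goes wrong

ev_colonise = (1 - p_catastrophe) * benefit_if_fine + p_catastrophe * disutility
print(ev_colonise < 0)  # True: the remote catastrophe swamps the calculation
```

This is exactly why such reasoning is so slippery: almost any activity can be made to look net-negative by conjuring a sufficiently enormous disutility for it.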
But I, for one, have never found this kind of precautionary reasoning particularly persuasive. I’ve argued on another occasion that the possibility-mongering and precautionary reasoning that Bostrom uses to defend the idea that we face an existential threat from AI can lead to problems. If you have to start taking far-fetched, but minimally plausible, possibilities into consideration when thinking about what to do, then there are lots of other daily activities that warrant extreme caution. Indeed, precautionary reasoning of this sort seems like the perfect gateway drug to the kinds of extreme ethical philosophies that worry Torres in the article. It’s people who reason like this who are going to be most inclined to act preemptively to prevent a possible, but highly speculative, threat to their existence. So maybe instead of delaying space colonisation we should be doing everything in our power to prevent this kind of precautionary reasoning from taking root? Maybe we should encourage people to be rosy and irrational optimists? Maybe that’s what will maximise the chances of the best worst-case outcome?
As I have said before, I think we need clearer modal standards in the existential risk debate. How plausible or probable must some possibility be before we have to take it seriously? Something more than minimal plausibility is obviously required, but how much more, and how should it be weighed against other plausible scenarios? If the scenario has to be substantially plausible, then we might take a very different attitude to an argument like Torres’s. For instance, when I think about geographically isolated colonies in space, it seems to me more likely that this would reduce the chances of a Hobbesian trap than increase them. The Hobbesian trap arises when there is conflict over scarce resources and a salient threat from some ‘Other’ who could access those resources. In geographically isolated colonies that don’t rely on or compete for shared resources, that speciate and develop radically distinct cultures, and that don’t communicate with each other often or easily, the risk of conflict seems to me to go down. Why should I care about what my neighbours 200 light years away are doing? Indeed, when you think about it in these terms, staying on the Earth, with its limited carrying capacity and still-growing population, seems like a less safe bet (if what we care about is avoiding a catastrophic Hobbesian ‘warre’). The fact that we may not be able to avoid the possibility of an intergalactic conflict doesn’t outweigh that for me.
This is not to say that we should definitely be exploring space. There are other arguments and objections to be considered, some of which I have discussed before. It’s just that when we indulge in this kind of precautionary reasoning, and try to factor in all the different risks we might face, not just the one that happens to tickle our fancy at a particular moment in time, it gets pretty difficult to figure out what we ought to be doing and how seriously we should be taking any one threat. This is why I find myself less pessimistic than Torres about the case for space colonisation.
Still, maybe I’m one of those rosy but irrational optimists. I’d recommend reading Torres’s article to see if you disagree.