Should humans explore the depths of space? Should we settle on Mars? Should we become a “multi-planetary species”? There is something in the ideal of human space exploration that stirs the soul, that speaks to a primal instinct, that plays upon the desire to explore and test ourselves to the limit. At the same time, there are practical reasons to want to take the giant leap. Space is filled with resources (energy, minerals, etc.) that we can utilise, and threats (solar flares, asteroids, etc.) that we must neutralise.
On previous occasions, I have looked at various arguments defending the view that we ought to explore space. Those arguments fall into three main categories: (i) intellectual arguments, i.e. ones that focus on the intellectual and epistemic benefits of exploring space and learning more about our place within it; (ii) utopian/spiritual arguments, i.e. ones that focus on the need to create a dynamic, open-ended and radically better future for humanity, both for moral and personal reasons; and (iii) existential risk arguments, i.e. ones that focus on the need to explore space both to mitigate existential risks to humanity and to escape them.
For the purposes of this article, let’s assume that these arguments are valid. In other words, let’s assume that they do indeed provide compelling reasons to explore space. Now, let’s ask the obvious follow-up question: does this mean that humans should be the ones doing the exploring? It is already the case that robots (broadly conceived) do most of the space exploration. A handful of humans have made the trip, but since the end of the Apollo missions in the early 1970s, humans have not gone much further than low Earth orbit. For the most part, humans sit back on Earth and control the machines that do the hard work. Soon, given improvements in AI and autonomous robots, we may not do much controlling either. We may just sit back and observe.
Should this pattern continue? Is space exploration, like so many other things nowadays, something that is best left to the machines? In this article, I want to try to answer that question. I do so with the help of an article written by Keith Abney entitled “Robots and Space Ethics”. As we will see, Abney thinks that, with one potentially significant exception, we really should leave space exploration to the machines. Indeed, we might be morally obligated to do so. I’m sympathetic to what Abney has to say, but I still hold out some hope for human space exploration.
1. Robots do it Better: Against Human Space Exploration
Why should we favour robotic space exploration over human space exploration? As you might imagine, the case is easy to state: robots are better at it. They are less biologically vulnerable. They do not depend on oxygen, or food, or water, or a delicate symbiotic relationship with a group of specially-evolved microorganisms, for their survival. They are less at risk from exposure to harmful solar radiation; they are less at risk of infection from alien microorganisms (a major plot point in H.G. Wells’s famous novel War of the Worlds). In addition to this, and as Abney documents, there are several major health risks and psychological risks suffered by astronauts that can be avoided through the use of robotic explorers (though he notes that the small number of astronauts makes studies of these risks somewhat dubious).
This is not to say that robots have no vulnerabilities and cannot be damaged by space exploration. They obviously can. Several space probes have been damaged beyond repair trying to land on alien worlds. They have also been struck by space debris and suffered irreparable damage from general wear and tear. However, the problems encountered by these space probes just serve to highlight the risk to humans. It’s bad enough that probes have been catastrophically damaged trying to land on Mars, but imagine if it were a crew of humans. The space shuttle fatalities were major tragedies. They sparked rounds of recrimination and investigation. We don't want a repeat. All of this makes human space exploration both high risk and high cost. If we grant that humans are morally significant in a way that robots are not, then the costs of human space exploration would seem to significantly outweigh the benefits.
But how does this reasoning stack up against the arguments in favour of space exploration? Let’s start with the intellectual argument. The foremost defender of this argument is probably Ian Crawford. Although Crawford grants that robots are central to space exploration nowadays, he suggests that human explorers have advantages over robotic explorers. In particular, he suggests that there are kinds of in-person observation and experimentation that would be possible if humans were on space missions that just aren’t possible at the moment with robots. He also argues, more interestingly in my opinion, that space exploration would enhance human art and culture by providing new sources of inspiration for human creativity, and would also enhance political and ethical thinking because of the need to deal with new challenges and forms of social relation (for full details, see my summary here).
Although Abney does not respond directly to Crawford’s argument, he makes some interesting points that could be construed as a response. First, he highlights the fact that speculations about the intellectual value of human space exploration risk ignoring the fact that robots are already the de facto means by which we acquire knowledge of space. In other words, they risk ignoring the fact that without them, we would not have been able to learn as much about space as we have. Why would we assume that this trend will not continue? Second, he argues that claims to the effect that humans might be better at certain kinds of scientific investigation are usually dependent on the current limitations of robotic technology. As robotic technology improves, it’s quite likely that robots will be able to perform the kinds of investigations that we currently believe are only possible with human beings. We already see this happening here on Earth with more advanced forms of AI and robotics; it stands to reason that these advanced forms of AI can be used for space exploration too.
The bottom line, then, is that if our reasons for going to space are largely intellectual — i.e. to learn more about the cosmos and our place within it — then robots are the way to go. That said, there is nothing in what Abney says that deals with Crawford’s point about the intellectual gains in artistic, ethical and political thought. To appreciate those gains, it seems like it would have to be humans, not robots, doing the exploring. Perhaps one could respond to this by saying that some of these gains (most obviously the artistic ones) could come from watching and learning from robotic space missions; or that these intellectual gains are too nebulous or vague (what counts as an artistic gain?) to carry much weight; or that they come with significant risks that outweigh any putative benefits. For example, Crawford is probably correct to suggest that space exploration will prompt new ethical thinking, but that may largely be because it is so risky. Should we want to expose ourselves to those risks just so that philosophers can get their teeth into some new ethical dilemmas?
Let’s turn next to the more spiritual/utopian argument for space exploration. That argument focuses on the appeal of space exploration to the human spirit and the role that it could play in opening up the possibility of a dynamic and radically better future. Instead of being consigned to Earth, to tend the museum of human history (to co-opt Francis Fukuyama’s evocative phrase), we can forge a new future in space. We can expand the frontiers of human possibility.
This argument, much more so than the intellectual argument, seems to necessitate human participation in space exploration. Abney almost concedes as much in his analysis, but makes a few interesting points by way of response. First, he suggests that the appeal to the human spirit could be addressed by space 'tourism' and not space 'exploration'. In other words, we could look on human space travel as a kind of luxury good, and not something that we need to invest a lot of public money in. The public money, if it should go anywhere, should go to robotic space exploration only. Second, and relatedly, given the high cost of human space travel, any decision to invest money in it would have to factor in the significant opportunity cost of that investment. In other words, it would have to acknowledge that there are other, better, causes in which to invest. It would, consequently, be difficult to morally justify the investment. Third, he argues that, to the extent that human participation is deemed desirable, we should participate remotely, through immersive VR. This would be a lower cost and lower risk way for vulnerable beings like us to explore the further reaches of space.
I find this last suggestion intriguing. I imagine the idea is that we can satisfy our lust for visiting alien worlds or travelling to distant galaxies by using robotic avatars. We can hook ourselves up to these avatars using VR headsets and haptics, and really immerse ourselves in the space environment at minimal risk to our health and well-being. I agree that this would be a good way to do it, if it were feasible. That said, the technical challenges could be formidable. In particular, I think the time-lag in sending signals to and receiving signals from your robotic avatar would make it practically unwieldy. In the end, we might end up with little more than an immersive but largely passive space simulator. That doesn’t seem all that exciting.
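To get a sense of the scale of that time-lag, here is a rough back-of-the-envelope calculation. The distances are approximate figures I am supplying for illustration (Mars's distance from Earth varies over its orbit), not numbers from Abney's article:

```python
# Approximate one-way signal delays for tele-operating a robotic avatar.
SPEED_OF_LIGHT_KM_S = 299_792  # kilometres per second

distances_km = {
    "Moon": 384_400,
    "Mars (closest approach)": 54_600_000,
    "Mars (farthest approach)": 401_000_000,
}

for body, distance in distances_km.items():
    delay_s = distance / SPEED_OF_LIGHT_KM_S
    if delay_s >= 60:
        print(f"{body}: one-way delay of about {delay_s / 60:.1f} minutes")
    else:
        print(f"{body}: one-way delay of about {delay_s:.1f} seconds")
```

A round trip doubles those figures, so anywhere beyond the Moon your avatar would respond to your commands minutes (or, for other star systems, years) after you issued them. Real-time immersion seems limited to our immediate cosmic neighbourhood.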
2. The Interstellar Doomsday Argument
I mentioned at the outset that despite favouring robotic space exploration, Abney does think that there is one case in which human exploration might be morally compelling, namely: to avoid existential risk.
To be clear, Abney argues that robots can help us to mitigate many existential risks. For example, we could use autonomous robots to monitor and neutralise potential asteroid impacts, or to reengineer the climate in order to mitigate climate change. Nevertheless, he accepts that there is always the chance that these robotic efforts might fail (e.g. a rogue asteroid might slip through our planetary defence system) and Earth might get destroyed. What then? Well, if we had a human colony on another planet (or on an interstellar spaceship) there would be a chance of long-term human survival. Granting that we have a moral duty to prevent the destruction of our species, it consequently seems to follow that we have a duty to invest in at least some human space exploration.
What’s more, Abney argues that we may have to do this sooner rather than later. This is where he makes his most interesting argument, something he calls the “Interstellar Doomsday Argument”. This argument applies the now-classic probability argument for “Doom Soon” to our thinking about the need for interstellar space exploration. This argument takes a bit of effort to understand, but it is worth it.
The classic Doomsday Argument, defended first by John Leslie and then championed by Nick Bostrom and others, claims that human extinction might be much closer in the future than we think. The argument works from some plausible initial assumptions and then applies to those assumptions some basic principles drawn from probability theory. I’m not going to explain the full thing (there are some excellent online primers about it, if you are interested) but I will give the gist of it. The idea is that, if you have no other background knowledge to tell you otherwise, you should assume that you are a randomly distributed member of the total number of humans that will ever live (this is the Copernican assumption or "self-sampling assumption"). You should also assume, if you have no background knowledge to tell you otherwise, that the distribution of the total number of humans that will ever live will follow a normal pattern. From this, you can conclude that you are highly unlikely to be at the extreme ends of the distribution (i.e. very near the start of the sequence of all humans; or very near the end). You can also conclude that there is a highly probable upper limit on the total number of people who will ever live. If you play around with some of the background knowledge about the total human population to date and its distribution, you can generate reasonably pessimistic conclusions about how soon human extinction is likely to occur.
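To make that gist concrete, here is a simplified, Gott-style version of the calculation. It is not Leslie's or Bostrom's Bayesian formulation, and the figures for births to date and per year are rough, commonly cited estimates rather than anything from Abney's paper:

```python
# Simplified, Gott-style illustration of Doomsday reasoning.
# Assumptions: your birth rank is a uniform random draw from all humans who
# will ever live, and roughly 100 billion people have been born so far.
births_so_far = 100e9

# With 95% confidence your birth does not fall in the first 5% of all births,
# so the total number of humans ever born is at most rank / 0.05.
upper_bound_total = births_so_far / 0.05
print(f"95% upper bound on total humans ever born: {upper_bound_total:.1e}")  # ~2e+12

# At an assumed ~140 million births per year, that leaves on the order of
# ten thousand years of future births, not millions.
remaining_years = (upper_bound_total - births_so_far) / 140e6
print(f"Rough remaining time at current birth rates: {remaining_years:,.0f} years")
```

The exact numbers depend heavily on the assumptions you feed in, but this is the sense in which the argument generates "reasonably pessimistic" conclusions about our future.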
That’s the gist of the original Doomsday Argument. Abney uses a variant on it, first set out by John Richard Gott in a paper in the journal Nature. Gott’s argument, using the standard tools of probability theory, applies to the observation of all temporally distributed phenomena, not just one’s distribution within the total population of humans who will ever live. The argument (called the “Delta t” argument) states that:
Gott’s Delta t Argument “[I]f there is nothing special about one’s observation of a phenomenon, one should expect a 95% probability that the phenomenon will continue for between 1/39 times and 39 times its present duration, as there’s only a 5% possibility that your random observation comes in the first 2.5% of its lifetime, or the last 2.5%”
(Abney 2017, 364).
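Where do the 1/39 and 39 come from? Here is a quick sketch of the reasoning behind those numbers (my reconstruction, not Abney's own presentation), writing $t_p$ for the duration that has already elapsed when you observe the phenomenon and $t_f$ for its remaining duration:

$$
f = \frac{t_p}{t_p + t_f} \sim U(0,1) \quad\Rightarrow\quad P(0.025 \le f \le 0.975) = 0.95
$$

$$
t_f = t_p \cdot \frac{1-f}{f} \quad\Rightarrow\quad \frac{t_p}{39} = t_p \cdot \frac{0.025}{0.975} \;\le\; t_f \;\le\; t_p \cdot \frac{0.975}{0.025} = 39\,t_p
$$

In words: if your observation point is uniformly distributed over the phenomenon's lifetime, then with 95% probability it falls in the middle 95% of that lifetime, and the bounds on the remaining duration follow by rearranging.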
Gott originally used his argument to make predictions about how long the Berlin Wall was likely to stand (given the point in time at which he visited it), and how long a Broadway show was likely to remain open (given the point in time at which he watched it). Abney uses the argument to make predictions about how long humanity is likely to last as an interstellar species.
Abney starts with the observation that humanity first became an interstellar species sometime in August 2012. That was when the Voyager 1 probe (launched in 1977) exited our solar system and entered interstellar space. Approximately seven years have elapsed since then (I’m writing this in 2019). Assuming that there is nothing special about the point in time at which I am “observing” Voyager 1’s interstellar journey, we can apply the Delta t argument and conclude that humanity’s status as an interstellar species is likely to last for a further (1/39 x 7 years) to (39 x 7 years). That means that there is a 95% chance that we have only got between 66 days and 273 years left of interstellar existence.
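For the arithmetic-minded, here is the same calculation spelled out, using the roughly seven-year elapsed duration assumed above:

```python
# Gott's Delta t interval applied to humanity's interstellar status.
# Assumption: roughly 7 years elapsed between Voyager 1 entering
# interstellar space (August 2012) and the time of writing (2019).
elapsed_years = 7

lower = elapsed_years / 39   # shortest remaining duration in the 95% interval
upper = elapsed_years * 39   # longest remaining duration in the 95% interval

print(f"Lower bound: ~{lower * 365:.0f} days")   # ~66 days
print(f"Upper bound: {upper:.0f} years")         # 273 years
```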
That should be somewhat alarming. It means that we don’t have as long as we might think to escape our planet and address the existential risks of staying put. In fact, the conclusion becomes more compelling (and more alarming) if we combine the Doomsday argument with thoughts about the Great Silence and the Great Filter.
The Great Silence is the concern, first set out by Enrico Fermi, about the apparent absence of intelligent alien life in our galaxy. Fermi’s point was that if there is intelligent life out there, we would expect to have heard something from it by now. The galaxy is a big place, but it has existed for a long time, and any intelligent species with a desire to explore it would have had ample time to do so by now. This has since been reinforced by calculations showing that if an intelligent species used robotic probes to explore the galaxy (specifically, self-replicating Von Neumann probes), it would take only a few hundred million years to ensure that every star system had at least one such probe in it.
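For a sense of scale, here is a toy version of that calculation. Every parameter below (probe speed, hop distance, replication pause) is an illustrative assumption of mine rather than a figure from Abney or the original literature, but even with sluggish probes the galaxy gets covered quickly relative to its age:

```python
# Toy estimate of how long a wave of self-replicating probes needs to sweep
# the galaxy. All parameters are illustrative assumptions, not sourced figures.
GALAXY_DIAMETER_LY = 100_000        # approximate diameter of the Milky Way
probe_speed_c = 0.01                # assumed probe speed: 1% of light speed
hop_distance_ly = 10                # assumed distance between successive stops
replication_pause_years = 1_000     # assumed time to build the next probe

hops = GALAXY_DIAMETER_LY / hop_distance_ly
years_per_hop = hop_distance_ly / probe_speed_c + replication_pause_years
total_years = hops * years_per_hop

print(f"Time to cross the galaxy: ~{total_years / 1e6:.0f} million years")
# ~20 million years with these numbers; even far more pessimistic assumptions
# stay well within "a few hundred million years", a tiny fraction of the
# galaxy's ~13-billion-year age.
```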
The Great Filter is the concern, first set out by Robin Hanson, about what it is that prevents intelligent species from exploring the universe and making contact with us. Working off Fermi’s worries about the Great Silence, Hanson argued that if intelligent life has not made contact with us yet (or left some sign or indication of its existence) then it must be because there is some force that prevents it from doing so. Either species tend not to evolve to the point that their intelligence enables them to explore space, or they destroy themselves when they reach a point of technological sophistication, or they just don’t last very long when they reach the interstellar phase (there are other possibilities too).
Whatever the explanation of the Great Silence and the Great Filter, the fact that there do not appear to be other interstellar species, and that we do not know why, should give us reason to think that our current interstellar status will be short-lived. That might tip the balance in favour of human space exploration.
Before closing, it is worth noting that Doomsday reasoning of the sort favoured by Abney is not without its critics. Several people have challenged and refined Gott’s argument over the years, and Olle Häggström, in his 2016 book Here Be Dragons, argues that the Doomsday Argument is fallacious and an unfortunate blight on futurist thinking.