(Introduction and Index)
Note: The hope is that this series will be edited and updated in response to readers’ comments and exposure to more of the relevant literature. So if you have any suggestions, please make them in the comments section.
To freeze oneself or not to freeze oneself? That is the question.
In this post, I’ll take a closer look at the "cryonics dilemma", mapping out the basic contours of the decision-problem faced by anyone thinking about undergoing cryopreservation. This exercise will have two main benefits. First, it will allow us to confront some of the complex, and perhaps neglected, features of the decision. And second, the mapping exercise will provide a framework into which the subsequent ethical arguments can be placed.
Before we get down to serious business, it’s worth dealing with a terminological issue at the outset. As you will have noticed, I have titled this post the “Cryonics Dilemma” and have also structured the post and the series around a dilemmatic question: should we freeze ourselves or not? But this may not be the right way to go. To me, the term “dilemma” denotes a decision problem in which one faces two choices leading to two morally equivalent outcomes — the novel/movie Sophie’s Choice provides a classic example.
The thing is, the cryonics decision problem may not have these key features. The outcomes may not be morally equal, and the choices may not be limited to two. Thus, it might be best not to call this a dilemma. We could, perhaps, call it a false dilemma: something that initially appears dilemmatic but, on closer inspection, is not. But that has other unfortunate connotations, particularly in that it might lead one to trivialise the ethical dimensions of the decision, which is something we want to avoid doing at the outset. The term “the cryonics decision problem” might be the most descriptive and accurate in this context, but it lacks punchiness. So, I’ve stuck with “cryonics dilemma” and added these few cautionary words.
1. Some Elementary Decision Theory
For the purposes of this entry, and for the rest of the series too, we will be analysing the decision to undergo cryopreservation with the tools of (elementary) decision theory. Consequently, we will need to be familiar with some of the key concepts in decision theory. I'll discuss them here; readers already familiar with these concepts are advised to skip to the next section.
Decision theory provides us with various tools for understanding, predicting and guiding decisions. The predictive powers (or lack thereof) of decision theory are irrelevant in this series. We are not concerned with predicting whether or not people will undergo cryopreservation; we are solely concerned with working out whether they ought to do so. Hence, we will be looking at the decision from the perspective of normative decision theory. To do this, we need to look at two things: (i) the tools for formally modelling a decision problem and (ii) the normative axioms or assumptions that guide decision-making.
When modelling decisions, decision theorists typically break them down into four elements: agents, actions, states and outcomes. An agent is the person or entity that makes the decision. An action is a choice (i.e. sequence of bodily movements) that an agent can actually make. A state is any feature of the world that is causally independent of the agent’s actions, but which may affect the outcome of the decision. And an outcome is…well, an outcome is an outcome: it is a possible state of the world after a decision has been made.
One neat tool that decision theorists often use when analysing decision problems is that of the decision tree. This is a diagram that effectively and succinctly illustrates the four elements of a decision problem. Consider the example below. There is a node, which represents a decision point; two branches, which represent the two actions available to the agent; and two outcomes at the end of these branches.
There are other ways of representing decision problems — the decision matrix being the main one — but I’ll stick with the decision tree here. One reason I do so is that the decision tree can capture the sequential nature of some decision problems — i.e. the fact that first you make one decision, which leads to another, and so on — more effectively than the matrix. Also, a decision tree handles probabilistic decisions more effectively (at least in my opinion) than a matrix. If a decision has an uncertain outcome, this can be represented by inserting a new decision node at the end of the relevant branch and by allowing a special agent (Nature) to roll the dice and choose the outcomes according to their respective probabilities. You can think of this as Nature selecting the “state” that the world is going to be in. As below.
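For readers who like to see these structures concretely, a decision tree with a chance node for Nature can be sketched in a few lines of Python. This is purely illustrative: the class names, actions and probabilities are my own placeholders, not part of any model in this post.

```python
import random

class Outcome:
    """A terminal node: a possible state of the world after the decision."""
    def __init__(self, label):
        self.label = label

class ChanceNode:
    """Nature 'rolls the dice': a branch is selected according to its probability."""
    def __init__(self, branches):
        self.branches = branches  # list of (probability, child node) pairs
    def resolve(self):
        r, cumulative = random.random(), 0.0
        for p, child in self.branches:
            cumulative += p
            if r < cumulative:
                return child
        return self.branches[-1][1]

class DecisionNode:
    """A decision point: the agent chooses among the available actions."""
    def __init__(self, actions):
        self.actions = actions  # dict mapping action name -> child node

# A toy tree: one decision node, one chance node, two terminal outcomes.
tree = DecisionNode({
    "act": ChanceNode([(0.5, Outcome("good")), (0.5, Outcome("bad"))]),
    "refrain": Outcome("status quo"),
})
```

Choosing "act" hands control to Nature, which returns one of the two outcomes according to the stated probabilities; choosing "refrain" leads directly to a terminal outcome.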
In addition to all these tools for formally modelling a decision problem, we need to introduce some normative axioms that will help us to “solve” the decision problem. If you ever read the literature on decision theory, you’ll find that there are quite a number of suggested axioms. I’m going to keep things simple here and focus on one key axiom, namely: people ought to choose the action that leads to the (morally) best outcome. Hence, our goal in analysing the cryonics dilemma will be to work out which decision leads to the morally best outcome.
One might object that this needlessly biases our analysis in favour of consequentialism. If the purpose of this series is to examine the ethics of cryonics, it should be open to all ethical theories, be they consequentialist or otherwise. I basically agree with this criticism, but I also tend to think — like Parfit and Ord — that the three dominant strands in ethical theory (consequentialism, deontology and virtue ethics) can be subsumed under a common framework. Thus, I think it is possible — perhaps on a strained interpretation of consequentialism — to incorporate some of the concerns of deontologists and virtue ethicists. So, when we look at the morally best outcomes in this series, we will consider the effects that decisions might have on a person’s character, and on the general rules/duties that we wish people to follow.
2. An Attempt to Model the Cryonics Dilemma
Now that we have outlined the key elements of normative decision theory, we can make a first pass at modelling the cryonics dilemma. On the face of it, the cryonics dilemma seems to confront the agent with a simple binary choice: (i) freeze yourself or (ii) do not freeze yourself. Furthermore, there would appear to be two obvious outcomes to this choice: (a) you are resuscitated and live an extended life ("Life") or (b) you die ("Death"). Thus, we might be tempted to construct the following decision tree.
This is wrong for all sorts of reasons. For starters, the way in which the outcomes are placed at the end of the respective branches is hugely misleading. Obviously, if you freeze yourself, you do not necessarily live; it is a possibility, sure, but one that needs to be represented in probabilistic fashion. In other words, we need to include Nature in this decision tree. Nature will roll the dice and determine whether the cryopreservation achieves its intended aim or not.
Another problem with this diagram is that it may unnecessarily limit the choices available to the agent. Assuming, for the sake of argument, that the ultimate goal is for the agent to extend their life, there may be other ways to do this. In particular, there may be other ways of preserving one’s body with the hope of future resuscitation. Reader gwern pointed out to me that plastination or chemical preservation may be a distinct possibility, one that might even have a higher probability of success than cryopreservation.
While I accept that there may be other choices worth considering in the model, I will not include them in my analysis. I do so for a simple reason: this series is intended to discuss the ethics of self-preservation and resuscitation, not the respective merits of the different forms of preservation and resuscitation. Cryonics is simply chosen as the most widely-known example of such a technology.
So, I’ll simply correct the decision tree here by including Nature as a decision-maker. Nature chooses successful resuscitation with probability p and no resuscitation with probability 1 - p. As follows:
Have we nailed it now? Clearly not. For one thing, we haven’t included the actual probabilities. We’ll talk about that later. More important here is the fact that the possible outcomes arising after the decision not to freeze oneself are underspecified. Clearly, one will die (unless some other form of life extension is available), but to limit the outcomes to death alone is misleading. As many of the anti-cryonics arguments point out, one could do other morally valuable things by not freezing oneself that would not be possible if one chose to freeze oneself. Thus, we need to alter the model to include post-not-freezing choices. I’ll include two here: (a) one does nothing morally valuable (that could not also be done by someone choosing to undergo cryopreservation), and so dies (Death); or (b) one chooses to do something morally valuable (that could not have been done by someone choosing to undergo cryopreservation), which creates a morally valuable outcome but also leads to one’s death. Since the possibly morally valuable outcome is a variable in this model, I’ll simply label it “Opportunity Cost” and fill it in with appropriate examples when they arise.
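The revised model just described (freeze, with Nature deciding between Life and Death; or don't freeze, with the choice of doing nothing or pursuing the opportunity) can be sketched as an expected-value comparison. The utilities and the probability p below are placeholders I have chosen purely to make the arithmetic visible; they are not estimates from this post.

```python
# Expected value of freezing: Nature yields Life with probability p, Death otherwise.
def ev_freeze(p, u_life, u_death):
    return p * u_life + (1 - p) * u_death

# Value of not freezing: the better of the two post-not-freezing choices,
# (a) do nothing extra (Death alone), or (b) pursue the opportunity and then die.
def ev_not_freeze(u_death, u_opportunity):
    return max(u_death, u_death + u_opportunity)

# Illustrative placeholder numbers only:
freeze_value = ev_freeze(p=0.05, u_life=100, u_death=0)
no_freeze_value = ev_not_freeze(u_death=0, u_opportunity=3)
best_choice = "freeze" if freeze_value > no_freeze_value else "don't freeze"
```

With these particular (made-up) numbers the freezing branch wins, but the whole point of the series is that the real values of p, u_life and u_opportunity are contested.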
So here is the (for now) final model of the cryonics “dilemma”:
3. Measuring the Outcomes
Now that we have our model, we can move on and consider how to solve the decision problem. To do that, we need to follow our normative principle: choose the action that leads to the morally best outcome. But how do we know, and how do we measure, which outcome is best?
Adjudicating which outcome is best is a long-standing thorn in the side of the decision theorist. Basically, the goal is to work out which outcomes we (individually or collectively) prefer. The problem is figuring out how to measure our preferred outcomes. Numbers of some sort are needed here in order to take advantage of the mathematical elements of decision theory, but there is a great danger that any numbers we do attach to an outcome end up being erroneous or, worse, misleading.
There are two basic approaches to this measurement problem. One is to use ordinal rankings to adjudicate between outcomes; the other is to attach cardinal values to outcomes.
Constructing an ordinal ranking is a very straightforward process. It simply requires us to state the order in which we prefer one outcome to another. So, in the case of the cryonics dilemma, we would probably say, ceteris paribus, that death is worse than life, i.e. that given a choice we would prefer to live than to die. Once we have that ranking in place, we can attach a number to the respective outcomes, largely for convenience and not for mathematical precision. Thus, assuming the higher the number the better the outcome, we can say that death garners a “0” and life a “1”, and since “1” is better than “0”, we should choose the option that leads to “1” over the option that leads to “0”.
The problem with using an ordinal ranking is twofold. First, it doesn’t allow us to say “by how much” one outcome is preferred to another. Looking purely at the numbers in my ordinal ranking, one might get the misleading impression that life is merely one unit better than death. Many people would dispute that. They might say that life is 100 times better than death, or maybe even more, who knows. The point is that the ordinal ranking simply doesn’t allow us to say anything about the “distance” between our preferences even though we would like to.
The second problem is more serious and flows from the first. It is that the ordinal ranking doesn’t allow us to incorporate any probabilistic calculations into our resolution of the decision problem. But this is precisely what we need to do if we are to successfully resolve the cryonics dilemma. After all, it is far from certain that one will be successfully resuscitated in the future if one signs up for cryonics. Indeed, as others have pointed out, the probability of resuscitation is going to be determined by something akin to the Drake equation. Over on his website, gwern suggests that the probability of resuscitation is found by multiplying the following (presumably independent) probabilities:
The upshot of this is that in order to figure out the value of undergoing cryopreservation, we will need to take the value of the desired outcome (continued life in the future) and discount it (i.e. multiply it) by the probability of resuscitation. This will give us the expected value of the decision to undergo preservation.
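The Drake-equation-style calculation described above (multiply the component probabilities together, then discount the value of continued life by the result) looks like this in outline. The component labels and figures below are placeholders of my own, not gwern's actual list or estimates.

```python
import math

# Placeholder component probabilities; gwern's actual list and figures differ.
components = {
    "preservation is adequate": 0.5,
    "the organisation survives": 0.5,
    "revival technology arrives": 0.5,
    "you are in fact revived": 0.5,
}

# Assuming the components are independent, the probability of
# resuscitation is simply their product.
p_resuscitation = math.prod(components.values())

def expected_value(p, value_of_continued_life):
    """Discount the value of the desired outcome by the probability of revival."""
    return p * value_of_continued_life
```

Because the components multiply, even moderately optimistic figures compound quickly into a small overall probability (here, four 0.5s yield 0.0625), which is why the discounting step matters so much.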
We will then need to compare that expected value with the expected value of the other outcomes, i.e. death and opportunity cost. We might assume that these outcomes are not affected by probabilities since one is definitely going to die (p = 1) and one is in complete control of whether or not one achieves the opportunity cost outcome. But this might not be wise since whether one achieves the opportunity cost might be dependent on a variety of probabilistic factors such as the probability of suffering from weakness of the will and so on.
In any event, if we are to calculate the relevant expected values, and compare them, then we need to adopt a cardinal scale to measure each of the respective outcomes (life, death, and opportunity cost). This scale must represent the real "distances" between the outcomes.
It might be surprisingly difficult to do this. One easy-to-adopt cardinal measurement of the respective outcomes would be “number of extra life years”; however, this could run into problems. Leaving aside the fact that figuring out the likely number of extra life years could itself be difficult, it might also be the case that the value of the opportunity cost outcome cannot be measured in terms of life years. One might also object that the moral complexity of the respective outcomes is missed with such a simple metric.
This leads to the conclusion that there might be no way to provide good cardinal measures for the outcomes. But I don’t see this as being a fatal flaw in the decision theoretic model of the cryonics dilemma. I think we can probably muddle along with qualitative evaluations of the outcomes, and, if need be, attach some reasonable, albeit conservative, estimates to them. For instance, we might say that although we don’t know exactly how much better life is than death, it is at least five times better. As long as we are aware of the limitations involved in these kinds of figures, it shouldn’t be too much of a problem.
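This "conservative lower bound" strategy can be made concrete with a break-even calculation: if continued life is worth at least k times the competing outcome, then freezing has the higher expected value whenever the probability of resuscitation exceeds 1/k. The framing and numbers here are my own illustration of the post's "at least five times better" example.

```python
# If life is worth at least k times the alternative outcome u, then freezing
# beats the alternative whenever p * (k * u) > u, i.e. whenever p > 1/k.
def break_even_probability(k):
    """Smallest resuscitation probability at which freezing breaks even."""
    return 1 / k

# The post's conservative bound: life is "at least five times better".
p_star = break_even_probability(5)
```

So a conservative ratio of 5 implies that cryonics only needs a resuscitation probability above 0.2 to break even against the alternative; the real dispute is then over whether p clears that threshold.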
That brings us to the end of this post. To sum up, the basic tools of decision theory can be applied to the cryonics dilemma. When they are, we can see that the “dilemma” is actually reasonably complex. It is a decision problem involving at least two possible choices and three possible outcomes (continued life, death and opportunity cost). A simple normative parsing of the problem would admonish us to pick whichever choice led to the morally best outcome, but figuring out which outcome is morally best can be tricky due to the lack of a good cardinal measure.