Friday, July 23, 2021

Ethical Behaviourism and the Moral Risks of Human-Robot Relationships


Over the past few years, I have occasionally defended a position called 'ethical behaviourism'. Ethical behaviourism holds that when it comes to determining the moral status of our relationship with another being, behaviour is a sufficient form of evidence for establishing that status. In its simplest form, we could say that ethical behaviourism states that if something looks and behaves like a duck, then you ought to treat it as morally equivalent to a duck. Given my philosophical interests, I have tended to apply the theory to our relationships with robots and AI, suggesting that if those entities behave in the right ways, then we can have morally significant relationships with them, but the theory applies generally to all kinds of moral relationships with all kinds of entities.

I have never thought of ethical behaviourism as an interesting or controversial theory. It just strikes me as an obvious and, in some ways, unavoidable way of understanding our moral relations with other beings. I find it odd that others disagree. But disagree they do, often quite vociferously.

Some of the criticisms strike me as being mistaken or misguided -- targeting a theory that I do not defend. Others strike me as being more legitimate. In some cases, these criticisms raise interesting philosophical questions in their own right. One such criticism concerns the role that moral risk should play in our approach to different moral relationships. Roughly, the idea is this: is there not a danger of being over- or under-inclusive when it comes to recognising certain moral relationships? For instance, surely falsely excluding someone from the circle of moral concern is a greater moral error than falsely including them? Therefore, at least in that case, if there is any doubt about an entity's moral status, we should err on the side of over-inclusivity, in order to minimise moral risk.

This sounds like a plausible argument, at least on the face of it. Some people have already used a version of it to make the case for including robots and AI in the moral community. Erica Neely, in her 2014 paper "Machines and the Moral Community", makes explicit appeal to such reasoning, arguing that we should be generous when it comes to recognising the moral status of machines. But what I want to suggest in this article is that things are not quite so simple. Moral risk should play some role in how we think about these questions, but the role it plays will vary depending on the type of moral status with which we are concerned. To make progress on this question, then, we need a meaningful account of the different moral risks involved in over- or under-ascribing these different moral statuses. I'll make some initial progress in that direction in this article, suggesting that the moral risks of recognising certain human-robot relationships are often overstated, but that there is no simple answer to the question of whether we should err on one side or the other. A lot will depend on the context in which we need to make such a decision.

In the course of defending this view, I will offer a brief summary of my case for ethical behaviourism, addressing some of the standard criticisms of it, before plunging into a longer discussion of moral risk and how it should affect the use of ethical behaviourism in relation to robots.


1. A Quick Overview of Ethical Behaviourism

As noted, ethical behaviourism is the view that when it comes to determining the moral status of our relationships with other beings, behaviour is a sufficient form of evidence for establishing that relationship status. To take a more concrete example, if you want to know whether an entity can suffer harms and is therefore deserving of some basic moral concern, you can look to its behaviour to answer that question. If it looks and acts like it can suffer, then you should treat it as if it does.

On my view, ethical behaviourism is entailed by mental property-based views of moral status. Most philosophical theories of moral status hold that other beings have moral status in virtue of things like intentionality, sentience, self-awareness, moral autonomy and so on. All of these properties are linked to the mind. The problem, of course, is that we cannot directly perceive the minds of others. If we want to know whether someone has intentionality, sentience, self-awareness and moral autonomy, we look to their behaviour. If they look and act as if they have those properties, then we should treat them as if they do. Behaviour is the window into the mind, or at the very least, one of the best windows into the mind.

Although I see ethical behaviourism as being tied to mental property views of moral status, I have, in the past, suggested that it can 'float free' of any particular view of moral status. In other words, even if we are unsure of the exact mental property (or set of mental properties) that grounds status, behaviour is going to be a sufficient source of evidence for those properties. Consequently, if an entity is sufficiently sophisticated in its behaviours, it will satisfy any such theory. This is something for which critics have taken me to task. Part of the problem may have been the way in which I formulated one of the earlier defences of this position. I believe I said that ethical behaviourists could deploy a 'behavioural equivalence' test for moral status and thus be 'strictly agnostic' as regards the exact ontological underpinning of moral status. I don't think that's quite right anymore. They cannot be strictly agnostic, but they can be relatively agnostic regarding any mental property view.

There are many criticisms of ethical behaviourism. I won't review them all here. The typical ones will argue that behaviour is not the sole source of evidence for moral status. Evidence regarding underlying biological constitution or neural/cognitive mechanisms is, according to some critics, more important. I've addressed these kinds of criticisms at length in some of the papers I have published. The only thing I will say about them here is that (i) ethical behaviourism is about the sufficient conditions for moral status, not the necessary ones, so it holds open the possibility of other forms of evidence being determinative of moral status; and (ii) notwithstanding this point, many of these other forms of evidence are, in my view, either validated through behavioural evidence or trumped by behavioural evidence. So behaviour remains one of the best sources of evidence for moral status.

In addition to these critiques, there are some confusions about ethical behaviourism that are worth clearing up. Three are worth mentioning. First, ethical behaviourism is an epistemic thesis about the kinds of evidence we can use to determine moral status; it is not an ontological thesis about the actual grounding of moral status. This is implicit in what I have already said but it is worth spelling out explicitly. On my view, it could well be that sentience is what ultimately grounds moral status, it is just that behaviour is what provides sufficient evidence of sentience. Second, ethical behaviourism is an ethical theory; it is not a descriptive or scientific theory. I am not claiming that the mind reduces to behaviour, or that we shouldn't care about the underlying neural or cognitive mechanisms of behaviour. Of course we should care about those things. I am just claiming that behaviour is more important when it comes to determining moral status. Third, although I have spoken so far only about the role of ethical behaviourism in establishing 'moral status' (a phrase with a particular meaning in philosophy), I believe that it applies more generally to all manner of relationship statuses. For example, I have defended the use of ethical behaviourism to determine whether humans can be friends with robots, or whether humans can have loving, intimate relationships with robots. The reason for this broader applicability is that philosophical accounts of those relationships often claim that certain mental properties are crucial to determining their existence. For example, goodwill and mutual affection are crucial to both friendship and love. These are mental properties. Once again, we use behaviour to establish the presence of those mental properties.

That’s the gist of ethical behaviourism. For the remainder of this article, I’ll focus on the relationship between ethical behaviourism and moral risk.


2. Risk Asymmetry Arguments

There has been a recent surge of interest in the role of moral risk and uncertainty in our decision-making. Ted Lockhart kickstarted this interest with his 2000 book Moral Uncertainty and Its Consequences. Lockhart’s key insight is easy enough to grasp. Decision theorists have long argued that we should factor predictive uncertainty into decision-making. If we don't know whether it is going to rain later on today, then we should consider the probability that it will and the likely cost to us of getting wet if it does. Depending on what those probabilities and costs are, we will decide whether or not to take an umbrella with us when we go out. Lockhart and those that have followed in his wake argue that we should do the same when it comes to moral uncertainties. We often aren't sure what the right thing to do is, but we should try to factor that uncertainty into our moral decision-making.
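To make the decision-theoretic picture concrete, here is a minimal sketch of the umbrella case in Python. The probabilities and costs are numbers I have invented purely for illustration; nothing in the decision-theoretic literature fixes these particular values.

```python
# A minimal sketch of decision-making under uncertainty, with made-up numbers.
# Expected cost of an option = sum over outcomes of P(outcome) * cost(outcome).

p_rain = 0.3                 # assumed probability of rain
cost_carrying = 1            # minor inconvenience of carrying an umbrella (arbitrary units)
cost_getting_wet = 10        # much worse (arbitrary units)

expected_cost_take = cost_carrying                 # paid whether or not it rains
expected_cost_leave = p_rain * cost_getting_wet    # only paid if it rains

print(expected_cost_take, expected_cost_leave)     # 1 vs 3.0 -> take the umbrella
```

The point is only that, once the probabilities and costs are on the table, the choice falls out of a simple expected-cost comparison.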

This basic insight has generated a rich body of scholarship over the past two decades, some of it quite complex in nature. The main focus of this literature is to identify meta-moral decision-rules that tell us what we ought to do when we don't know what we ought to do. Many of these decision rules are variations on decision rules that have long been discussed by decision theorists. For example, Lockhart argued for something akin to the 'maximise expected benefit' rule in cases of moral uncertainty. Others argue that we should maximise the expected choiceworthiness of our decisions.

The minutiae of these decision rules do not concern us here. One of the chief motivating examples for proponents of moral uncertainty is the idea that some critical moral choices involve substantial risk asymmetries. In other words, there are cases in which we are unsure what the right thing to do is, but one option carries most of the moral risk. Given this asymmetrical distribution of risks, the claim is that we should avoid that option.

This might not make much sense in the abstract so consider a concrete example: the decision to eat meat. Let's assume, for a moment, that we are uncertain about the moral status of animals and the morality of farming and slaughtering such animals. We know that there are reasons to think that animals suffer and that modern farming methods are not good for their well-being and flourishing. We also know that many people think that factory farming is an ongoing moral catastrophe -- something that our grandchildren will look back on with a mixture of shame and incredulity. How could we be so cruel? But we are still not entirely convinced. We think there are personal and social benefits to eating meat that are not easily substituted (maybe we have read Vaclav Smil's book on the matter) and we are not sure that animals suffer in the same way as humans or that their status is such that they are wronged by farming and slaughtering them. So the upshot is that there is some moral uncertainty attached to the decision to eat meat. It might be permissible or it might be contributing to a great moral catastrophe.

Here is where the risk asymmetry argument comes into play. When you think about it, in the case of meat-eating, most of the moral risk is on one side of the ledger. By eating meat you might be contributing to a moral catastrophe, perpetuating the suffering of millions of innocent creatures. Sure, you gain some nutritional benefit from doing so, and the experience can be quite pleasurable, but you can survive and thrive on a plant-based diet, and the benefits are not so great that they outweigh the risks of a moral catastrophe. Thinking about it in this way makes it clear what you ought to do, despite your initial uncertainty.

Abstracting away from the specific details of the meat-eating dilemma, we can extract a basic argument template that can be deployed in all cases involving significant risk asymmetries:


  • (1) You are faced with a choice between Option A and Option B but are unsure which of those two options is morally acceptable (permissible, obligatory etc).
  • (2) In order to know which of the two options is acceptable, you would need to know moral fact X (which you don't know).
  • (3) If X were true, then Option A would clearly be morally unacceptable and Option B would be acceptable.
  • (4) If X were false, then both A and B would be acceptable but the relative benefits of A would be minimal.
  • (5) You ought to try to minimise expected moral costs (at least where those costs are obvious).
  • (6) Therefore, despite your uncertainty with respect to X, you ought to choose Option B over Option A.

Simpler formulations of the argument are possible, but this one is useful when we turn to consider the moral status of non-humans and the application of ethical behaviourism. Obviously what's interesting about the risk asymmetry argument is that it attempts to capture the relative costs of making false positive and false negative moral errors. What do I mean by this? I mean that it captures the risk of believing fact X to be true when it is not (the false positive risk) versus the risk of believing it to be false when it is not (the false negative risk). Applied to the meat-eating example, proponents of the argument are claiming that the false positive risk (believing that animal suffering counts for a lot when it doesn't) is a lot lower than the false negative risk (believing that it doesn't count when it does).
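To see how the template cashes out numerically, here is a rough sketch of premise (5) applied to the meat-eating case. All of the figures (the credence in X and the moral costs of each option) are invented for the purposes of illustration; the argument itself does not depend on these particular values.

```python
# A hedged sketch of the risk asymmetry template (premises 1-6 above), with
# invented numbers for the meat-eating case. 'X' = the claim that animal
# suffering counts heavily, morally speaking.

p_x = 0.5                 # assumed credence that X is true

# Moral costs of each option conditional on X (arbitrary units)
cost_A_if_x = 100         # eat meat while X is true: contribute to a moral catastrophe
cost_A_if_not_x = 0       # eat meat while X is false: no moral cost
cost_B_if_x = 1           # abstain while X is true: forgo a modest benefit
cost_B_if_not_x = 1       # abstain while X is false: same modest forgone benefit

expected_cost_A = p_x * cost_A_if_x + (1 - p_x) * cost_A_if_not_x   # 50.0
expected_cost_B = p_x * cost_B_if_x + (1 - p_x) * cost_B_if_not_x   # 1.0

# Premise (5): minimise expected moral cost -> choose B despite uncertainty about X.
print("Choose B" if expected_cost_B < expected_cost_A else "Choose A")
```

Notice that even with a much lower credence in X, Option B retains the lower expected moral cost (until the credence becomes very small indeed). That is the risk asymmetry doing its work.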

There are a number of criticisms of risk asymmetry arguments. Dan Moller argues, for example, that we should ignore very small risks of being wrong. It is only when there is some substantive risk that we should start to factor it in. For instance, there is a small risk that the pebbles in my driveway are conscious and that I cause them great pain every time I drive out. After all, panpsychism might be true. But the risk seems so small as to not be worth taking seriously. The problem then, of course, is that we get into a debate about which risks are really small and which are sufficiently large to be worth factoring into a risk asymmetry argument. This could lead to some intractable conflicts. I'm well aware, for example, that some people think the chances of a robot or AI having some morally significant status are on a par with the chances that the pebbles in my driveway are conscious. I think ethical behaviourism gives us a way to assess the relative likelihood of this, but they may continue to disagree. 


Perhaps conflicts of this sort are unavoidable and risk asymmetry arguments can only work from subjective probability estimates. I won't attempt to resolve this issue here. Instead, I will assume we can say something meaningful about the probabilities in question and use this as the basis for assessing different risk asymmetry arguments.


3. Risk Asymmetry Arguments and Robot Moral Statuses

Let's bring it back to ethical behaviourism and, more specifically, the application of ethical behaviourism to our relationships with robots. For ethical behaviourism to have any practical utility, the basic insight has to be translated into some kind of standard or test for determining whether the behavioural evidence warrants belief in the existence of some kind of moral status. In previous work, I have called this standard the 'performative threshold' that must be crossed before an entity 'counts' for some moral purpose. I have never spelled out exactly what that threshold is because it is likely to vary from context to context, and it is also likely to vary as a function of your underlying theory of what matters, ontologically speaking, for moral purposes. If you think the capacity to suffer is what matters, then your performative threshold is likely to be different from that of someone that thinks that the capacity for reflective moral judgment is what matters. I have never been overly invested in these details because I have been more concerned with the general point that behaviour is a sufficient basis for believing in moral status if you adopt a mental-capacity-based theory of status.

But it is worth thinking about the performative threshold in some detail since that is where the intellectual rubber (the basic idea of ethical behaviourism) meets the road (practical application to disputed cases of moral status). As I see it, when you are dealing with a disputed case of moral status -- e.g. whether a robot can suffer -- you work through analogies: is this entity sufficiently like another entity whose moral status is undisputed -- e.g. humans, who we agree can suffer? So, in other words, in disputed cases the most natural way to proceed is through some kind of 'performative equivalence' test. Is X sufficiently like Y with respect to the properties that we think are relevant?

But there are different levels at which this equivalence test could be set. These are similar to the 'sensitivity' levels of other scientific tests. We could have:


Robust performative equivalence: The two entities must be equivalent to each other in multiple ways, across different environments, retests and contexts.

Moderate performative equivalence: The two entities must be equivalent to each other in several ways and across some environments, retests and contexts.

Minimal performative equivalence: The two entities must be equivalent to each other in a few ways and in perhaps one or two environments and contexts, with no need for retest.

These are crude distinctions but you get the idea. What is interesting about these different tests from the present perspective is that they can be understood as responses to the different levels of moral risk involved in recognising that an entity has some moral status. If the false positive risk is very high -- i.e. if you would make a serious moral error by assuming that X had some particular moral status when it did not -- then you might favour a robust version of the test. If the false negative risk is very high -- i.e. you would make a serious moral error by assuming that X did not have some particular moral status when it did -- then you might favour a minimal version of the test.
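To make the connection between risk asymmetry and test choice vivid, here is a purely illustrative sketch. The cost figures, the cost-ratio cut-offs and the function itself are my own assumptions, not anything defended in the article; the point is simply that the stringency of the test can be indexed to which kind of error would be morally worse.

```python
# A rough, purely illustrative sketch of how the performative threshold might
# respond to asymmetric moral risks. The numbers and the mapping from cost
# ratios to thresholds are assumptions for the sake of the example.

def pick_threshold(false_positive_cost: float, false_negative_cost: float) -> str:
    """Pick a performative-equivalence standard given the relative moral costs of
    over-ascribing (false positive) vs under-ascribing (false negative) a status."""
    ratio = false_negative_cost / false_positive_cost
    if ratio > 2:      # under-ascription much worse -> easier test to satisfy
        return "minimal performative equivalence"
    elif ratio < 0.5:  # over-ascription much worse -> demand more evidence
        return "robust performative equivalence"
    return "moderate performative equivalence"

# Example: if wrongly excluding a suffering being is judged far costlier than
# wrongly including a non-suffering one, a lower (minimal) threshold is favoured.
print(pick_threshold(false_positive_cost=1, false_negative_cost=10))
```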

This is where risk asymmetry arguments get their foothold in the debate about the moral status of robots. It is tempting to apply those arguments in a simple and straightforward way but, as I shall now argue, the risk asymmetries may vary a lot depending on the kind of moral status that is under dispute.

So what are the different kinds of moral status that might be under dispute? As noted previously, in my work I have looked at two: (i) basic moral status; (ii) friendship and love. Let's look at both of these in a little more detail, examining the costs and benefits of different moral errors. This will enable us to consider the different performative standards that should apply to these different statuses.


(i) Basic moral status

This is the ground zero of the debate. Basic moral status arises whenever a being is an object of moral concern. In other words, it is not just a tool or thing that you can treat however you please but it has moral interests of its own. It can be harmed and benefited by your actions and so you have to factor that into your decisions. A husk of corn, for instance, does not have basic moral status but a newborn baby does.

Calling this 'basic moral status' is, however, a bit misleading. Moral status is not a simple binary thing but, rather, a spectrum of different possibilities and grades of moral status. We might agree that a newborn lamb and a newborn baby have moral status, but we might reasonably disagree as to whether they have equal moral status. Most people (I suspect) would argue that the newborn baby has a higher moral status than a newborn lamb. It has more capacities and potential interests and is thus deserving of a higher standard of moral care. I don't wish to get too enmeshed in the different possible grades of moral status here, but it is something to keep in mind. The main dispute in the literature tends to be about when or whether an entity attains a similar moral status to a human being and is thus deserving of a similar level of moral protection. This is what is usually in dispute when people ask whether an entity belongs in our moral community or not.

Tying it back to the theme of this article, what are the risks/rewards of making an error when it comes to ascribing basic moral status to a robot? The risks of making a false negative error -- denying moral status to an entity that deserves it -- seem pretty high. Indeed, some people argue that the history of human moral progress is the history of overcoming false negative errors of exclusion. Several philosophers have written about moral progress in these terms. A standard example that they give is that the abolition of slavery can be seen as a recognition that slave populations deserve the same moral status as non-slave populations.

The risks of the false negative error are twofold. First, there is the direct harm to the entities that are denied moral status. They are denied basic moral rights and respect, and they may also be treated cruelly and inhumanely. Second, there are various indirect harms that result from the exclusion. These could arise in different ways. There are, for instance, studies suggesting that cruelty towards animals correlates with cruelty towards humans (the correlation is referred to in the literature as “The Link”). This might suggest that if we continue to exclude animals we could perpetuate an unnecessary lack of compassion and care within the present moral community. This depends, of course, on how we treat the excluded population. We can exclude without being cruel and inhumane. I'm pretty indifferent toward most ants, but I don't think I'm cruel to them. They leave me alone and I leave them alone.

These false negative risks look high and this has persuaded some people to think we should err on the side of over-inclusivity when it comes to basic moral status. But these risks need to be balanced against the false positive risks. What's the harm of caring for something that does not need to be cared for? The typical arguments here are expressed in terms of the opportunity costs associated with lost time and attention. For instance, one of the most popular critiques of the robot rights debate is that it sucks up scholarly attention. We ought to be focusing on human welfare and human well-being, and how this is negatively impacted by AI and robotics, not on whether robots deserve moral care. The whole debate is a bit like people caring about the plight of locusts while millions of humans starve and suffer. Similarly, there are those that argue that excessive moral concern for robots will prevent us from using robots in a way that benefits humanity. This is how I understand certain parts of Joanna Bryson's famous claim that robots should be slaves (though she rejects that terminology now).

Against this, however, there may be some benefits to making false positive errors. It could be, for example, that being compassionate towards robots increases our level of compassion towards others, even if the robots do not deserve this level of concern. The studies on animal cruelty may be relevant here. It could be that all those people that care about the welfare and well-being of animals are wasting their time: animals do not deserve this level of concern. But at least they are not being cruel and inhumane to animals and treating animals as a training ground for cruelty to humans. That said, the empirical literature on the psychology of the moral circle paints a somewhat mixed picture. Some research seems to support the point I just made. For example, in a literature review, Daniel Crimston and his colleagues note that:


Across multiple studies, greater moral expansiveness was associated with increased empathic concern, perspective taking, moral identity, identification with all of humanity, connection with nature, endorsement of universalism values, and increased use of harm and fairness principles as foundations for moral decision making.
(Crimston et al 2018, p 16)

 

But a fascinating recent study by Joshua Rottman and his colleagues — with the delightful title “Tree-Huggers versus Human Lovers” — suggests that we do have a limited budget of moral concern and that increased concern for one group may come at the expense of reduced concern for another. Specifically, Rottman et al found that some people care more about non-human animals and the environment than they do about marginalised human communities. One interpretation of this is that these people reduce moral concern for marginalised human populations in order to make mental space for moral concern toward animals and the environment. The study is limited. Rottman et al were more interested in finding out whether people with this moral attitude existed than in how prevalent they are in society. But it does suggest that there could be false positive risks to recognising the moral status of non-humans.

I'm not sure what all this means when it comes to the performative threshold for basic moral status. I remain sceptical of those that push the false positive risks associated with opportunity costs. It's not obvious to me that care and concern for animals or robots must come at a cost to care and concern for humans. It sounds plausible to me that there is some kind of complementarity effect when it comes to compassion: the more compassion the better. That said, I have to acknowledge that the research doesn’t always support this optimism.

Still, on balance, when it comes to basic moral status, I'm inclined to say that the risks are (slightly) more asymmetrical on the false negative side and this favours a lower performative threshold than we might otherwise be inclined to use.


(ii) Friendship and Love

I'm going to treat friendship and love as a pair, not because I think they are the same thing, but because I think they share enough features for present purposes. There is, in any event, a long tradition of treating them as closely related. For instance, there is the classic distinction in the Greek tradition between philia and eros, both of which are species of love, the former applying to friendships and the latter to intimate relationships.

Friendship and love are complex phenomena and there are many different accounts of the conditions that must be satisfied in order for someone to count as a friend or a lover. For example, Aristotle's famous analysis of the concept of friendship claims that there are three main types of friendship: utility friendship, pleasure friendship and virtue friendship. The first type arises where the friends use one another for some instrumental gain; the second arises where they derive pleasure from their interactions with one another; and the third, which is more complex, arises when the friends 'share a life' with one another and have consistent and ongoing feelings of good will toward one another. The third category, according to Aristotle, was the most meaningful. Most philosophical discussions of friendship begin and end with Aristotle, though his is not the only account. Love is similarly complex. In their discussion of human-robot love, Nyholm and Frank identify three different accounts of what it might take to be in a loving relationship with another, varying in terms of whether the other is a good match for your personality, the strength of your mutual commitment, and your affection for their distinctive characteristics.

A full analysis of the potential risks and rewards of human-robot love and friendship would have to contend with each of these accounts. I am not going to do that here. Instead, I just want to focus on two core aspects of friendship and love, that tend to be shared across most accounts. First up is the need for mutual goodwill between friends and lovers. On most accounts of friendship and love, it is agreed that in order for two people to be true friends or lovers, they must have some degree of mutual affection and good feeling toward one another. They must like each other, feel positive about their interactions and desire good things to happen to one another. Not all the time (that would be an impossible standard) but most of the time. It is the sincerity of these feelings that is often taken to be the true mark of friendship and love. It is also this need for mutual goodwill that, in my view, opens the door for ethical behaviourism. Mutual goodwill is a mentalistic property. Many people doubt whether robots could have the mentalistic properties that sustain mutual goodwill. But if I am right, this is something that can be evidenced at the behavioural level. If a robot looks and acts like it has goodwill towards you, then you are probably justified in believing that it does. You are in the same epistemic boat when it comes to human friends and lovers anyway. They might not like you as much as you think. There is always some doubt. You have to judge them by their behaviour.

This brings me to the second aspect of friendship and love: it is high risk/high reward. Friends and lovers are among the most valuable things that people can have in life. Many accounts of the good life include friendship and intimacy as basic human goods. They are usually thought to be intrinsic goods -- worth having in their own right -- as well as instrumental goods -- things that can unlock an array of additional benefits. Indeed, there is a large body of research detailing the instrumental benefits of intimacy and friendship for physical health, psychological well-being, social inclusion and much more (references). At the same time, our friends and lovers can betray us and let us down. Broken relationships are often painful and can leave emotional scars that last a lifetime. Abusive relationships can be even worse. If you get very close to someone, you run the risk of them doing you great harm. But if you keep everyone at arm’s length, you miss out on part of what makes life worth living.

The high risk/high reward nature of friends and lovers has interesting implications for the risk asymmetry argument. It is tempting to suppose that the high risks warrant extra caution when it comes to recognising the existence of such a relationship. If false friends and lovers can hurt you, then you better err on the side of false negatives rather than on the side of false positives. This may be taken to justify a high performative threshold. But the high reward nature of such relationships cuts against this logic. If you have so much to gain, and if you would be living a less optimal life without friends and lovers, why not be more open to them?

To resolve this tension, specifically when it comes to robot friends and lovers, we need to think more carefully about how the risks and rewards play out for people that might be thinking about forming such a relationship with a robot. There might be much to gain from such relationships, but how significant this potential gain is probably depends on the opportunity cost associated with forming that relationship. Again, the typical argument from the critic of such relationships will be that if you form such a relationship with a robot, you will miss out on forming such a relationship with a human. Since, on balance, relationships with humans are assumed to be superior to relationships with robots, the argument then concludes that we should discourage human-robot relationships, even if they are, in principle, possible.

There are, however, three problems with this argument. First, human-robot relationships may not be inferior to human-human relationships. This belief is, arguably, a holdover from the assumption that such relationships are impossible and hence devoid of all value. Some people claim to have much more valuable relationships with their pets than they do with fellow humans. It is not implausible to suspect that something similar could be true for some people with their relationships with robots. Second, even if human-robot relationships are inferior, people that are inclined to such relationships may not be missing out on much. There is a body of empirical research suggesting that people that score high on anthropomorphism tend to be more socially isolated and lonely. We might infer from this that such people are less likely to form significant relationships with other humans. So, in their case, it is not a simple choice between human relationships and robot relationships. It is, rather, a choice between robot relationships and no relationships at all. Third, and finally, the opportunity cost argument may not hold true in many cases. We may not have to 'give up' human relationships in order to form relationships with robots. It could well be that robot relationships complement human relationships or can be pursued in parallel to them. The counterargument to this will be that there is some upper limit on the number of friends and lovers we can -- or should -- have (e.g. Dunbar’s number). But I’m sceptical about the relevance of such limits to this debate. In any event, those limits are probably sufficiently high that most people could accommodate a few robot relationships without any overall loss in human relationships.

What about the false positive risks of robot relationships? Well, as noted, the risks associated with human friends and lovers are usually cashed out in terms of insincerity, betrayal and being let down. You thought that someone loved only you, but it turns out they have been having multiple affairs. You thought that someone was your friend, but it turns out they have been spreading nasty rumours about you behind your back. You really needed someone to be there for you during a difficult time, but they decided to ignore you. Do these risks also apply to robot friends and lovers? A lot of people think that the insincerity risk is intrinsic to human-robot relationships. The idea is that a robot cannot be your friend because they lack the right state of mind. They are always inauthentic. But if I am right about ethical behaviourism, this is not a good objection to human-robot relationships. Whether they are authentic or not is something that is to be assessed through behaviour — as it usually is for human-human relationships — not the presence or absence of some magical and unobservable inner mental state. That leaves us with the risk of betrayal and being let down. My own view is that the risk of being betrayed by robots is significant, at least given the way in which robots are currently designed and operated. Robots are created by companies, they use proprietary, cloud-based AI, and they usually collect data on their users that is used by the company and third parties. This data collection and transfer, in particular, presents a major risk of betrayal. It is also possible that robots could be hacked and used to extract data contrary to the intentions of the original creators or even used to manipulate or harm you. Again, similar risks are present in human-human friendships (plenty of friends have ‘betrayed’ me in some sense) so the relative risks here are unclear. The risk of being let down by a robot may correlate with the risk of the robot being hacked or manipulated. That said, one hope with robots is that they would be more consistent and reliable than humans. Thus, it could well be that robots score lower on this type of false positive risk.

I am not sure how to balance all of these potential risks and rewards. I’m not sure it can be done in the abstract. A lot will depend on the particular person (their degree of social isolation; their need for friends etc) and the particular robotic system (its security and safety record; its features). For some people and some systems, the false negative risks will outweigh the false positives; for others the opposite will be true. The idea that someone like me could, from the armchair, decide the issue might be an instance of intellectual hubris.


Monday, July 19, 2021

93 - Will machines impede moral progress?


Thomas Sinclair (left), Ben Kenward (right)

Lots of people are worried about the ethics of AI. One particular area of concern is whether we should program machines to follow existing normative/moral principles when making decisions. But social moral values change over time. Should machines not be designed to allow for such changes? If machines are programmed to follow our current values will they impede moral progress? In this episode, I talk to Ben Kenward and Thomas Sinclair about this issue. Ben is a Senior Lecturer in Psychology at Oxford Brookes University in the UK. His research focuses on ecological psychology, mainly examining environmental activism such as the Extinction Rebellion movement of which he is a part. Thomas is a Fellow and Tutor in Philosophy at Wadham College, Oxford, and an Associate Professor of Philosophy at Oxford's Faculty of Philosophy. His research and teaching focus on questions in moral and political philosophy.

You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

 


Show Notes

Topics discussed include:

  • What is a moral value?
  • What is a moral machine?
  • What is moral progress?
  • Has society progressed, morally speaking, in the past?
  • How can we design moral machines?
  • What's the problem with getting machines to follow our current moral consensus?
  • Will people over-defer to machines? Will they outsource their moral reasoning to machines?
  • Why is a lack of moral progress such a problem right now?


Relevant Links


Friday, July 9, 2021

92 - The Ethics of Virtual Worlds


Are virtual worlds free from the ethical rules of ordinary life? Do they generate their own ethical codes? How do gamers and game designers address these issues? These are the questions that I explore in this episode with my guest Lucy Amelia Sparrow. Lucy is a PhD Candidate in Human-Computer Interaction at the University of Melbourne. Her research focuses on ethics and multiplayer digital games, with other interests in virtual reality and hybrid boardgames. Lucy is a tutor in game design and an academic editor, and has held a number of research and teaching positions at universities across Hong Kong and Australia.

You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).


Show Notes

Topics discussed include:

  • Are virtual worlds amoral? Do we value them for their freedom from ordinary moral rules?
  • Is there an important distinction between virtual reality and games?
  • Do games generate their own internal ethics?
  • How prevalent are unwanted digitally enacted sexual interactions?
  • How do gamers respond to such interactions? Do they take them seriously?
  • How can game designers address this problem?
  • Do gamers tolerate immoral actions more than the norm?
  • Can there be a productive form of distrust in video game design?

Relevant Links

Wednesday, June 30, 2021

91 - Rights for Robots, Animals and Nature?



Should robots have rights? How about chimpanzees? Or rivers? Many people ask these questions individually, but few people have asked them all together at the same time. In this episode, I talk to a man who has. Josh Gellers is an Associate Professor in the Department of Political Science and Public Administration at the University of North Florida, a Fulbright Scholar to Sri Lanka, a Research Fellow of the Earth System Governance Project, and Core Team Member of the Global Network for Human Rights and the Environment. His research focuses on environmental politics, rights, and technology. He is the author of The Global Emergence of Constitutional Environmental Rights (Routledge 2017) and Rights for Robots: Artificial Intelligence, Animal and Environmental Law (Routledge 2020). We talk about the arguments and ideas in the latter book.

You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).


Show Notes


Topics covered include:
  • Should we even be talking about robot rights?
  • What is a right? What's the difference between a legal and moral right?
  • How do we justify the ascription of rights?
  • What is personhood? Who counts as a person?
  • Properties versus relations - what matters more when it comes to moral status?
  • What can we learn from the animal rights case law?
  • What can we learn from the Rights of Nature debate?
  • Can we imagine a future in which robots have rights? What kinds of rights might those be?

Relevant Links


Monday, June 28, 2021

The Shape of Techno-Moral Revolutions: Lessons from Carlota Perez


Adapted from Perez, Technological Revolutions and Financial Capital


One thing that is always daunting about scholarship is the sheer incomprehensible vastness of it. We strive for originality and novelty in research, but this is hard to achieve. So much has been written about so many topics that it is not unusual to find that one's hard-earned 'insights' have been pipped by someone else's hard-earned insights of three decades ago.

I felt a bit like this recently when I read Carlota Perez's book Technological Revolutions and Financial Capital. The book reviews five historical technological revolutions, and the impact they have had on our economies and our social structures. It also examines the role of financial capital in fuelling bubbles and speculation around novel technologies. It's a fascinating ride, the centrepiece of which is a general theory about the structure and progression of technological revolutions.

Although not exactly the same, I found that much of what Perez had to say resonated with my own thinking about technology and moral revolutions (which has, admittedly, become something of an obsession of late). In the following article, I want to tease out some of those points of resonance. This serves the dual function of both summarising key aspects of Perez's book and showing how they can be mined for insights on other topics.

I am going to focus on four key insights in what follows. I'll start by summarising Perez's take on them. I'll then consider their relevance for the study of techno-moral revolutions. I'll also offer some critical reflections along the way.


1. The Five Revolutions

Before I get into the four specific insights, it is worth offering a brief overview of Perez's theoretical framework. As noted, Perez looks at five technological revolutions that have occurred since the dawn of the Industrial Revolution, and the impact they have had on our economies and societies. Perez defines a technological revolution as:


...a powerful and highly visible cluster of new and dynamic technologies, products and industries, capable of bringing about an upheaval in the whole fabric of the economy and of propelling a long-term upsurge of development. It is a strong interrelated constellation of technical innovations, generally including an important all-pervasive low-cost input, often a source of energy, sometimes a crucial material, plus significant new products and processes and a new infrastructure.
(Perez 2002, 8)


There is a lot going on in that definition, and we will unpack some of it as we go along. For now, the crucial point is that, according to Perez, there have been five such revolutions in the past 250 or so years. They are:


  • (a) The 'Industrial Revolution', which began primarily in Britain in the late 1700s and arose from developments in mechanisation in cotton, wrought iron and other industries.
  • (b) The Age of Steam and Railways, which again began primarily in Britain in the 1830s (roughly) but spread rapidly to the European Continent and the USA, and arose from developments in steam power and railways.
  • (c) The Age of Steel, Electricity and Heavy Engineering, which began primarily in the USA and Germany in the 1870s, and arose from developments in steel manufacturing (replacing iron) and electrical and chemical engineering.
  • (d) The Age of Oil, Automobiles and Mass Production, which began primarily in the USA in the 1910s, and depended on developments in fossil fuel energy production, the internal combustion engine and scientific management of manufacturing industries (e.g. the motor car production line).
  • (e) The Age of Information and Telecommunication, which began primarily in the USA in the 1970s, and arose from developments in computers, telecommunications and microelectronics.

Perez's book was published in 2002, just as the Age of Information seemed to have entered full steam. She offers minimal speculations on what the next likely technological revolution will be (she hints at biotech and AI as the obvious possibilities). Her gaze is mainly historical. She claims that each of these historical revolutions takes about 50-70 years to fully exhaust itself. During that period, each revolution passes through two main phases, each of which is broken down into two sub-phases: (i) the installation phase (which involves an initial irruption of the technology, followed by a frenzy of excitement and investment); and (ii) the deployment phase (which involves synergy between different uses of the technology and maturity in the full economic and social exploitation of the technology).

Each of these revolutions inevitably ends and a new round of technological innovation kicks off. In the mid-point of each revolution, there tends to be a major economic crash after the initial bubble of excitement comes to an abrupt halt.

As I say, Perez's book is insightful and thought-provoking. But people looking for a rigorous defence of the model she proposes might be disappointed. Her focus is not so much on defending a particular mechanical explanation of social and economic history through the rigorous use of data but, rather, on giving us a new way to look at social history. I found four of her insights particularly useful and I want to reflect on how they shed light on techno-moral revolutions.


2. The Importance of Visible Attractors

One of Perez's key points is that the technologies that kickstart a revolution often have a long gestation period. The basic concept or idea can be around for years before it really takes off. The steam engine may be an example of this. Primitive versions of the steam engine were around for a long time before the age of the railways began. Similarly, the basic idea of the computer was around for nearly a century before the age of information began in earnest. Charles Babbage and Ada Lovelace came up with a workable design and an early model of software coding in the nineteenth century, and more modern machines date from the late 1940s.

Why did it take so long for the potential of these technologies to be fully appreciated? Perez argues that every revolution requires some initial, highly visible 'attractor' to get started:


...it is suggested here that for society to veer strongly in the direction of a new set of technologies, a highly visible 'attractor' needs to appear, symbolizing the whole new potential and capable of sparking the technological and business imagination of a cluster of pioneers.
(Perez 2002, 11)

 

Examples given include Arkwright's Cromford mill opening in 1771 (kickstarting the Industrial Revolution), and Stephenson’s 'Rocket' locomotive being used on the Liverpool-Manchester railway (kickstarting the Age of Steam and Railways).

Do visible attractors and pioneers help kickstart moral revolutions too? I think they might. There are often terrible moral tragedies and injustices that awaken the ethical conscience and shock it into a new way of thinking. For example, in his work on the structure of moral revolutions, Robert Baker suggests that the medical ethics revolution in the latter part of the 20th century was kickstarted by revelations regarding Nazi experimentation on concentration camp victims during WWII. The horror of what took place made people realise that some ethical (and ultimately legal) limits had to be placed on medical practice. Similar attractors played a role in the civil rights movement in the US. Rosa Parks's refusal to give up her bus seat; Martin Luther King's speech at the Lincoln Memorial. These were all focal points for social moral attention and helped catalyse a revolution.

It is worth disentangling two different kinds of visible attractor. First, there are morally significant events -- these are the historical occurrences that awaken the moral conscience and tip it into a new mode of thinking. Second, there are the moral pioneers -- these are specific individuals that help symbolise a new mode of moral thinking.

Are there any visible attractors that might be helping to kickstart new techno-moral revolutions? When it comes to morally significant events, there have been some scandals in the past decade or so that captured the public imagination and made them more aware of the ethical consequences of digital technologies. The Snowden Leak and the Cambridge Analytica scandal are two obvious examples. These scandals helped to highlight the pervasiveness of digital surveillance and the potential manipulative power of predictive analytics. Neither scandal was all that shocking to people familiar with the underlying technologies. The problems had been known and written about for decades. But these scandals captured social moral attention in a way that decades of scholarship did not. It's too early to say whether they have kickstarted a moral revolution, or not, but they have certainly ignited policy debates and influenced legal responses.

What about moral pioneers? I have written in the past about contemporary cyborgs as moral pioneers. I'm thinking in particular here about people like the artist Neil Harbisson, who is famous for having an antenna attached to his skull that converts light waves into sound. This allows him to hear in colour (a kind of technologically facilitated synaesthesia). I'm also thinking of Steve Mann, who is famous for his 'eyetap' which is a prosthetic attachment that augments his visual field in a variety of ways. I see these people as early pioneers in the cyborg mode of life, showing the rest of us what might be possible and desirable about it. I also see them as moral pioneers because they actively fight for the rights of cyborgs and people that want to pursue a non-human or post-human form of life. They are showing the rest of us the potential moral errors of strict human exceptionalism. That said, I would be the first to admit that the popularity of these moral pioneers is currently too small to kickstart a moral revolution. If more prominent and well-known figures follow their lead, this may happen.

The point is not to get too wedded to these particular examples. The point, which I take from Perez, is that particular events or individuals can play an important role in highlighting new moral possibilities and changing social moral practices. It's worth being on the lookout for these events and individuals.


3. The Emergence of a New Techno-Moral Paradigm

The second key insight I take from Perez's work relates to her idea of a 'techno-economic paradigm'. She defines this as:


...a best practice model made up of a set of all-pervasive generic technological and organizational principles, which represent the most effective way of applying a particular technological revolution and of using it for modernizing and rejuvenating the whole of the economy. 
(Perez 2002, 15)

 

This is an abstract definition. The gist of the idea is that a set of new technologies comes along that allows businesses and economic actors to reorganise or rearrange their practices in a way that best exploits the economic potential of those new technologies. When they do this, they adopt a new 'paradigm'.

I recently discussed an example of a new techno-economic paradigm emerging in the world of car sales in the early 2000s. I won't rehash the details here - you can read the original article for that - but in essence the idea was that internet sales platforms changed the way that people bought and sold cars. Instead of car-selling being a mainly in-person activity in which a naive and often intimidated customer would attend a car dealership and be subjected to all manner of sharp negotiation practices and hard selling, the internet shifted it to being a largely online, over-the-phone business, with greater equality between buyer and seller and fewer sharp practices. Admittedly, this example might be too narrow to constitute a whole new techno-economic paradigm, but the lessons learned from this example are generalisable. The rise of the internet and globalised supply chains has (and continues to) change the way in which the retail industry operates.

What about new techno-moral paradigms? Can they emerge too? I would say 'yes' and they are often equivalent to or part of techno-economic paradigms. New ways of doing business generate new power relationships, new expectations, and new duties. This requires a new moral paradigm. But moral life does not begin and end in the market and so techno-moral paradigms are likely to affect non-economic aspects of life too (there is a debate to be had about the dividing line between economic and non-economic aspects of life; we won't engage with that debate here).

One way to think about techno-moral paradigms is to use the idea of an 'affordance', which is popular in technological studies and behavioural ecology. The basic idea is that humans live in environments that afford them different possibilities for action (or, to put it another way, environments that contain different affordances). New technologies often generate new affordances. The world in which the automobile exists is a world with very different possibilities for action than the world in which it does not. Each of these new affordances generates a set of moral questions. Should we take advantage of the action possibility? Do we have an obligation to do so? Clusters of related technologies obviously generate long lists of these questions. As we answer them, a new techno-moral paradigm emerges.

Are there any examples of this process at work? Perhaps. In another recent article, I took a long look at the impact of contraception and home appliances (washing machines, microwaves etc) on moral attitudes toward extramarital sex and women's role in society. Again, I won't rehash all the details here -- read the full thing if you want them. The core idea from that article, however, was that both sets of technologies changed the cost-benefit ratio for certain decisions. Contraception, particularly the Pill, the latex condom and the IUD, dramatically reduced the risk of extramarital pregnancy and the stigma and social punishment associated with it. Home appliances reduced the amount of time required for certain household chores. Since women were the ones that bore these costs or performed these chores, these technologies had a particularly dramatic effect on their lives. Of course, it wasn't all plain sailing. Prejudices and taboos die hard, but eventually, over the course of the 20th century, the use of these technologies generated a new moral paradigm. It became permissible, and in some cases morally expected, to use these technologies and to take advantage of the possibilities they afforded. The shame and stigma associated with extramarital sex (and also extramarital pregnancy) reduced to a whimper, and women pursuing careers outside the home (not driven by economic necessity) became tolerated and celebrated.

There are other examples too. A more controversial one concerns the rise of digital surveillance and predictive analytics. These technologies enable much greater tailoring of services to individuals. This is true both in business and in government. The rapid collection and processing of personal data allows retailers, for example, to build personal profiles of shoppers, and predictive analytics enables them to make recommendations based on those profiles. In some ways, this allows for much greater convenience and, potentially, a boost in well-being for customers. But this comes at the cost of greater intrusions into privacy and the gradual erosion of autonomy. These technologies have generated a new techno-economic paradigm -- commonly referred to as 'surveillance capitalism' thanks to Shoshana Zuboff -- and this paradigm has generated a set of related moral questions. Should we take the hit to privacy and autonomy in return for convenience and well-being? Or must we fight back to protect privacy and autonomy? The precise answers to these questions remain elusive. In some parts of the world, convenience seems to be winning out over privacy. In other parts of the world, elaborate legal frameworks have been created to eke out some space for privacy. I won't comment on the merits of this debate here. What's interesting from the present perspective is how the set of digital surveillance technologies is forcing the creation of a new techno-moral paradigm.


4. The Importance of Institutional Reform to Accommodate a New Paradigm

A third key idea from Perez's analysis is the problem of resistance between old and new paradigms. This resistance often occurs at the institutional level. When new technological revolutions occur, they rapidly generate new possibilities for action. People respond to these new possibilities at a local and individual level. For example, people start taking their mobile phones with them in their cars; they start texting while driving. They fill out the space of possibilities in short order. Eventually, this generates a new set of social norms and practices, often codified and enforced by a legal system, but in the early days social institutions tend to lag behind the technological possibilities. This generates a lot of friction and conflict:


Societies are profoundly shaken and shaped by each technological revolution and, in turn, the technological potential is shaped and steered as a result of intense social, political and ideological confrontations and compromises. 
(Perez 2002, 22)

 

Ultimately, the old institutional paradigm will need to adapt if the full potential of the technological paradigm is to be unleashed. But there isn't necessarily a single best institutional reform. There are often different ideological possibilities that compete, sometimes violently, for control. Perez argues that this is exactly what happened during the fourth of the revolutions she discusses (the revolution in oil, automobiles and mass production). This fourth revolution created the possibility of mass production and mass consumption. But how should society be organised to take advantage of those possibilities?


The unleashing of the 'golden age' based on the mass-production technologies of the fourth paradigm that had been diffusing since the 1910s and 1920s demanded institutions facilitating massive consumption, by the people or by the governments. Only in such a context could full flourishing be achieved. At the time, Fascism, Socialism and Keynesian democracies were set up as very different socio-political models giving impulse to growth processes based on mass production and consumptions. 
(Perez 2002, 24)

 

In the West, Keynesianism won out and became the post-WWII consensus until the late 1970s. By then, the next revolution was underway.

What's interesting about these examples is that those three ideologies were competing for control of political morality. They were defining the preferred relationship between citizens and states. What were citizens expected to do for the state (be productive workers; contribute to national armies etc)? And what were governments expected to do for citizens (provide a social safety net; boost demand during economic slumps; etc)? So this conflict between the new techno-economic paradigm and the old institutional order is, at its heart, a kind of techno-moral conflict.

Are we in the midst of a new one right now? I have written about this in recent years, focusing in particular on the impact of mass automation on our existing social contract. Whereas in the mid-20th century the fight was about how to retool the state for the age of mass consumption, now the fight might be about how to retool the state for the age of mass leisure. As the latest wave of automation takes hold, the percentage of the adult population needed to keep the economy running may decline. Many people may be underemployed or have no jobs at all. To take advantage of the economic potential of mass automation, a social reckoning may be in order. For instance, redistributive policies may need reform to compensate for the loss of income associated with automation (and to prevent a collapse of consumer demand). The basic income guarantee is the most widely discussed of these policy reforms. In addition, education and training systems may also need reform. The goal of such institutions may no longer be to train the next generation of workers but to encourage the pursuit of knowledge for its own sake, or to develop citizens' civic responsibilities and sense of public duty (a la Ancient Athens). I'm not claiming that educational systems don't already do these things, but a re-prioritisation may be in order. Finally, and more generally, an ideological debunking of the work ethic may be required in order to rid us of the notion that a fully functioning adult is a fully engaged worker. There is more to life than work, and this new techno-economic paradigm may allow us to realise it.

These are themes that I explore in more detail in my book Automation and Utopia, albeit not through the lens of Perez's work.


5. The Pattern of Revolutions

The fourth key insight I take from Perez is the pattern she identifies in each historical technological revolution. As mentioned above, Perez argues that there are two main phases to each revolution -- the installation phase and the deployment phase -- that each of these is broken down into two further sub-phases, and that at the mid-point of the cycle there is a crash/turning point. The full sequence is thus: Irruption -> Frenzy -> Turning Point -> Synergy -> Maturity. Do these patterns occur in techno-moral revolutions too?

Before we answer that question, it is worth noting that this is probably the most intellectually dubious part of Perez's work. Carving history at its joints and suggesting that there are distinct patterns to technological development and growth is problematic. You can do it, and with a sufficiently flexible interpretation it is possible to make the facts fit the pattern, but it is unlikely that you are uncovering some deeper law of historical progress, and you may have to ignore incompatible data. David Edgerton, in his book The Shock of the Old, criticises this tendency in histories of technology. He thinks they focus too much on innovation and invention and not enough on how old technologies linger and proliferate. This, he argues, leads to a false view of history in which new technological eras have distinct boundaries that mark them off from old ones. The reality is much messier. History is like a canvas that keeps getting painted over: the old colours remain and affect the new picture that emerges.

This scepticism is worth bearing in mind. But despite it, I still think there is some value in putting some order on history. We don't have to kid ourselves into thinking we have discovered a universal law of social evolution, but the exercise might help us make sense of what has gone on and what is going on right now. We shouldn't hold onto the pattern too tightly, and we should be open to revising it in light of new information; but nor should we discard it too quickly and assume that history is just one damned thing after another.

With those caveats in mind, do techno-moral revolutions follow a similar pattern to the one identified by Perez? Well, there's a simple argument for thinking that they might. If, as I suggested above, techno-economic paradigms often generate new techno-moral paradigms (new actions, new power relationships, new duties and expectations), and if techno-economic revolutions unfold according to the pattern identified by Perez, then it stands to reason that at least some techno-moral revolutions follow this pattern too. If we return to the example of surveillance capitalism as a techno-moral revolution, we may see some evidence of it following this pattern. First came a technological revolution in digital surveillance and predictive analytics (the irruption). Second, there was a frenzy of excitement among corporations, governments and (to some extent) individual users (particularly, say, proponents of self-tracking and quantification). These actors rushed to exploit the full potential of the technology, exploring different use cases with fervour (the frenzy). Then there was a reckoning: scandals revealing the moral costs of this new paradigm (the turning point). This should lead to a synergy between the moral rules and the new technologies. If I were to guess, I would say we are living through this phase now. People are far more aware of the risks of surveillance capitalism, and new institutional mechanisms are being designed to mitigate those risks. This could, in time, lead to a new and relatively stable moral paradigm (balancing the benefits of the technology against the costs to privacy and autonomy), which would constitute the maturity phase of the techno-moral revolution.

One of Perez’s claims is that all techno-economic revolutions exhaust themselves in due course. The profits that were once attainable with the technology eventually dissipate and economic actors seek out other opportunities. One might wonder whether techno-moral revolutions suffer a similar fate. Do moral revolutions eventually exhaust themselves? On the face of it, you would say “no”. Techno-economic revolutions are driven by market incentives. They run out of steam when the profits start to decline. There is no equivalent incentive pressure in the moral sphere.

But is that really true? I think some moral revolutions probably do head in one direction only. I haven’t written about this much myself, but several philosophers have written about moral progress as entailing the expansion of the circle of moral concern — from family, to tribe, to nation, to empire, to humanity, to ecosphere. If they are right then, despite some setbacks, the general trend of moral progress is in one direction only: toward an ever-expanding circle of concern.

But maybe that is a misleading and overly idealistic way to think about it? It could be that those expansions of concern are, in part, evidence of the exhaustion of a previous moral paradigm (e.g. when humanism is exhausted, we turn to animal rights and ecocentrism). David Runciman has gestured at a similar idea in his book How Democracy Ends. He argues that representative democratic governance may have passed its middle age and be on a decline toward death. Why? Because such democracies have one big idea at their heart: you grow the moral legitimacy of the state by extending the voting franchise. Since the franchise has now been fully extended in most democratic regimes (children and prisoners being among the last holdouts), there is little new space for them to explore. This makes them vulnerable to ideological attack. New methods of participatory governance may hold off the decline for a while, but they too have their limits, not least the fact that most people have neither the time, nor the inclination, nor the luxury to participate in a meaningful way. Digital technologies are often lauded as potential saviours of participatory governance, but they have yet to yield clear benefits in this regard.

These are just half-baked thoughts, but they would be worth pursuing in more detail. Perhaps some moral revolutions never exhaust themselves but others do? The question is whether we can successfully identify which ones.


Thursday, June 10, 2021

Axiological Futurism: The Systematic Study of the Future of Values



Here's a new paper that I have forthcoming in the journal Futures. This paper has had a long gestation. I wrote it more than two and a half years ago. At the time, I thought it was one of my more interesting pieces. Apparently, journal editors disagreed. Vehemently. The paper was rejected by four different journals before finally, on the fifth try, finding a home. I still think it is among the more interesting and important pieces I have written. It makes the case for 'axiological futurism', which is the study of the future of values. This links to my ongoing work on technology and moral revolutions. See what you think of it. Links to prepublication versions are available below. The final version will be open access (thanks to Ireland's new open access publishing agreements with Elsevier et al.) and I will post it once it is available.


Title: Axiological Futurism: The Systematic Study of the Future of Values

Links: Official; Philpapers; Researchgate; Academia

Abstract: Human values seem to vary across time and space. What implications does this have for the future of human value? Will our human and (perhaps) post-human offspring have very different values from our own? Can we study the future of human values in an insightful and systematic way? This article makes three contributions to the debate about the future of human values. First, it argues that the systematic study of future values is both necessary in and of itself and an important complement to other future-oriented inquiries. Second, it sets out a methodology and a set of methods for undertaking this study. Third, it gives a practical illustration of what this ‘axiological futurism’ might look like by developing a model of the axiological possibility space that humans are likely to navigate over the coming decades. 

 

 


Wednesday, June 2, 2021

Interviews about Automation and Utopia


I did a few interviews about my book Automation and Utopia over the past year. Once upon a time I was meticulous in documenting and recording all of them on this website (admittedly more for my own records than for the benefit of readers). For some reason, I have lapsed in this practice recently. Anyway, here's my attempt to correct for this oversight with a list of recent interviews. If you want to learn more about the book, check them out:


Tuesday, June 1, 2021

The Technological Future of Love




Here's a new draft paper. This one was co-authored with Sven Nyholm and Brian Earp. It is about the role that technology can and will play in reshaping the value of love. It is forthcoming in an edited collection entitled Love: Past, Present and Future. You can access a preprint version of the paper at the links below.

Title: The Technological Future of Love

Authors: Sven Nyholm, John Danaher, Brian Earp

Links: Philpapers; Researchgate; Academia

Abstract: How might emerging and future technologies -- sex robots, love drugs, anti-love drugs, or algorithms to track, quantify, and 'gamify' romantic relationships -- change how we understand and value love? We canvass some of the main ethical worries posed by such technologies, while also considering whether there are reasons for "cautious optimism" about their implications for our lives. Along the way, we touch on some key ideas from the philosophies of love and technology.