
Friday, July 23, 2021

Ethical Behaviourism and the Moral Risks of Human-Robot Relationships


Over the past few years, I have occasionally defended a position called 'ethical behaviourism'. Ethical behaviourism holds that when it comes to determining the moral status of our relationship with another being, behaviour is a sufficient form of evidence for establishing that status. In its simplest form, we could say that ethical behaviourism states that if something looks and behaves like a duck, then you ought to treat it as morally equivalent to a duck. Given my philosophical interests, I have tended to apply the theory to our relationships with robots and AI, suggesting that if those entities behave in the right ways, then we can have morally significant relationships with them, but the theory applies generally to all kinds of moral relationships with all kinds of entities.

I have never thought of ethical behaviourism as an interesting or controversial theory. It just strikes me as an obvious and, in some ways, unavoidable way of understanding our moral relations with other beings. I find it odd that others disagree. But disagree they do, often quite vociferously.

Some of the criticisms strike me as being mistaken or misguided -- targeting a theory that I do not defend. Others strike me as being more legitimate. In some cases, these criticisms raise interesting philosophical questions in their own right. One such criticism concerns the role that moral risk should play in our approach to different moral relationships. Roughly, the idea is this: is there not a danger of being over- or under-inclusive when it comes to recognising certain moral relationships? For instance, surely falsely excluding someone from the circle of moral concern is a greater moral error than falsely including them? Therefore, at least in that case, if there is any doubt about moral status, we should err on the side of over-inclusivity in order to minimise moral risk.

This sounds like a plausible argument, at least on the face of it. Some people have already used a version of it to make the case for including robots and AI in the moral community. Erica Neely, in her 2014 paper "Machines and the Moral Community", makes explicit appeal to such reasoning, arguing that we should be generous when it comes to recognising the moral status of machines. But what I want to suggest in this article is that things are not quite so simple. Moral risk should play some role in how we think about these questions, but the role it plays will vary depending on the type of moral status with which we are concerned. To make progress on this question, then, we need a meaningful account of the different moral risks involved in over- or under-ascribing these different moral statuses. I'll make some initial progress in that direction in this article, suggesting that the moral risks of recognising certain human-robot relationships are often overstated, but that there is no simple answer to the question of whether we should err on one side or the other. A lot will depend on the context in which we need to make such a decision.

In the course of defending this view, I will offer a brief summary of my case for ethical behaviourism, addressing some of the standard criticisms of it, before plunging into a longer discussion of moral risk and how it should affect the use of ethical behaviourism in relation to robots.


1. A Quick Overview of Ethical Behaviourism

As noted, ethical behaviourism is the view that when it comes to determining the moral status of our relationships with other beings, behaviour is a sufficient form of evidence for establishing that relationship status. To take a more concrete example, if you want to know whether an entity can suffer harms and is therefore deserving of some basic moral concern, you can look to its behaviour to answer that question. If it looks and acts like it can suffer, then you should treat it as if it does.

On my view, ethical behaviourism is entailed by mental property-based views of moral status. Most philosophical theories of moral status hold that other beings have moral status in virtue of things like intentionality, sentience, self-awareness, moral autonomy and so on. All of these properties are linked to the mind. The problem, of course, is that we cannot directly perceive the minds of others. If we want to know whether someone has intentionality, sentience, self-awareness and moral autonomy, we look to their behaviour. If they look and act as if they have those properties, then you should treat them as if they do. Behaviour is the window into the mind, or at the very least, one of the best windows into the mind.

Although I see ethical behaviourism as being tied to mental property views of moral status, I have, in the past, suggested that it can 'float free' of any particular view of moral status. In other words, even if we are unsure of the exact mental property (or set of mental properties) that grounds status, behaviour is going to be a sufficient source of evidence for those properties. Consequently, if an entity is sufficiently sophisticated in its behaviours, it will satisfy any such theory. This is something for which critics have taken me to task. Part of the problem may have been the way in which I formulated one of the earlier defences of this position. I believe I said that ethical behaviourists could deploy a 'behavioural equivalence' test for moral status and thus be 'strictly agnostic' as regards the exact ontological underpinning of moral status. I don't think that's quite right anymore. They cannot be strictly agnostic, but they can be relatively agnostic regarding any mental property view.

There are many criticisms of ethical behaviourism. I won't review them all here. The typical ones argue that behaviour is not the sole source of evidence for moral status. According to some critics, evidence regarding underlying biological constitution or neural/cognitive mechanisms is more important. I've addressed these kinds of criticisms at length in some of the papers I have published. The only thing I will say about them here is that (i) ethical behaviourism is about the sufficient conditions for moral status, not the necessary ones, so it holds open the possibility of other forms of evidence being determinative of moral status; and (ii) notwithstanding this point, many of these other forms of evidence are, in my view, either validated through behavioural evidence or trumped by behavioural evidence. So behaviour remains one of the best sources of evidence for moral status.

In addition to these critiques, there are some confusions about ethical behaviourism that are worth clearing up. Three are worth mentioning. First, ethical behaviourism is an epistemic thesis about the kinds of evidence we can use to determine moral status; it is not an ontological thesis about the actual grounding of moral status. This is implicit in what I have already said but it is worth spelling out explicitly. On my view, it could well be that sentience is what ultimately grounds moral status, it is just that behaviour is what provides sufficient evidence of sentience. Second, ethical behaviourism is an ethical theory; it is not a descriptive or scientific theory. I am not claiming that the mind reduces to behaviour, or that we shouldn't care about the underlying neural or cognitive mechanisms of behaviour. Of course we should care about those things. I am just claiming that behaviour is more important when it comes to determining moral status. Third, although I have spoken so far only about the role of ethical behaviourism in establishing 'moral status' (a phrase with a particular meaning in philosophy), I believe that it applies more generally to all manner of relationship statuses. For example, I have defended the use of ethical behaviourism to determine whether humans can be friends with robots, or whether humans can have loving, intimate relationships with robots. The reason for this broader applicability is that philosophical accounts of those relationships often claim that certain mental properties are crucial to determining their existence. For example, goodwill and mutual affection are crucial to both friendship and love. These are mental properties. Once again, we use behaviour to establish the presence of those mental properties.

That’s the gist of ethical behaviourism. For the remainder of this article, I’ll focus on the relationship between ethical behaviourism and moral risk.


2. Risk Asymmetry Arguments

There has been a recent surge of interest in the role of moral risk and uncertainty in our decision-making. Ted Lockhart kickstarted this interest with his 2000 book Moral Uncertainty and Its Consequences. Lockhart’s key insight is easy enough to grasp. Decision theorists have long argued that we should factor predictive uncertainty into decision-making. If we don't know whether it is going to rain later on today, then we should consider the probability that it will and the likely cost to us of getting wet if it does. Depending on what those probabilities and costs are, we will decide whether or not to take an umbrella with us when we go out. Lockhart and those that have followed in his wake argue that we should do the same when it comes to moral uncertainties. We often aren't sure what the right thing to do is, but we should try to factor that uncertainty into our moral decision-making.

This basic insight has generated a rich body of scholarship over the past two decades, some of it quite complex in nature. The main focus of this literature is to identify meta-moral decision-rules that tell us what we ought to do when we don't know what we ought to do. Many of these decision rules are variations on decision rules that have long been discussed by decision theorists. For example, Lockhart argued for something akin to the 'maximise expected benefit' rule in cases of moral uncertainty. Others argue that we should maximise the expected choiceworthiness of our decisions.
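
To make the idea a little more concrete, here is a minimal sketch (in Python, with made-up theories, credences and scores of my own) of how a 'maximise expected choiceworthiness' rule might be operationalised. Nothing hangs on the particular numbers; the point is just the structure of the calculation.

```python
# A minimal, illustrative sketch of a 'maximise expected choiceworthiness'
# decision rule. The theories, credences and scores are invented placeholders,
# not values drawn from the moral uncertainty literature.

def expected_choiceworthiness(option, credences, scores):
    """Sum over theories of credence(theory) * choiceworthiness(option under theory)."""
    return sum(credences[t] * scores[t][option] for t in credences)

# Credences: how likely you think each (hypothetical) moral theory is to be correct.
credences = {"theory_1": 0.6, "theory_2": 0.4}

# Choiceworthiness of each option under each theory (higher is better).
scores = {
    "theory_1": {"option_a": 10, "option_b": 0},
    "theory_2": {"option_a": -50, "option_b": 0},
}

options = ["option_a", "option_b"]
best = max(options, key=lambda o: expected_choiceworthiness(o, credences, scores))
print(best)  # 'option_b': option_a scores 0.6*10 + 0.4*(-50) = -14, while option_b scores 0
```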

The minutiae of these decision rules do not concern us here. One of the chief motivating examples for proponents of moral uncertainty is the idea that some critical moral choices involve substantial risk asymmetries. In other words, there are cases in which we are unsure what the right thing to do is, but one option carries most of the moral risk. Given this asymmetrical distribution of risks, the claim is that we should avoid that option.

This might not make much sense in the abstract so consider a concrete example: the decision to eat meat. Let's assume, for a moment, that we are uncertain about the moral status of animals and the morality of farming and slaughtering such animals. We know that there are reasons to think that animals suffer and that modern farming methods are not good for their well-being and flourishing. We also know that many people think that factory farming is an ongoing moral catastrophe -- something that our grandchildren will look back on with a mixture of shame and incredulity. How could we be so cruel? But we are still not entirely convinced. We think there are personal and social benefits to eating meat that are not easily substituted (maybe we have read Vaclav Smil's book on the matter) and we are not sure that animals suffer in the same way as humans or that their status is such that they are wronged by farming and slaughtering them. So the upshot is that there is some moral uncertainty attached to the decision to eat meat. It might be permissible or it might be contributing to a great moral catastrophe.

Here is where the risk asymmetry argument comes into play. When you think about it, in the case of meat-eating, most of the moral risk is on one side of the ledger. By eating meat you might be contributing to a moral catastrophe, perpetuating the suffering of millions of innocent creatures. Sure, you gain some nutritional benefit from doing so, and the experience can be quite pleasurable, but you can survive and thrive on a plant-based diet, and the benefits are not so great that they outweigh the risks of a moral catastrophe. Thinking about it in this way makes it clear what you ought to do, despite your initial uncertainty.

Abstracting away from the specific details of the meat-eating dilemma, we can extract a basic argument template that can be deployed in all cases involving significant risk asymmetries:


  • (1) You are faced with a choice between Option A and Option B but are unsure which of those two options is morally acceptable (permissible, obligatory etc).
  • (2) In order to know which of the two options is acceptable, you would need to know moral fact X (which you don't know).
  • (3) If X were true, then Option A would clearly be morally unacceptable and Option B would be acceptable.
  • (4) If X were false, then both A and B would be acceptable but the relative benefits of A would be minimal.
  • (5) You ought to try to minimise expected moral costs (at least where those costs are obvious).
  • (6) Therefore, despite your uncertainty with respect to X, you ought to choose Option B over Option A.

Simpler formulations of the argument are possible, but this one is useful when we turn to consider the moral status of non-humans and the application of ethical behaviourism. Obviously what's interesting about the risk asymmetry argument is that it attempts to capture the relative costs of making false positive and false negative moral errors. What do I mean by this? I mean that it captures the risk of believing fact X to be true when it is not (the false positive risk) versus the risk of believing it to be false when it is not (the false negative risk). Applied to the meat-eating example, proponents of the argument are claiming that the false positive risk (believing that animal suffering counts for a lot when it doesn't) is a lot lower than the false negative risk (believing that it doesn't count when it does).
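
To see how the template cashes out numerically, here is a small illustrative sketch. The credence in X and the cost and benefit figures are assumptions of mine, chosen only to show how a modest probability of X, combined with a large asymmetry in costs, is enough to favour Option B.

```python
# Illustrative expected-moral-cost comparison for the risk asymmetry template.
# The credence in X and the cost/benefit magnitudes are invented placeholders.

p_x = 0.2                   # credence that moral fact X is true
cost_a_if_x_true = 100.0    # moral cost of Option A if X is true (premise 3)
benefit_a_if_x_false = 2.0  # modest benefit of A over B if X is false (premise 4)

# Expected moral cost of each option, treating the forgone benefit of A as
# the only cost of choosing B.
expected_cost_a = p_x * cost_a_if_x_true
expected_cost_b = (1 - p_x) * benefit_a_if_x_false

print(expected_cost_a)  # 20.0
print(expected_cost_b)  # 1.6
# Even with a fairly low credence in X, the asymmetry in costs means that
# choosing B minimises expected moral cost, as premise (5) requires.
```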

There are a number of criticisms of risk asymmetry arguments. Dan Moller argues, for example, that we should ignore very small risks of being wrong. It is only when there is some substantive risk that we should start to factor it in. For instance, there is a small risk that the pebbles in my driveway are conscious and that I cause them great pain every time I drive out. After all, panpsychism might be true. But the risk seems so small as to not be worth taking seriously. The problem then, of course, is that we get into a debate about which risks are really small and which are sufficiently large to be worth factoring into a risk asymmetry argument. This could lead to some intractable conflicts. I'm well aware, for example, that some people think the chances of a robot or AI having some morally significant status are on a par with the chances that the pebbles in my driveway are conscious. I think ethical behaviourism gives us a way to assess the relative likelihood of this, but they may continue to disagree. 


Perhaps conflicts of this sort are unavoidable and risk asymmetry arguments can only work from subjective probability estimates. I won't attempt to resolve this issue here. Instead, I will assume we can say something meaningful about the probabilities in question and use this as the basis for assessing different risk asymmetry arguments.


3. Risk Asymmetry Arguments and Robot Moral Statuses

Let's bring it back to ethical behaviourism and, more specifically, the application of ethical behaviourism to our relationships with robots. For ethical behaviourism to have any practical utility, the basic insight has to be translated into some kind of standard or test for determining whether the behavioural evidence warrants belief in the existence of some kind of moral status. In previous work, I have called this standard the 'performative threshold' that must be crossed before an entity 'counts' for some moral purpose. I have never spelled out exactly what that threshold is because it is likely to vary from context to context, and it is also likely to vary as a function of your underlying theory of what matters, ontologically speaking, for moral purposes. If you think the capacity to suffer is what matters, then your performative threshold is likely to be different from that of someone who thinks that the capacity for reflective moral judgment is what matters. I have never been overly invested in these details because I have been more concerned with the general point that behaviour is a sufficient basis for believing in moral status if you adopt a mental-capacity-based theory of status.

But it is worth thinking about the performative threshold in some detail since that is where the intellectual rubber (the basic idea of ethical behaviourism) meets the road (practical application to disputed cases of moral status). As I see it, when you are dealing with a disputed case of moral status -- e.g. whether a robot can suffer -- you work through analogies: is this entity sufficiently like another entity whose moral status is undisputed, e.g. humans, who we agree can suffer? In other words, in disputed cases the most natural way to proceed is through some kind of 'performative equivalence' test. Is X sufficiently like Y with respect to the properties that we think are relevant?

But there are different levels at which this equivalence test could be set. These are similar to the 'sensitivity' levels of other scientific tests. We could have:


Robust performative equivalence: The two entities must be equivalent to each other in multiple ways, across different environments, retests and contexts.

Moderate performative equivalence: The two entities must be equivalent to each other in several ways and across some environments, retests and contexts.

Minimal performative equivalence: The two entities must be equivalent to each other in a few ways and in perhaps one or two environments and contexts, with no need for retest.

These are crude distinctions but you get the idea. What is interesting about these different tests from the present perspective is that they can be understood as responses to the different levels of moral risk involved in recognising that an entity has some moral status. If the false positive risk is very high -- i.e. if you would make a serious moral error by assuming that X had some particular moral status when it did not -- then you might favour a robust version of the test. If the false negative risk is very high -- i.e. you would make a serious moral error by assuming that X did not have some particular moral status when it did -- then you might favour a minimal version of the test.
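
The relationship between error risks and test stringency can be sketched as follows. The cost figures and cut-off ratios are placeholders of my own; the sketch only illustrates the direction of the relationship: the worse the false positive error relative to the false negative error, the more robust the test you should demand, and vice versa.

```python
# Illustrative mapping from relative error costs to a performative
# equivalence threshold. The cut-off ratios are arbitrary placeholders.

def choose_threshold(false_positive_cost, false_negative_cost):
    """Pick a test stringency based on which moral error would be worse."""
    ratio = false_positive_cost / false_negative_cost
    if ratio > 2:      # wrongly including is much worse: demand robust equivalence
        return "robust"
    if ratio < 0.5:    # wrongly excluding is much worse: minimal equivalence suffices
        return "minimal"
    return "moderate"  # the costs are roughly comparable

# If denying status to a being that can suffer is judged far worse than
# over-extending concern, a minimal threshold is favoured:
print(choose_threshold(false_positive_cost=10, false_negative_cost=80))  # 'minimal'
```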

This is where risk asymmetry arguments get their foothold in the debate about the moral status of robots. It is tempting to apply those arguments in a simple and straightforward way but, as I shall now argue, the risk asymmetries may vary a lot depending on the kind of moral status that is under dispute.

So what are the different kinds of moral status that might be under dispute? As noted previously, in my work I have looked at two: (i) basic moral status; (ii) friendship and love. Let's look at both of these in a little more detail, examining the costs and benefits of different moral errors. This will enable us to consider the different performative standards that should apply to these different statuses.


(i) Basic moral status

This is the ground zero of the debate. Basic moral status arises whenever a being is an object of moral concern. In other words, it is not just a tool or thing that you can treat however you please but it has moral interests of its own. It can be harmed and benefited by your actions and so you have to factor that into your decisions. A husk of corn, for instance, does not have basic moral status but a newborn baby does.

Calling this 'basic moral status' is, however, a bit misleading. Moral status is not a simple binary thing but, rather, a spectrum of different possibilities and grades of moral status. We might agree that a newborn lamb and a newborn baby have moral status, but we might reasonably disagree as to whether they have equal moral status. Most people (I suspect) would argue that the newborn baby has a higher moral status than a newborn lamb. It has more capacities and potential interests and is thus deserving of a higher standard of moral care. I don't wish to get too enmeshed in the different possible grades of moral status here, but it is something to keep in mind. The main dispute in the literature tends to be about when or whether an entity attains a similar moral status to a human being and is thus deserving of a similar level of moral protection. This is what is usually in dispute when people ask whether an entity belongs in our moral community or not.

Tying it back to the theme of this article, what are the risks/rewards of making an error when it comes to ascribing basic moral status to a robot? The risks of making a false negative error -- denying moral status to an entity that deserves it -- seem pretty high. Indeed, some people argue that the history of human moral progress is the history of overcoming false negative errors of exclusion. Several philosophers have written about moral progress in these terms. A standard example that they give is that the abolition of slavery can be seen as a recognition that slave populations deserve the same moral status as non-slave populations.

The risks of the false negative error are twofold. First, there is the direct harm to the entities that are denied moral status. They are denied basic moral rights and respect, and they may also be treated cruelly and inhumanely. Second, there are various indirect harms that result from the exclusion. These could arise in different ways. There are, for instance, studies suggesting that cruelty towards animals correlates with cruelty towards humans (the correlation is referred to in the literature as “The Link”). This might suggest that if we continue to exclude animals we could perpetuate an unnecessary lack of compassion and care within the present moral community. This depends, of course, on how we treat the excluded population. We can exclude without being cruel and inhumane. I'm pretty indifferent toward most ants, but I don't think I'm cruel to them. They leave me alone and I leave them alone.

These false negative risks look high and this has persuaded some people to think we should err on the side of over-inclusivity when it comes to basic moral status. But these risks need to be balanced against the false positive risks. What's the harm of caring for something that does not need to be cared for? The typical arguments here are expressed in terms of the opportunity costs associated with lost time and attention. For instance, one of the most popular critiques of the robot rights debate is that it sucks up scholarly attention. We ought to be focusing on human welfare and human well-being, and how these are negatively impacted by AI and robotics, not on whether robots deserve moral care. The whole debate is a bit like people caring about the plight of locusts while millions of humans starve and suffer. Similarly, there are those who argue that excessive moral concern for robots will prevent us from using robots in a way that benefits humanity. This is how I understand certain parts of Joanna Bryson's famous claim that robots should be slaves (though she rejects that terminology now).

Against this, however, there may be some benefits to making false positive errors. It could be, for example, that being compassionate towards robots increases our level of compassion towards others, even if the robots do not deserve this level of concern. The studies on animal cruelty may be relevant here. It could be that all those people that care about the welfare and well-being of animals are wasting their time: animals do not deserve this level of concern. But at least they are not being cruel and inhumane to animals and treating animals as a training ground for cruelty to humans. That said, the empirical literature on the psychology of the moral circle paints a somewhat mixed picture. Some research seems to support the point I just made. For example, in a literature review, Daniel Crimston and his colleagues note that:


Across multiple studies, greater moral expansiveness was associated with increased empathic concern, perspective taking, moral identity, identification with all of humanity, connection with nature, endorsement of universalism values, and increased use of harm and fairness principles as foundations for moral decision making.
(Crimston et al 2018, p 16)

 

But a fascinating recent study by Joshua Rottman and his colleagues — with the delightful title “Tree-Huggers versus Human Lovers” — suggests that we do have a limited budget of moral concern and that increased concern for one group may come at the expense of reduced concern for another. Specifically, Rottman et al found that some people care more about non-human animals and the environment than they do about marginalised human communities. One interpretation of this is that these people reduce moral concern for marginalised human populations in order to make mental space for moral concern toward animals and the environment. The study is limited. Rottman et al were more interested in finding out whether people with this moral attitude existed than in how prevalent they are in society. But it does suggest that there could be false positive risks to recognising the moral status of non-humans.

I'm not sure what all this means when it comes to the performative threshold for basic moral status. I remain sceptical of those that push the false positive risks associated with opportunity costs. It's not obvious to me that care and concern for animals or robots must come at a cost to care and concern for humans. It sounds plausible to me that there is some kind of complementarity effect when it comes to compassion: the more compassion the better. That said, I have to acknowledge that the research doesn’t always support this optimism.

Still, on balance, when it comes to basic moral status, I'm inclined to say that the risks are (slightly) more asymmetrical on the false negative side and this favours a lower performative threshold than we might otherwise be inclined to use.


(ii) Friendship and Love

I'm going to treat friendship and love as a pair, not because I think they are the same thing, but because I think they share enough features for present purposes. There is, in any event, a long tradition of treating them as closely related. For instance, there is the classic distinction in the Greek tradition between philia and eros, both of which are species of love, the former applying to friendships and the latter to intimate relationships.

Friendship and love are complex phenomena and there are many different accounts of the conditions that must be satisfied in order for someone to count as a friend or a lover. For example, Aristotle's famous analysis of the concept of friendship claims that there are three main types of friendship: utility friendship, pleasure friendship and virtue friendship. The first type arises where the friends use one another for some instrumental gain; the second arises where they derive pleasure from their interactions with one another; and the third, which is more complex, arises when the friends 'share a life' with one another and have consistent and ongoing feelings of goodwill toward one another. The third category, according to Aristotle, was the most meaningful. Most philosophical discussions of friendship begin and end with Aristotle, though his is not the only account. Love is similarly complex. In their discussion of human-robot love, Nyholm and Frank identify three different accounts of what it might take to be in a loving relationship with another, varying in terms of whether the other is a good match for your personality, the strength of your mutual commitment, and your affection for their distinctive characteristics.

A full analysis of the potential risks and rewards of human-robot love and friendship would have to contend with each of these accounts. I am not going to do that here. Instead, I just want to focus on two core aspects of friendship and love that tend to be shared across most accounts. First up is the need for mutual goodwill between friends and lovers. On most accounts of friendship and love, it is agreed that in order for two people to be true friends or lovers, they must have some degree of mutual affection and good feeling toward one another. They must like each other, feel positive about their interactions and desire good things to happen to one another. Not all the time (that would be an impossible standard) but most of the time. It is the sincerity of these feelings that is often taken to be the true mark of friendship and love. It is also this need for mutual goodwill that, in my view, opens the door for ethical behaviourism. Mutual goodwill is a mentalistic property. Many people doubt whether robots could have the mentalistic properties that sustain mutual goodwill. But if I am right, this is something that can be evidenced at the behavioural level. If a robot looks and acts like it has goodwill towards you, then you are probably justified in believing that it does. You are in the same epistemic boat when it comes to human friends and lovers anyway. They might not like you as much as you think. There is always some doubt. You have to judge them by their behaviour.

This brings me to the second aspect of friendship and love: it is high risk/high reward. Friends and lovers are among the most valuable things that people can have in life. Many accounts of the good life include friendship and intimacy as basic human goods. They are usually thought to be intrinsic goods -- worth having in their own right -- as well as instrumental goods -- things that can unlock an array of additional benefits. Indeed, there is a large body of research detailing the instrumental benefits of intimacy and friendship for physical health, psychological well-being, social inclusion and much more (references). At the same time, our friends and lovers can betray us and let us down. Broken relationships are often painful and can leave emotional scars that last a lifetime. Abusive relationships can be even worse. If you get very close to someone, you run the risk of them doing you great harm. But if you keep everyone at arm’s length, you miss out on part of what makes life worth living.

The high risk/high reward nature of friends and lovers has interesting implications for the risk asymmetry argument. It is tempting to suppose that the high risks warrant extra caution when it comes to recognising the existence of such a relationship. If false friends and lovers can hurt you, then you better err on the side of false negatives rather than on the side of false positives. This may be taken to justify a high performative threshold. But the high reward nature of such relationships cuts against this logic. If you have so much to gain, and if you would be living a less optimal life without friends and lovers, why not be more open to them?

To resolve this tension, specifically when it comes to robot friends and lovers, we need to think more carefully about how the risks and rewards play out for people that might be thinking about forming such a relationship with a robot. There might be much to gain from such relationships, but how significant this potential gain is probably depends on the opportunity cost associated with forming that relationship. Again, the typical argument from the critic of such relationships will be that if you form such a relationship with a robot, you will miss out on forming such a relationship with a human. Since, on balance, relationships with humans are assumed to be superior to relationships with robots, the argument then concludes that we should discourage human-robot relationships, even if they are, in principle, possible.

There are, however, three problems with this argument. First, human-robot relationships may not be inferior to human-human relationships. This belief is, arguably, a holdover from the assumption that such relationships are impossible and hence devoid of all value. Some people claim to have much more valuable relationships with their pets than they do with fellow humans. It is not implausible to suspect that something similar could be true for some people with their relationships with robots. Second, even if human-robot relationships are inferior, people that are inclined to such relationships may not be missing out on much. There is a body of empirical research suggesting that people that score high on anthropomorphism tend to be more socially isolated and lonely. We might infer from this that such people are less likely to form significant relationships with other humans. So, in their case, it is not a simple choice between human relationships and robot relationships. It is, rather, a choice between robot relationships and no relationships at all. Third, and finally, the opportunity cost argument may not hold true in many cases. We may not have to 'give up' human relationships in order to form relationships with robots. It could well be that robot relationships complement human relationships or can be pursued in parallel to them. The counterargument to this will be that there is some upper limit on the number of friends and lovers we can -- or should -- have (e.g. Dunbar’s number). But I’m sceptical about the relevance of such limits to this debate. In any event, those limits are probably sufficiently high that most people could accommodate a few robot relationships without any overall loss in human relationships.

What about the false positive risks of robot relationships? Well, as noted, the risks associated with human friends and lovers are usually cashed out in terms of insincerity, betrayal and being let down. You thought that someone loved only you, but it turns out they have been having multiple affairs. You thought that someone was your friend, but it turns out they have been spreading nasty rumours about you behind your back. You really needed someone to be there for you during a difficult time, but they decided to ignore you. Do these risks also apply to robot friends and lovers? A lot of people think that the insincerity risk is intrinsic to human-robot relationships. The idea is that a robot cannot be your friend because they lack the right state of mind. They are always inauthentic. But if I am right about ethical behaviourism, this is not a good objection to human-robot relationships. Whether they are authentic or not is something that is to be assessed through behaviour — as it usually is for human-human relationships — not the presence or absence of some magical and unobservable inner mental state. That leaves us with the risk of betrayal and being let down. My own view is that the risk of being betrayed by robots is significant, at least given the way in which robots are currently designed and operated. Robots are created by companies, they use proprietary, cloud-based AI, and they usually collect data on their users that is used by the company and third parties. This data collection and transfer, in particular, presents a major risk of betrayal. It is also possible that robots could be hacked and used to extract data contrary to the intentions of the original creators or even used to manipulate or harm you. Again, similar risks are present in human-human friendships (plenty of friends have ‘betrayed’ me in some sense) so the relative risks here are unclear. The risk of being let down by a robot may correlate with the risk of the robot being hacked or manipulated. That said, one hope with robots is that they would be more consistent and reliable than humans. Thus, it could well be that robots score lower on this type of false positive risk.

I am not sure how to balance all of these potential risks and rewards. I’m not sure it can be done in the abstract. A lot will depend on the particular person (their degree of social isolation; their need for friends etc) and the particular robotic system (its security and safety record; its features). For some people and some systems, the false negative risks will outweigh the false positives; for others the opposite will be true. The idea that someone like me could, from the armchair, decide the issue might be an instance of intellectual hubris.


Monday, July 19, 2021

93 - Will machines impede moral progress?


Thomas Sinclair (left), Ben Kenward (right)

Lots of people are worried about the ethics of AI. One particular area of concern is whether we should program machines to follow existing normative/moral principles when making decisions. But social moral values change over time. Should machines not be designed to allow for such changes? If machines are programmed to follow our current values will they impede moral progress? In this episode, I talk to Ben Kenward and Thomas Sinclair about this issue. Ben is a Senior Lecturer in Psychology at Oxford Brookes University in the UK. His research focuses on ecological psychology, mainly examining environmental activism such as the Extinction Rebellion movement of which he is a part. Thomas is a Fellow and Tutor in Philosophy at Wadham College, Oxford, and an Associate Professor of Philosophy at Oxford's Faculty of Philosophy. His research and teaching focus on questions in moral and political philosophy.

You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

 


Show Notes

Topics discussed include:

  • What is a moral value?
  • What is a moral machine?
  • What is moral progress?
  • Has society progressed, morally speaking, in the past?
  • How can we design moral machines?
  • What's the problem with getting machines to follow our current moral consensus?
  • Will people over-defer to machines? Will they outsource their moral reasoning to machines?
  • Why is a lack of moral progress such a problem right now?


Relevant Links


Friday, July 9, 2021

92 - The Ethics of Virtual Worlds


Are virtual worlds free from the ethical rules of ordinary life? Do they generate their own ethical codes? How do gamers and game designers address these issues? These are the questions that I explore in this episode with my guest Lucy Amelia Sparrow. Lucy is a PhD Candidate in Human-Computer Interaction at the University of Melbourne. Her research focuses on ethics and multiplayer digital games, with other interests in virtual reality and hybrid boardgames. Lucy is a tutor in game design and an academic editor, and has held a number of research and teaching positions at universities across Hong Kong and Australia.

You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).


Show Notes

Topics discussed include:

  • Are virtual worlds amoral? Do we value them for their freedom from ordinary moral rules?
  • Is there an important distinction between virtual reality and games?
  • Do games generate their own internal ethics?
  • How prevalent are unwanted digitally enacted sexual interactions?
  • How do gamers respond to such interactions? Do they take them seriously?
  • How can game designers address this problem?
  • Do gamers tolerate immoral actions more than the norm?
  • Can there be a productive form of distrust in video game design?

Relevant Links