
Tuesday, March 30, 2021

Technology and the Value of Trust: Can we trust technology? Should we?



Can we trust technology? Should we try to make technology, particularly AI, more trustworthy? These are questions that have perplexed philosophers and policy-makers in recent years. The EU’s High-Level Expert Group on AI has, for example, recommended that the primary goal of AI policy and law in the EU should be to make the technology more trustworthy. But some philosophers have critiqued this goal as being borderline incoherent. You cannot trust AI, they say, because AI is just a thing. Trust can exist between people and other people, not between people and things.

This is an old debate. Trust is a central value in human society. The fact that I can trust my partner not to betray me is one of the things that makes our relationship workable and meaningful. The fact that I can trust my neighbours not to kill me is one of the things that allows me to sleep at night. Indeed, so implicit is this trust that I rarely think about it. It is one of the background conditions that makes other things in my life possible. Still, it is true that when I think about trust, and when I think about what it is that makes trust valuable, I usually think about trust in my relationships with other people, not my relationships with things.

But would it be so terrible to talk about trust in technology? Should we use some other term instead such as ‘reliable’ or ‘confidence-inspiring’? Or should we, as some blockchain enthusiasts have argued, use technology to create a ‘post-trust’ system of social governance?

I want to offer some quick thoughts on these questions in this article. I will do so in four stages. First, I will briefly review some of the philosophical debates about trust in people and trust in things. Second, I will consider the value of trust, distinguishing between its intrinsic and instrumental components. Third, I will suggest that it is meaningful to talk about trust in technology, but that the kind of trust we have in technology has a different value to the kind of trust we have in other people. Finally, I will argue that most talk about building ‘trustworthy’ technology is misleading: the goal of most of these policies is to obviate or override the need for trust.


1. Can you trust a thing?

Philosophers like to draw a distinction between trust and mere reliance. The distinction is usually parsed like this: trust is something that exists between people; mere reliance can exist between people and things. One person trusts another when they expect the other will act with goodwill towards them and live up to their obligations. Mere reliance involves the expectation that someone or something will follow a predictable pattern of behaviour.

I believe this distinction was first articulated by Annette Baier in her 1986 article ‘Trust and Antitrust’. More recently, Katherine Hawley has made it the centrepiece of her theory of trust. In her article ‘Trust, Distrust and Commitment’ she opens with a section entitled ‘Trust is not Mere Reliance’. Why not? Hawley accepts that this distinction is not one that is respected in ordinary language. Much to the annoyance of philosophers, people do talk about trusting their cars, their appliances and even the ground on which they walk. But Hawley thinks people are wrong to do so. They should make the distinction because trust is a normatively richer concept than mere reliance. 

Here is the core of her argument:


The distinction is important because trust, not mere reliance, is a significant category for normative assessment. Trust, unlike mere reliance, is connected to betrayal. Moreover trustworthiness is clearly distinguished from mere reliability. Trustworthiness is admirable, something to be aspired to and inculcated in our children: it is a virtue in the everyday sense, and perhaps in the richer sense of virtue ethics too. Mere reliability, however, is not. A reliable person is simply predictable: someone who can be relied upon to lose keys, or succumb to shallow rhetoric, is predictable in these respects, but isn't therefore admirable. Even reliability in more welcome respects need not amount to trustworthiness: when you reliably bring too much lunch, you do not demonstrate trustworthiness, and nor would you demonstrate untrustworthiness if you stopped. 
(Hawley 2012, 2)

 

This is a strange argument. There seem to be two main parts to it. The first is the claim that trust is linked to betrayal while mere reliability is not. I guess that’s true, but that is probably just an artefact of the conceptual vocabulary we use. Betrayal is the flipside or negative of trust: it’s what happens when trust goes bad. There is, presumably, a negative side to reliability too. Unpredictability? Randomness? The second claim is that trustworthiness is admirable and normatively assessable in a way that mere reliability is not. But is that really true? It seems to me that many people think that being 'reliable' is an admirable quality. I often overhear people describing work colleagues as reliable, with the implication that they exhibit some virtue. It is true that people can be reliably bad, but that doesn’t say much. After all, people can misplace trust in others or their trust can be betrayed. In other words, just as reliability has its ups and downs so too does trust. I can’t help but wonder if the modifier ‘mere’ is doing a lot of the work in this conceptual distinction. If we said ‘mere trust’ instead of ‘trust’, would we have a similarly dismissive attitude?

In any event, neither of these points is particularly pertinent to the issue at hand. Even if there is this important conceptual distinction between trust and mere reliance, it does not follow that you cannot trust a thing. To make that argument, you would have to suggest that there is some condition of trust that is linked to a property that people have but machines or things lack. What might that be?

The typical answer appeals to mental properties. The idea is that trust depends on having a mind. Since things cannot have minds, they cannot be proper objects of trust. Mark Ryan develops this critique in his article ‘In AI We Trust: Ethics, Artificial Intelligence and Reliability’. In the article, Ryan identifies a number of conditions that must be satisfied in order for trust to exist between two entities or parties. They include things like believing that the other party is competent to perform some action or function, having confidence that they will perform those functions, and being vulnerable to them if they do not. Ryan accepts that machines, specifically AIs, can satisfy these three conditions and so a form of ‘rational’ trust in machines (which might be equivalent to what others call ‘reliance’ or ‘confidence’) is possible. But machines cannot satisfy two other critical conditions for the normatively richer form of trust: (i) they cannot be motivated to act towards us out of a sense of goodwill or out of a desire to live up to their moral obligations toward us; and (ii) they cannot betray us.

Without getting too into the details, I think there are problems with the second part of this argument. Ryan’s claim that machines cannot betray appears to be circular. In essence, his position boils down to the claim that you cannot betray someone unless you are a proper object of trust but you cannot be a proper object of trust unless you have the capacity for betrayal. But that just begs the question: how do you become a proper object of trust or develop the capacity for betrayal? 

That leaves the other part of the argument: the claim that machines cannot have the right kinds of motivation or desire for action. What does Ryan say about this? A lot, but here is one critical quote from his paper:


While we may be able to build AI to receive environmental input and stimuli, to detect appropriate responses, and program it to select an appropriate outcome, this does not mean that it is moved by the trust placed in it. While we may be able to program AI to replicate emotional reactions, it is simply a pre-defined and programmed response without possessing the capacity to feel anything towards the trustor. Artificial agents do not have emotions or psychological attitudes for their motives, but instead act on the criteria inputted within their design or the rules outlined during their development [reference omitted] 
(Ryan 2020, p 13)

 

In other words, we might create machines that look and act like they care about us, or look and act like they are motivated by reasons similar to our own, but this is all just an illusion. They don’t feel anything or care about us. They are just programmed artifacts, not conscious, caring humans. They have no minds, no intentions, no inner life.

If you have read any of my previous work on ‘ethical behaviourism’ (e.g. here, here, here and here), you will know that I do not like this kind of argument. To me, it smacks of an unwarranted form of human exceptionalism and mysterianism: humans have this special property that cannot be replicated by machines, but how that property is instantiated in humans is both mysterious and never fully specified. My own view is that while there are important differences between humans and machines (particularly as they are currently designed and operated) there is no ‘in principle’ reason why machines cannot be motivated to act toward us with goodwill and moral rectitude. After all, the only reason we have to believe that other humans are so motivated toward us is because of how they look and act. Looking and acting, broadly defined, are the epistemic hinge on which perceptions of mindedness turn. We can rely on the same evidence when it comes to machines. If they look and act the right way, we can trust them. Similarly, the notion that machines are somehow different from us because they act on the basis of ‘criteria inputted within their design or rules outlined during their development’ also strikes me as misleading and false. Humans have also been manufactured through a process of evolution by natural selection and personal biological development. We are constrained by both processes and we act on the basis of decision rules and heuristics acquired during these developmental processes. We may be sophisticated and complicated biological machines, but there is nothing magical about us.

If I’m right, then even on Ryan’s account of trust it is, in principle, possible for us to trust machines. But this assumes that Ryan (and Hawley and Baier) are right in supposing that trust depends on mental properties like goodwill and a desire to do the right thing. What if that is the wrong way to think about trust?

One of the most interesting recent papers on this topic comes from C. Thi Nguyen. It is called ‘Trust as an Unquestioning Attitude’. In it, Nguyen argues that we can have a normatively rich form of trust in objects and things. Indeed, hearkening back to the point made by Hawley, he suggests that references to this non-interpersonal or non-agential form of trust are common in everyday language. He cites several examples of this, including climbers who talk about ‘trusting’ their climbing ropes, and people who have lived through earthquakes talking about feeling ‘betrayed’ by the ground beneath their feet.

What is it that unites these non-agential forms of trust? Nguyen argues that this form of trust arises when we have an unquestioning attitude toward something. In other words, when we take it for granted that it will act in a certain way and we depend on it to do so. In this respect, we all trust the ground beneath our feet. We don’t wake up in the morning and assume that it will suddenly tear apart and swallow us up. We rely upon this assumption to live our lives. It is only in the extreme case of an earthquake that we realise how much trust we place in the ground. Other examples of this form of trust abound in our everyday discourse.

But what about all those philosophers who insist that trust can only exist between people? Nguyen says something about this:


I have found that philosophers who work on trust and testimony think that this use of “trust” is bizarre and unintuitive — especially locutions like “trusting the ground” and feeling “betrayed by the ground”. But it seems to me that, in fact, these expressions are entirely natural and comprehensible, and it is only excess immersion in modern, narrowed philosophical theories of trust that renders these locutions odd to the ear. 
(Nguyen, MS p 10)

 

There is a general lesson for philosophers here. For instance, I have encountered a similar phenomenon when writing about gratitude. I once tried to publish a paper on whether atheists could be grateful for being alive. It was repeatedly rejected from journals by reviewers who insisted that gratitude is necessarily interpersonal. According to them, it makes no sense to be grateful for things or for some natural state of affairs. You can only be grateful toward other people. This always struck me as bizarre and counterintuitive but, according to these reviewers, I was the outlier. (If you are interested, you can find the unpublished paper here. Before you say anything, I’m sure there are other reasons why it should have been rejected for publication)

Assume Nguyen is right. What is normatively significant about his version of trust? Nguyen sees trust, understood as an unquestioning attitude, as something that is integral to our sense of agency. We are cognitively limited beings. We cannot be constantly suspicious and questioning of everything. By accepting that things (cars, climbing ropes, mobile phones) will work in a certain way, or that people (lovers, friends, fellow citizens) will live up to their obligations, we give ourselves the freedom to live more enriched and open lives.

This doesn’t mean that we can never be suspicious of them. This trust can be misplaced and its wobbly foundations can be revealed in certain circumstances (like in the midst of an earthquake). When this happens we may critically interrogate our previous unquestioning attitude. We may search for data to confirm whether we are right to trust this thing or not. Depending on the outcome of this inquiry, we may find our trust restored or we may find that we can no longer take the thing for granted. Either way, trust as an unquestioning attitude is a normatively essential part of what it means to be human. Given our cognitive limitations, we couldn’t get by without it.

I like Nguyen’s theory of trust. I think it captures something important about our relationship to the world around us. We don’t just rely on our friends, or on the ground beneath our feet, or on the smartphones in our pockets. We trust them to act or to persist in a certain way so that we can get on with the business of living.


2. Technology and The Value(s) of Trust

If Nguyen’s right, then it does make sense to talk about trust in technology. But this raises a deeper question. Everyone talks about the value of trust, but what form does this value take? Is trust valuable in and of itself? In other words, is it a good thing to have trusting relationships in our lives, irrespective of their consequences? Or is trust valuable purely for consequential reasons?

There is a common philosophical distinction that is relevant here: the distinction between intrinsic and instrumental value. It is possible to argue that trust has both kinds of value:


The Intrinsic Value of Trust: Trust is valuable in and of itself (irrespective of its consequences) because it expresses an attitude of respect or tolerance toward the object of trust. For example, if you trust another human being you are signalling to them that you recognise and respect their moral status and moral autonomy.

 

The Instrumental Value of Trust: Trust is valuable because it is practically essential to human life. It allows us to cooperate and coordinate with others, which allows us to innovate and develop and explore more opportunities. A life without trust would be impoverished because it would lack access to other valuable things.

 

From my reading of the literature, the instrumental value of trust tends to be emphasised more than the intrinsic value of trust. There is a good reason for this. Everyone who writes about trust notes that trust is a double-edged sword. Whenever you trust a person or a thing you cede some control and power to them. When I trust my partner to look after our daughter, I give up my own attempts to manage and control all aspects of childcare. When I trust my calendar app to keep a record of my appointments and meetings, I give up my own attempts to keep a mental record of them. The irony is that ceding power and control in this manner can actually be empowering. By not having to worry about childcare or scheduling (at least temporarily) you unlock other opportunities and overcome some of your own cognitive and temporal limitations. This is Nguyen’s argument. But ceding power and control can be risky. The trust can be betrayed. My partner might not look after our daughter properly; my calendar app might fail to update or record a meeting. When this happens I may lose, rather than gain, something that I value.

It is because the consequences of misplaced trust can be so terrible that people tend to emphasise the instrumental value of trust. Even if trust has some intrinsic value this can be swamped by its negative consequences. Imagine if my partner, through neglect, causes our daughter to become seriously ill. By trusting her to look after our daughter I will have expressed my respect for her moral status and autonomy, but that will be of little consolation if our daughter is seriously ill. The intrinsic value of trust is present and cannot be denied, but it has been overridden by the negative instrumental value of trust. It is the instrumental value that matters most.

How is this relevant to the debate about trust in technology? Well, if we accept that we can trust technology (and that it is meaningful to talk about such trust), then we can also accept that this form of trust can have significant instrumental value. It can help us to access other values that would be impossible (or very difficult to obtain) without that trust. But the intrinsic value of trust does seem to be absent when it comes to our relationships with technology. If we accept that most technologies as they currently exist lack an independent moral autonomy and moral status, then we cannot express respect or tolerance for technology by trusting it. This means that the value of trust in technology hinges entirely on the consequences of this trust: if the consequences are good, then it has instrumental value; if the consequences are bad, it does not.

There are three counterarguments to this claim that trust in technology lacks intrinsic value. The first is to claim that even if technology currently lacks independent moral autonomy and status, it may someday acquire this. The typical way to run this counterargument is to suggest that sophisticated machines might acquire the mental properties that we typically associate with moral autonomy and status and, once they do, we will be able to express respect and tolerance toward them by trusting them. Given my earlier critique of Mark Ryan’s views on trust in technology, and my defence of ethical behaviourism, I am quite sympathetic to this argument. I’m just not sure that any present technology rises to the requisite level of sophistication.

The second counterargument is to claim that entities do not need to possess mental properties in order to have a moral status that is worthy of respect. Environmental ethicists, for example, might argue that aspects of the natural world have an independent moral status that is not derived from human enjoyment of or dependence on the natural world. It is, consequently, not absurd to suggest that we can express respect or tolerance toward aspects of the natural world. If that is right, then it may be less of a stretch to say that trust in technology in its current form has some intrinsic value (remembering, at all times, that this intrinsic value can be swamped by the negative consequences of misplaced trust).

The third counterargument is to claim that technology is a product of human moral agency and autonomy and hence it can have a kind of derived moral status. In other words, it makes sense to express respect for the technology because in doing so you are expressing respect for its human creator. There may be some plausibility to this argument in certain contexts. For example, I trust the chef at my favourite restaurant not to poison me. As a result, I don’t test the chemical composition of his food every time it comes out to my table. I just eat it. By trusting that the food will be fine I am, in a sense, expressing my respect for him. But whether this reasoning holds up in the case of technology is much less clear. Most technologies are created by teams of humans. You are not singling any one of them out for respect and, arguably, it is just as mistaken to respect an entire group of humans as it is to respect a thing. But even if you can, the value of trusting their product is still only a derived value, and it is quite a nebulous and partial one at that.

In conclusion, trust in technology can have instrumental value (or disvalue as the case may be), but it probably lacks the intrinsic value that arises from trust between human beings. That said, the intrinsic value of trust is quite limited and can easily be swamped by the negative consequences of misplaced trust. So to say that trust in technology lacks intrinsic value is not to say all that much.


3. Concluding Thoughts

None of this is to suggest that we ought to trust technology. It is simply to say that it is meaningful to talk about trusting technology and this type of trust can have significant instrumental value in our lives. Whether it does, in fact, have such value depends on the properties and dynamics of the technology. What does it actually do in our lives? Does it empower us? Or does it act against our interests? Does it do more of the former than the latter?

These are the very same questions we should ask about our relationships with other human beings. We shouldn’t trust all humans. That would be a mistake. Whether we should trust them, or not, depends on who they are and what they do to us. If we take an unquestioning attitude toward them, does this unlock other opportunities and goods for us? Or does it leave us exposed to exploitation and abuse?

It is undoubtedly true that many of us trust technology in our daily lives and are rewarded for doing so. Right now, as I write these words, I’m trusting my computer and my word processing software to safely record and save them for later retrieval. I don’t doubt that the files will be there tomorrow morning when I wish to work on them again. Similarly, I trust my car not to break down when I drive to collect my daughter this afternoon. I don’t meticulously check the undercarriage or the engine every time I hop into the driver’s seat.

The problem is that this trust is sometimes betrayed. Modern technologies can let us down. Digital technology is vulnerable to security hacks and data leaks. Mass surveillance can compromise our privacy. Apps can work more for their creators’ interests than for those of their human users. To use a trite example, it is in Facebook’s interests to keep you hooked on their newsfeed and clicking on their ads. Whether this is in your interest is much more doubtful. In many cases, assuming that the technology has a benign effect on your life can be mistaken. This is the dark side of trust.

What can we do about this? Efforts to create trustworthy technology can help, but many of these efforts must be understood for what they really are. Sometimes they are not about encouraging or facilitating trust in technology. They are, instead, about making it possible for us to critically scrutinise the technology. In other words, to make it possible for us to take a questioning attitude toward it when we feel unsure about its bona fides. This is why there is such a significant emphasis on transparency, accountability and audit trails when it comes to creating trustworthy technology.

These are laudable goals, and once the mechanisms of accountability are put in place people may well slip back into an unquestioning attitude toward technology. Trust could then be restored. But the policy itself is motivated by the belief that the technology in its current form is not trustworthy.


Comments:

  1. I think you can make a good case that if someone *really* believed computers were the sort of moral agents who could be blamed for things like breaches of trust, then they could trust machines.

    But it seems plausible that, given the current limitations of AI, or because you believe that computers don't have qualia (which is necessary for being the right kind of moral agent), trusting machines requires -- not quite an error -- but at least very weird intuitions (though a moral realist would presumably group this under objective moral facts, not our intuitions).

    Also, it seems a bit beside the point which of these concepts gets the name 'trust', but your point suggesting that there is no compelling principled distinction to be drawn is interesting.

    Reply: Hmm, that wasn't very clear. I guess I'm not convinced that your opponents here would deny that some future AGI could be trusted; they merely claim that, as they are now, AIs lack some capacity necessary for trust.

  2. But I'd add that the obvious argument to make is that we have special first-person knowledge that human-like physiology gives rise to experiences, so we should have more confidence that those with a similar biological structure have experiences than that machines which pass the Turing test do, and that seems like a good reason (if merely a difference in degree of belief) to regard something as a different kind of moral agent.
