Saturday, September 29, 2018

Artificial Intelligence and the Constitutions of the Future

[Note: This is the text of a talk I delivered to the AI and Legal Disruption (AI-LeD) workshop at the University of Copenhagen on 28th September 2018. As I said at the time, it is intended to be a ‘thinkpiece’ as opposed to a well-developed argument or manifesto. It’s an idea that I’ve been toying with for a while but have not put down on paper before. This is my first attempt. It makes a certain amount of sense to me and I find it useful, but I’m intrigued to see whether others do as well. I got some great feedback on the ideas in the paper at the AI-LeD workshop. I have not incorporated that feedback into this text, but will do so in future iterations of the framework I set out below.]

What effect will artificial intelligence have on our moral and legal order? There are a number of different ‘levels’ at which you can think about this question.

(1) Granular Level: You can focus on specific problems that arise from specific uses of AI: responsibility gaps with self-driving cars; opacity in credit scoring systems; bias in sentencing algorithms and so on. The use of AI in each of these domains has a potentially ‘disruptive’ effect and it is important to think about the challenges and opportunities that arise. We may need to adopt new legal norms to address these particular problems.
(2) Existential Level: You can focus on grand, futuristic challenges posed by the advent of smarter-than-human AI. Will we all be turned into paperclips? Will we fuse with machines and realise the singularitarian dreams of Ray Kurzweil? These are significant questions and they encourage us to reflect on our deep future and place within the cosmos. Regulatory systems may be needed to manage the risks that arise at this existential level (though they may also be futile).
(3) Constitutional Level: You can focus on how advances in AI might change our foundational legal-normative order. Constitutions enshrine our basic rights and values, and develop political structures that protect and manage these foundational values. AI could lead to a re-prioritising or re-structuring of our attitude to basic rights and values and this could require a new constitutional order for the future. What might that look like?

Lots of work has been done at the granular and existential levels. In this paper, I want to make the case for more work to be done at the constitutional level. I think it is the most important of the three, and the one that has been most neglected to date. I’ll make this case in three main phases. First, I’ll explain in more detail what I mean by the ‘constitutional level’ and what I mean by ‘artificial intelligence’. Second, I’ll explain why I think AI could have disruptive effects at the constitutional level. Third, I’ll map out my own vision of our constitutional future. I’ll identify three ‘ideal type’ constitutions, each associated with a different kind of intelligence, and argue that the constitutions of the future will emerge from our exploration of the possibility space established by these ideal types. I’ll conclude by considering where I think things should go from here.

1. What is the constitutional level of analysis?
Constitutions do several different things and they take different forms. Some might argue that this variability in constitutional form and effect makes it impossible to talk about the ‘constitutional level’ of analysis in a unitary way. I disagree. I think that there is a ‘core’ or ‘essence’ to the idea of a constitution that makes it useful to do so.

Constitutions do two main things. First, they enshrine and protect fundamental values. What values does a particular country, state, or legal order hold dear? In liberal democratic orders, these values usually relate to individual rights and democratic governance (e.g. right to life, right to property, freedom of speech and association, freedom from unwarranted search and seizure, right to a fair trial, right to vote etc.). In other orders, different values can be enshrined. For example, the Irish constitution, when originally passed, had a distinctively ‘Catholic’ flavour to its fundamental values (though this is less and less true today), recognising the ‘special place’ of the Catholic Church in the original text, banning divorce and (later) abortion, outlawing blasphemy, and placing special emphasis on the ‘Family’ and its role in society. It still had many liberal democratic rights, of course, which also illustrates how constitutions can blend together different value systems.

Second, constitutions establish institutions of governance. They set out the general form and overall function of the state. Who will rule? Who will pass the laws? Who will protect the rule of law? Who has the right to create and enforce new policies? And so on. These institutions of governance will typically be required to protect the fundamental values that are enshrined in the constitution, but they will also have the capacity for dynamic adaptation — to ensure that the constitutional order can grow and respond to new societal challenges. In this regard, one of the crucial things that constitutions do, as Adrian Vermeule has argued in his book The Constitution of Risk, is to help manage ‘political risk’, i.e. the risk of bad governance. If well designed, a constitution should minimise the chances of a particular government or ruler destroying the value structure of the constitutional system. That, of course, is easier said than done. Ultimately, it is power and the way in which it is exercised that determines this. Constitutions enable power as well as limit it, and can, for that reason, be abused by charismatic leaders.

The constitutional level of analysis, then, is the level of analysis that concerns itself with: (i) the foundational values of a particular legal order and (ii) the institutions of governance within that order. It is distinct from the granular level of analysis because it deals with general, meta-level concerns about social order and institutions of governance, whereas the granular level deals with particular domains of activity and what happens within them. HLA Hart’s distinction between primary and secondary legal rules might be a useful guide here, for those who know it. It is also distinct from the existential level of analysis (at least as I understand it) because that level deals, almost exclusively, with extinction-style threats to humanity as a whole. That said, there is more affinity between the constitutional level of analysis and some of the issues raised in the ‘existential risk’ literature around AI than this distinction suggests. So what I am arguing in this paper could be taken as a plea to reframe or recategorise parts of that discussion.

It is my contention that AI could have significant and under-appreciated effects at the constitutional level. To make the case for this, it would help if I gave a clearer sense of what I mean by ‘artificial intelligence’. I don’t have anything remarkable to say about this. I follow Russell and Norvig in defining AI in terms of goal-directed, problem-solving behaviour. In other words, an AI is any program or system that acts so as to achieve some goal state. The actions taken will usually involve some flexibility and, dare I say it, ‘creativity’, insofar as there often isn’t a single best pathway to the goal in all contexts. The system would also, ideally, be able to learn and adapt in order to count as an AI (though I don’t necessarily insist on this as I favour a broad definition of AI). AI, so defined, can come in specialised, narrow forms, i.e. it may only be able to solve one particular set of problems in a constrained set of environments. These are the forms that most contemporary AI systems take. The hope of many designers is that these systems will eventually take more generalised forms and be able to solve problems across a number of domains. There are some impressive developments on this front, particularly from companies like DeepMind that have developed an AI that learns how to solve problems in different contexts without any help from its human programmers. But, still, the developments are at an early stage.

It is generally agreed that we are now living through some kind of revolution in AI, with rapid progress occurring on multiple fronts, particularly in image recognition, natural language processing, predictive analytics, and robotics. Most of these developments are made possible through a combination of big data and machine learning. Some people are sceptical as to whether the current progress is sustainable. AI has gone through at least two major ‘winters’ in the past when there seemed to be little improvement in the technology. Could we be on the cusp of another winter? I have no particular view on this. The only things that matter from my perspective are that (a) the developments that have taken place in the past decade or so will continue to filter out and find new use cases and (b) there are likely to be future advances in this technology, even if they occur in fits and starts.

2. The Relationship Between AI and Constitutional Order
So what kinds of effects could AI have at the constitutional level? Obviously enough, it could affect either the institutions of governance that we use to allocate and exercise power, or it could affect the foundational values that we seek to enshrine and protect. Both are critical and important, but I’m going to focus primarily on the second.

The reason for this is that there has already been a considerable amount of discussion about the first type of effect, even though it is not always expressed in these terms. The burgeoning literature on algorithmic governance — to which I have been a minor contributor — is testament to this. Much of that literature is concerned with particular applications of predictive analytics and data-mining in bureaucratic and institutional governance; for example, in the allocation of welfare payments or in sentencing and release decisions in the criminal justice system. As such it can seem to be concerned with the granular level. But there have been some contributions to the literature that concern themselves more generally with the nature of algorithmic power and how different algorithmic governance tools can be knitted together to create an overarching governance structure for society (what I have called an ‘algocracy’, following the work of the sociologist A. Aneesh). There is also growing appreciation for the fact that the combination of these tools can subvert (or reinforce) our ideologically preferred mode of governance. This conversation is perhaps most advanced among blockchain enthusiasts, several of whom dream of creating ‘distributed autonomous organisations’ that function as ‘AI Leviathans’ for enforcing a preferred (usually libertarian) system of governance.

These discussions of algorithmic governance typically assume that our foundational values remain fixed and non-negotiable. AI governance tools are perceived either as threats to these values or ways in which to protect them. What I think is ignored, or at least not fully appreciated, is the way in which AI could alter our foundational values. So that’s where I want to focus my analysis for the remainder of this paper. I accept that there may be different ways of going about this analytical task, but I’m going to adopt a particular approach that I think is both useful and illuminating. I don’t expect it to be the last word on the topic; but I do think it is a starting point.

My approach works from two observations. The first is that values change. This might strike some of you as terribly banal, but it is important. The values that someone like me (an educated, relatively prosperous male living in a liberal democratic state) holds dear are historically contingent. They have been handed down to me through centuries of philosophical thought, political change, and economic development. I might think they are the best values to have; and I might think that I can defend this view through rational argument; but I still have to accept that they are not the only possible values that a person could have. A cursory look at other cultures and at human history makes this obvious. Indeed, even within the liberal democratic states in which I feel most comfortable there are important differences in how societies prioritise and emphasise values. It’s a cliché, but it does seem fair to say that the US values economic freedom and individual prosperity more than many European states, which place a greater emphasis on solidarity and equality. So there are many different possible ways of structuring our approach to foundational values, even if we agree on what they are.

Owen Flanagan’s book The Geography of Morals: Varieties of Moral Possibility sets out what I believe is the best way to think about this issue. Following the work of moral psychologists like Jonathan Haidt, Flanagan argues that there is a common, evolved ‘root’ to human value systems. This root centres on different moral dimensions like care/harm, fairness/reciprocity, loyalty, authority/respect, and purity/sanctity (this is just Haidt’s theory; Flanagan’s theory is a bit more complex as it tries to fuse Haidt’s theory with non-Western approaches). We can turn the dial up or down on these different dimensions, resulting in many possible combinations. So from this root, we can ‘grow’ many different value systems, some of which can seem radically opposed to one another, but all of which trace their origins back to a common root.

The value systems that do develop can ‘collide’ with one another, and they can grow and develop themselves. This can lead to some values falling out of favour and being replaced by others, or to values moving up and down a hierarchy. Again, to use the example of my home country of Ireland, I think we have seen over the past 20 years or so a noticeable falling out of favour of traditional Catholic values, particularly those associated with sexual morality and the family. These have been replaced by more liberal values, which were always present to some extent, but are now in the ascendancy. Sometimes these changes in values can be gradual and peaceful. Other times they can be more abrupt and violent. There can be moral revolutions, moral colonisations or moral cross-fertilisations. Acknowledging the fact that values change does not mean that we have to become crude ‘anything goes’ moral relativists; it just means that we have to acknowledge historical reality and, perhaps, accept that the moral ‘possibility space’ is wider than we initially thought.
If it helps, and if you are worried about being overly relativistic, you can distinguish between descriptive claims about the values people in fact hold and normative claims about the values they ought to hold.
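If it helps to make the ‘dial’ metaphor concrete, here is a toy sketch in Python. This is my own illustration, not Haidt’s or Flanagan’s actual model: the foundation names follow Haidt, but the numerical ‘settings’ are invented purely for the sake of the example.

```python
# Toy illustration of Haidt-style moral foundations as "dial settings".
# The foundation names follow Haidt; the numbers are invented, not data.
FOUNDATIONS = ["care", "fairness", "loyalty", "authority", "purity"]

def value_system(settings):
    """Map a list of dial settings (0.0-1.0) onto the named foundations."""
    assert len(settings) == len(FOUNDATIONS)
    return dict(zip(FOUNDATIONS, settings))

# Two hypothetical systems grown from the same root, dials set differently:
liberal = value_system([0.9, 0.9, 0.3, 0.2, 0.1])
traditional = value_system([0.6, 0.5, 0.8, 0.8, 0.9])

def dominant(values):
    """Return the foundation with the highest dial setting."""
    return max(values, key=values.get)
```

The point of the toy is just this: two systems built from the same root can look radically opposed simply because their dials are set differently.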

The second observation is that technology is one of the things that can affect how values change. Again, this is hardly a revelatory statement. It’s what one finds in Marx and many other sociologists. The material base of society can affect its superstructure of values. The relationship does not have to be unidirectional or linear. The claim is not that values have no impact on technology. Far from it. There is a complex feedback loop between the two. Nevertheless, change in technology, broadly understood, can and will affect the kinds of values we hold dear.

There are many theories that try to examine how this happens. My own favourite (which seems to be reasonably well-evidenced) is the one developed by Ian Morris in his book Foragers, Farmers, and Fossil Fuels. In that book, Morris argues that the technology of energy capture used by different societies affects their value systems. In foraging societies, the technology of energy capture is extremely basic: they rely on human muscle and brain power to extract energy from an environment that is largely beyond their control. Humans form small bands that move about from place to place. Some people within these bands (usually women) specialise in foraging (i.e. collecting nuts and fruits) and others (usually men) specialise in hunting animals. Foraging societies tend to be quite egalitarian. They have a limited and somewhat precarious capacity to extract food and other resources from their environments and so they usually share when the going is good. They are also tolerant of using some violence to solve social disputes and to compete with rival groups for territory and resources. They display some gender inequality in social roles, but they tend to be less restrictive of female sexuality than farming societies. Consequently, they can be said to value inter-group loyalty, (relative) social equality, and bravery in combat.

Farming societies are quite different. They capture significantly more energy than foraging societies by controlling their environments, by intervening in the evolutionary development of plants and animals, and by fencing off land and dividing it up into estates that can be handed down over the generations. Prior to mechanisation, farming societies relied heavily on manual labour (often slavery) to be effective. This led to considerable social stratification and wealth inequality, but less overall violence. Farming societies couldn’t survive if people used violence to settle disputes, so there was more focus on orderly dispute resolution, though the institutions of governance could be quite violent. Furthermore, there was much greater gender inequality in farming societies as women took on specific roles in the home and as the desire to transfer property through family lines placed an emphasis on female sexual purity. This affected their foundational values.

Finally, fossil fuel societies capture enormous amounts of energy through the combustion and exploitation of fossil fuels (and later nuclear and renewable energy sources). This enabled greater social complexity, urbanisation, mechanisation, electrification and digitisation. It became possible to sustain very large populations in relatively small spaces, and to facilitate more specialisation and mobility in society. As a result, fossil fuel societies tend to be more egalitarian than farming societies, particularly when it comes to political and gender equality, though less so when it comes to wealth inequality. They also tend to be very intolerant of violence, particularly within a defined group/state.

This is just a very quick sketch of Morris’s theory. I’m not elaborating the mechanisms of value change that he talks about in his book. I use it for illustrative purposes only: to show how one kind of technological change (energy capture) might affect a society’s value structure. Morris is clear in his work that the boundaries between the different kinds of society are not clearcut. Modern fossil fuel societies often carry remnants of the value structure of their farming ancestry (and the shift from farming isn’t complete in many places). Furthermore, Morris speculates that advances in information technology could have a dramatic impact on our societal values over the next 100 years or so. This is something that Yuval Noah Harari talks about in his work too, though he has the annoying habit of calling value systems ‘religions’. In Homo Deus he talks about how the new technologically influenced religions of ‘transhumanism’ and ‘dataism’ are starting to impact on our foundational values. Both of these ‘religions’ have some connection to developments in AI. We already have some tangible illustrations of the changes that may be underway. The value of privacy, despite the best efforts of activists and lawmakers, is arguably on the decline. When faced with a choice, people seem very willing to submit themselves to mass digital surveillance in order to avail of free and convenient digital services. I suspect this continues to be true despite the introduction of the new GDPR in Europe. Certainly, I have found myself willing to consent to digital surveillance in its aftermath for the convenience of digital media. It is this kind of technologically-influenced change that I am interested in here, and although I am inspired by the work of Morris and (to a lesser extent) Harari, I want to present my own model for thinking about it.

3. The Intelligence Triangle and the Constitutions of the Future
My model is built from two key ideas. The first is the notion of an ideal type constitution. Human society is complex. We frequently use simplifying labels to make sense of it all. We assign people to general identity groups (Irish, English, Catholic, Muslim, Black, White etc) even though we know that the experiences of any two individuals plucked from those identity groups are likely to differ. We also classify societies under general labels (Capitalist, Democratic, Monarchical, Socialist etc) even though we know that they have their individual quirks and variations. Max Weber argued that we need to make use of ‘ideal types’ in social theory in order to bring order to chaos. In doing so, we must be fully cognisant of the fact that the ideal types do not necessarily correspond to social reality.

Morris makes use of ideal types in his analysis of the differences between foraging, farming and fossil fuel societies. He knows that there is probably no actual historical society that corresponds to his model of a foraging society. But that’s not the point of the model. The point is to abstract from the value systems we observe in actual foraging societies and use them to construct a hypothetical, idealised model of a foraging society’s value system. It’s like a Platonic form — a smoothed out, non-material ‘idea’ of something we observe in the real world — but without the Platonic assumption that the form is more real than what we find in the world. I’ll be making use of ideal types in my analysis of how AI can affect the constitutional order.

This brings me to the second idea. The key motivation for my model is that one of the main determinants of our foundational values is the form of intelligence that is prioritised in society. Intelligence is the basic resource and capacity of human beings. It’s what makes other forms of technological change possible. For example, the technology of energy capture that features heavily in Morris’s model is itself dependent on how we make use of intelligence. There are three basic forms that intelligence can take: (i) individual, (ii) collective and (iii) artificial. For each kind of intelligence there is a corresponding ideal type constitution, i.e. a system of values that protects, encourages and reinforces that particular mode of intelligence. But since these are ideal types, not actual realities, it makes most sense to think about the kinds of value system we actually see in the world as the product of tradeoffs or compromises between these different modes of intelligence. Much of human history has involved a tradeoff between individual and collective intelligence. It’s only more recently that ‘artificial’ forms of intelligence have been added to the mix. What was once a tug-of-war between the individual and the collective has now become a three-way battle* between the individual, the collective and the artificial. That’s why I think AI has the potential to be so disruptive of our foundational values: it adds something genuinely new to the mix of intelligences that determines our foundational values.

That’s my model in a nutshell. I appreciate that it requires greater elaboration and defence. Let me start by translating it into a picture. They say a picture is worth a thousand words, so hopefully this will help people understand how I think about this issue. Below, I’ve drawn a triangle. Each vertex of the triangle is occupied by one of the ideal types of society that I mentioned: the society that prioritises individual intelligence, the society that prioritises collective intelligence, and the one that prioritises artificial intelligence. Actual societies can be defined by their location within this triangle. For example, a society located midway along the line joining the individual intelligence society to the collective intelligence society would balance the norms and values of both. A society located at the midpoint of the triangle as a whole would balance the norms and values of all three. And so on.**
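For those who like their diagrams in code, the triangle can be read as a barycentric coordinate system: a society is a set of weights (summing to one) over the three vertices, and its value profile is the weighted blend of the vertex profiles. The sketch below is my own construction, not something from the talk, and the three ‘values’ attached to each vertex are illustrative placeholders rather than a worked-out theory.

```python
# Toy sketch of the "intelligence triangle" as barycentric coordinates.
# Vertex value profiles are illustrative placeholders, not a real theory.
VERTICES = {
    "individual": {"autonomy": 1.0, "solidarity": 0.0, "leisure": 0.0},
    "collective": {"autonomy": 0.0, "solidarity": 1.0, "leisure": 0.0},
    "artificial": {"autonomy": 0.0, "solidarity": 0.0, "leisure": 1.0},
}

def blend(weights):
    """Blend vertex profiles by barycentric weights (which must sum to 1)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    profile = {value: 0.0 for value in next(iter(VERTICES.values()))}
    for vertex, w in weights.items():
        for value, strength in VERTICES[vertex].items():
            profile[value] += w * strength
    return profile

# A society midway along the individual-collective edge:
mixed = blend({"individual": 0.5, "collective": 0.5, "artificial": 0.0})
# A society at the centre of the triangle balances all three:
centre = blend({"individual": 1/3, "collective": 1/3, "artificial": 1/3})
```

The design choice here is deliberate: the model says nothing about which point in the triangle is best; it only defines the possibility space within which actual societies sit.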

But, of course, the value of this picture depends on what we understand by its contents. What is individual intelligence and what would a society that prioritises individual intelligence look like? These are the most important questions. Let me provide a brief sketch of each type of intelligence and its associated ideal type of society in turn. I need to apologise in advance that these sketches will be crude and incomplete. As I have said before, my goal is not to provide the last word on the topic but rather to present a way of thinking about the issue that might be useful.

Individual Intelligence: This, obviously enough, is the intelligence associated with individual human beings, i.e. their capacity to use mental models and tools to solve problems and achieve goals in the world around them. In its idealised form, individual intelligence is set off from collective and artificial intelligence. In other words, the idealised form of individual intelligence is self-reliant and self-determining. The associated ideal type of constitution will consequently place an emphasis on individual rights, responsibilities and rewards. It will ensure that individuals are protected from interference; that they can benefit from the fruits of their labour; that their capacities are developed to their full potential; and that they are responsible for their own fate. In essence, it will be a strongly libertarian constitutional order.

Collective Intelligence: This is associated with groups of human beings, and arises from their ability to coordinate and cooperate in order to solve problems and achieve goals. Examples might include a group of hunters coordinating an attack on a deer or bison, or a group of scientists working in a lab trying to develop a medicinal drug. According to the evolutionary anthropologist Joseph Henrich, this kind of group coordination and cooperation, particularly when it is packaged in easy-to-remember routines and traditions, is the ‘secret’ to humanity’s success. Despite this, the systematic empirical study of collective intelligence — why some groups are more effective at problem solving than others — is a relatively recent development, albeit an inquiry that is growing in popularity (see, for example, Geoff Mulgan’s book Big Mind). The idealised form of collective intelligence sees the individual as just a cog in a collective mind. And the associated ideal type of constitution is one that emphasises group solidarity and cohesion, collective benefit, common ownership, and possibly equality of power and wealth (though equality is, arguably, more of an individualistic value and so cohesion might be the overriding value). In essence, it will be a strongly communistic/socialistic constitutional order.

I pause here to repeat the message from earlier: I doubt that any human society has ever come close to instantiating either of these ideal types. I don’t believe that there was some primordial libertarian state of nature in which individual intelligence flourished. On the contrary, I suspect that humans have always been social creatures and that the celebration of individual intelligence came much later on in human development. Nevertheless, I also suspect that there has always been a compromise and back-and-forth between the two poles.

Artificial Intelligence: This is obviously the kind of intelligence associated with computer-programmed machines. It mixes and copies elements from individual and collective intelligence (since humans did create it), but it is also based on some of its own tricks. The important thing is that it is non-human in nature. It functions in forms and at speeds that are distinct from us. It is used initially as a tool (or set of tools) for human benefit: a way of lightening or sharing our cognitive burden. It may, however, take on a life of its own and perhaps one day pursue agendas and purposes that are not conducive to our well-being. The idealised form of AI is one that is independent from human intelligence, i.e. does not depend on human intelligence to assist in its problem-solving abilities. The associated ideal type of constitution is, consequently, one in which human intelligence is devalued; in which machines do all the work; and in which we are treated as their moral patients (beneficiaries of their successes). Think of the future of automated leisure and idleness depicted in a movie like WALL-E. Instead of focusing on individual self-reliance and group cohesion, the artificially intelligent constitution will be one that prioritises pleasure, recreation, game-playing, idleness, and machine-mediated abundance (of material resources and phenomenological experiences).

Or, at least, that is how I envision it. I admit that my sketch of this ideal type of constitution is deeply anthropocentric: it assumes that humans will still be the primary moral subjects and beneficiaries of the artificially intelligent constitutional order. You could challenge this and argue that a truly artificially intelligent constitutional order would be one in which machines are the primary moral subjects. I’m not going to go there in this paper, though I’m more than happy to consider it. I’m sticking with the idea of humans being the primary moral subjects because I think that is more technically feasible, at least in the short to medium term. I also think that this idea gels well with the model I’ve developed. It paints an interesting picture of the arc of human history: human society once thrived on a combination of individual and collective intelligence. Using this combination of intelligences we built a modern, industrially complex society. Eventually the combination of these intelligences allowed us to create a technology that rendered our intelligence obsolescent and managed our social order on our behalf. Ironically, this changed how we prioritised certain fundamental values.

4. Planning for the Constitutions of the Future

I know there are problems with the model I’ve developed. It’s overly simplistic; it assumes that there is only one determinant of fundamental values; it seems to ignore moral issues that currently animate our political and social lives (e.g. identity politics). Still, I find myself attracted to it. I think it is important to think about the ‘constitutional’ impact of AI, and to have a model that appreciates the contingency and changeability of the foundational values that make up our present constitutional order. And I think this model captures something of the truth, whilst also providing a starting point from which a more complex sketch of the ‘constitutions of the future’ can be developed. The constitutional orders that we currently live inside do not represent the ‘end of history’. They can and will change. The way in which we leverage the different forms of intelligence will have a big impact on this. Just as we nowadays clash with rival value systems from different cultures and ethnic groups, so too will we soon clash with the value systems of the future. The ‘triangular’ model I’ve developed defines the (or rather ‘a’) ‘possibility space’ in which this conflict takes place.

I want to close by suggesting some ways in which this model could be (and, if it has any merit, should be) developed:

  • A more detailed sketch of the foundational values associated with the different ideal types should be provided.

  • The link between the identified foundational values and different mechanisms of governance should be developed. Some of the links are obvious enough already (e.g. a constitutional order based on individual intelligence will require some meaningful individual involvement in social governance; one based on collective intelligence will require mechanisms for collective cooperation and coordination and so on), but there are probably unappreciated links that need to be explored, particularly with the AI constitution.

  • An understanding of how other technological developments might fit into this ‘triangular’ model is needed. I already have some thoughts on this front. I think that there are some technologies (e.g. technologies of human enhancement) that push us towards an idealised form of the individual intelligence constitution, and others (e.g. network technologies and some ‘cyborg’ technologies) that push us towards an idealised form of the collective intelligence constitution. But, again, more work needs to be done on this.

  • A normative defence of the different extremes, as well as the importance of balancing between the extremes, is needed so that we have some sense of what is at stake as we navigate through the possibility space. Obviously, there is much relevant work already done on this so, to some extent, it’s just a question of plugging that into the model, but there is probably new work to be done too.

  • Finally, a methodology for fruitfully exploring the possibility space needs to be developed. So much of the work done on futurism and AI tends to be the product of individual (occasionally co-authored) speculation. Some of this is very provocative and illuminating, but surely we can hope for something more? I appreciate the irony of this but I think we should see how ‘collective intelligence’ methods could be used to enable interdisciplinary groups to collaborate on this topic. Perhaps we could have a series of ‘constitutional conventions’ in which such groups actually draft and debate the possible constitutions of the future?

* This term may not be the best. It’s probably too emotive and conflictual. If you prefer, you could substitute in ‘conversation’ or ‘negotiation’.

** This ‘triangular’ graphing of ideal types is not unique to me. Morris uses a similar diagram in his discussion of farming societies, pointing out that his model of a farming society is, in fact, an abstraction from three other types.

Tuesday, September 18, 2018

Episode #45 - Vallor on Virtue Ethics and Technology


 In this episode I talk to Shannon Vallor. Shannon is the Regis and Diane McKenna Professor in the Department of Philosophy at Santa Clara University, where her research addresses the ethical implications of emerging science and technology, especially AI, robotics and new media. Professor Vallor received the 2015 World Technology Award in Ethics from the World Technology Network. She has served as President of the Society for Philosophy and Technology, sits on the Board of Directors of the Foundation for Responsible Robotics, and is a member of the IEEE Standards Association's Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. We talk about the problem of techno-social opacity and the value of virtue ethics in an era of rapid technological change.

 You can download the episode here or listen below. You can also subscribe to the podcast on iTunes or Stitcher (the RSS feed is here).

Show Notes

  • 0:00 - Introduction
  • 1:39 - How students encouraged Shannon to write Technology and the Virtues
  • 6:30 - The problem of acute techno-moral opacity
  • 12:34 - Is this just the problem of morality in a time of accelerating change?
  • 17:16 - Why can't we use abstract moral principles to guide us in a time of rapid technological change? What's wrong with utilitarianism or Kantianism?
  • 23:40 - Making the case for technologically-sensitive virtue ethics
  • 27:27 - The analogy with education: teaching critical thinking skills vs providing students with information
  • 31:19 - Aren't most virtue ethical traditions too antiquated? Aren't they rooted in outdated historical contexts?
  • 37:54 - Doesn't virtue ethics assume a relatively fixed human nature? What if human nature is one of the things that is changed by technology?
  • 42:34 - Case study on Social Media: Defending Mark Zuckerberg
  • 46:54 - The Dark Side of Social Media
  • 52:48 - Are we trapped in an immoral equilibrium? How can we escape?
  • 57:17 - What would the virtuous person do right now? Would he/she delete Facebook?
  • 1:00:23 - Can we use technology to solve problems created by technology? Will this help to cultivate the virtues?
  • 1:05:00 - The virtue of self-regard and the problem of narcissism in a digital age

Relevant Links

  • Shannon's Twitter profile

Sunday, September 16, 2018

The Institutional Critique of Effective Altruism (1): What is it?

The Stag Hunt of the Elector Frederick the Wise

In his 1754 work Discourse on Inequality, Rousseau introduced a short hypothetical scenario that has since become a famous game theory puzzle. He described two people who were hunting for food. If they each cooperated with one another, then they could successfully hunt and kill a deer. This would provide them with an abundance of food. If they went off by themselves, they could successfully kill a hare. This would provide them with some food but not as much as the deer. If one of them tried to cooperate to hunt the deer and the other went off and hunted the hare, then the cooperator would get nothing while the defector would at least get a hare:

If it was a matter of hunting deer everyone realised that he must remain faithful to his post; but if a hare happened to pass within reach of one of them, we cannot doubt that he would have gone off in search of it without scruple… 
(Rousseau, Discourse on Inequality)

This has become known as the Stag Hunt game. The payoff matrix for the game is illustrated below. The numbers in the boxes are an ordinal ranking of the possible outcomes in the game. The idea is that hunting and killing the deer is the best outcome for both players (2), hunting and killing a hare is the second best outcome (1), and getting nothing is the worst outcome (0).

Superficially, the Stag Hunt seems to be similar to the more famous Prisoners’ Dilemma. In both games, players can choose to cooperate or defect. In both games, cooperating yields the best outcome for both players if they both do it, but if one cooperates while the other defects then the cooperator will be a ‘loser’ in the game. There is, however, one big difference. In the Prisoners’ Dilemma, it’s always more rational to defect (in game theoretical parlance: defecting strictly dominates cooperating). That’s not true in the Stag Hunt. In the Stag Hunt, if you could be sure that the other player was going to cooperate, it would be more rational for you to do the same. Because of this difference, some people argue that the Stag Hunt better captures some of the collective action problems that humanity faces. Should we work together to achieve some ideal/optimal outcome? Or, since we can’t always rely on others, should we work independently to secure the next best outcome?
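The structural difference between the two games can be checked directly. Here is a minimal sketch (the move names and the `best_response` helper are my own illustrative choices, not part of the original discussion) encoding the ordinal payoffs described above — deer = 2, hare = 1, nothing = 0 — and computing each player's best reply:

```python
# Stag Hunt payoff matrix, using the ordinal rankings from the text:
# hunting the deer together = 2, catching a hare alone = 1, nothing = 0.
# Keys are (player 1's move, player 2's move); values are (p1 payoff, p2 payoff).
payoffs = {
    ("stag", "stag"): (2, 2),
    ("stag", "hare"): (0, 1),
    ("hare", "stag"): (1, 0),
    ("hare", "hare"): (1, 1),
}

def best_response(opponent_move):
    """Player 1's payoff-maximising reply to a fixed move by player 2."""
    return max(["stag", "hare"], key=lambda move: payoffs[(move, opponent_move)][0])

# Unlike the Prisoners' Dilemma, 'hare' (defection) does not strictly dominate:
# the rational choice depends on what you expect the other player to do.
print(best_response("stag"))  # -> stag (cooperate if the other cooperates)
print(best_response("hare"))  # -> hare (defect if the other defects)
```

This is why the Stag Hunt has two pure-strategy equilibria — mutual cooperation and mutual defection — whereas the Prisoners' Dilemma has only one.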

This is one of the main disputes that arises between proponents of effective altruism and their critics. Effective altruism is a movement, founded by moral philosophers such as Peter Singer, William MacAskill and Toby Ord, that argues (broadly speaking) that we should try to do the most good we can with our time, money and other resources. Most famously, proponents of effective altruism argue that wealthy people in developed countries should be giving far more of their money to saving lives in the developing world than they currently do, whether that be by paying for bednets for malaria prevention or direct cash transfers to the poor. The critics argue that this is completely wrong. Individuals shouldn’t work to transfer resources to other individuals in the developed world. That’s too limited and piecemeal. The real problem is a structural or institutional one. People in the developed world should be working to reform the institutions of global capitalism that will address the root causes of global poverty.

I wrote a long series of posts about effective altruism about two years ago. That series looked at several different criticisms of the idea. At the time, I noted that the ‘institutional critique’ seemed to be the emerging favourite amongst the critics. In the intervening years, that’s where most of the action has been in the philosophical literature. In this series of posts I want to examine this institutional critique in some detail. I start today by outlining in more detail what that critique is and how it is supposed to undermine effective altruism. I’ll be using Brian Berkey’s article ‘The Institutional Critique of Effective Altruism’ as my main source for this. It’s the best thing written on the topic, by far. I will, however, also be dipping into articles written by Joshua Kissel and Alexander Dietz, which discuss more specific issues relating to the institutional critique.

1. A Brief Refresher on Effective Altruism
It will help if we have a clear conception of what effective altruism is at the outset. As Iason Gabriel points out in his work on EA, there are ‘thin’ and ‘thick’ definitions of EA. I gave a thin definition in the introduction. According to this, EA is simply the view that you ought to do the most good you can do (whatever that turns out to be), given the time and resources available to you. As proponents of EA put it themselves:

[Effective altruism] is about dedicating a significant part of one’s life to improving the world and rigorously asking the question, “Of all the possible ways to make a difference, how can I make the greatest difference?” 
(Stanford Effective Altruists)*

This thin definition is certainly true to the aspirations of EA, but it is probably too vague to be useful. Who could disagree with the idea that we ought to do the most good we can?

This is where the ‘thick’ definitions come in. They try to provide more guidance on what doing the most good really entails. Joshua Kissel argues that proponents of EA use three heuristics when it comes to deciding how to do the most good. First, they ask themselves how important a particular action is, in the grand scheme of things. The more important it is, the more likely it is to garner their support. Second, they limit their efforts to problems that are tractable/measurable. In other words, they will want some metric that helps to confirm whether they are, in fact, doing the most good. Finally, they will focus on neglected problems, as opposed to ones that already attract a lot of attention and support. This is because they are concerned with the marginal contribution of their efforts.

These three heuristics do seem to feature heavily in EA literature, but they are still vague in one critical respect. They need some prior agreement on what is useful/important to make sense.

Brian Berkey offers a more useful and detailed ‘thick’ characterisation of EA in his article. He argues that proponents of EA are, usually though not necessarily, committed to the following four propositions. He gives them numbers but I’m going to give them descriptive names:

Moralism: “There are very strong moral reasons, grounded in fundamental values, for the well off to direct significant resources to efforts to address important moral issues”.
Welfarism: “These fundamental values include (but are not necessarily limited to) impartially promoting increases in welfare, or quality of life, for individuals, and the reasons provided by this value are at least fairly weighty.”
Efficiency: “There are strong reasons to prefer giving to efforts that will promote the relevant values most efficiently.”
Evidentialism: “We should employ the best empirical research methods available in order to determine, as best we can, which methods promote the relevant values most efficiently.”

These four propositions combine together to give a reasonably coherent moral outlook. They also help to explain many of the positions staked out among defenders of EA. This is why proponents of EA favour the idea that wealthy people in the developed world should give to certain charities in the developing world, specifically those that have a clearly measurable impact on QALYs and DALYs (sidenote: I appreciate that the binary distinction between developed and developing worlds is problematic). It also explains why they are so keen to ‘rate’ different charities in order to figure out which are the most effective.

It may also help to explain why they seem to be so opposed to institutional reform.

2. The Institutional Critique Explained

All of which brings us to the institutional critique itself. Variations of this critique have been presented by several authors. Berkey focuses on the work of Judith Lichtenberg, Lisa Herzog, Amia Srinivasan and Pete Mills in his paper, all of whose essays on the topic are readily available online. Although there are some differences between what they say, there is a common core to them all. Berkey discusses each at some length in his paper. I’m just going to pick two quotes from two of these authors that I think are representative of the kinds of concerns they raise. The first comes from Judith Lichtenberg:

[T]he maximum effectiveness strategy [endorsed by effective altruists] means neglecting programs that support advocacy for political and structural change, which are essential for addressing the deeper roots of poverty… people across the political spectrum should agree that structural changes that allow all workers to earn a decent living are preferable to welfare programs and private charity.

The second comes from Pete Mills:

[w]ithout any concept of society as a collective endeavour, we cannot address problems at their root but only those symptoms which are tractable on an atomized, individual level…poverty is presented to us as an immediate ethical demand which obscures the need for systemic change.

Both quotes speak to the idea that EA misses the point. In trying to do the most good at the margin, and in focusing on how an individual as opposed to a collective can do the most good, EA ignores the root causes of poverty (and other moral problems), and overlooks the possibility of truly revolutionary moral change.

But why is that? The suggestion from both authors seems to be that the most effective way to do good is to favour systemic change, not piecemeal change at the margins. But if that’s truly the case, why wouldn’t an effective altruist — committed as they are to doing the most good — favour that over, say, giving to a malaria charity? That’s where the four commitments identified by Berkey come into play. Two of them, in particular, seem antithetical to institutional reform: welfarism and evidentialism. The former leads effective altruists to overlook or discount non-welfare related goods; the latter leads them to overlook methods of doing good that aren’t easily measurable and quantifiable. This means that even though they might profess an ‘in principle’ openness to institutional change, they’re not really open to it because their core commitments don’t allow them to go there.

This then gives us the backbone of the institutional critique of EA, which according to Berkey consists of the following two propositions (again the names are mine but the specific content is directly lifted from Berkey):

Reformism: “There are strong moral reasons for individuals to direct resources and time to efforts to promote institutional reform, rather than directing the same resources and time to providing aid to those living in poverty.”

Incompatibilism: “Effective altruists, given their core commitments, cannot support individuals directing resources and/or time to at least some of the efforts to promote institutional change”.

But do we actually have good reason to accept these propositions? Berkey offers a trenchant critique of them in his article, arguing that there is no way to understand reformism that is both plausible and incompatible with the core commitments of EA. He also argues that proponents of the critique are indulging in a kind of hypocrisy: they are professing concern for global poverty while embracing a worldview that commits them to doing very little to address the situation. I’ll look at the details of this trenchant critique in the next post.

* This used to be the description of EA that one found on the website. It now appears to have been updated to the slightly different: "a research field which uses high-quality evidence and careful reasoning to work out how to help others as much as possible. It is also a community of people taking these answers seriously, by focusing their efforts on the most promising solutions to the world's most pressing problems."

Saturday, September 15, 2018

The Robot Rights Debate (Index)

Is our moral circle expanding to include robots? I've written quite a number of pieces about this topic over the past year or so. I've also interviewed some of the main players in this debate on my podcast. I thought it might be worthwhile collecting them altogether in one place. While I think all of these pieces are worth reading (or listening to), if you want my own views on the topic read the post entitled 'Ethical Behaviourism in the Age of the Machine'.



Monday, September 10, 2018

The Optimist's Guide to Schopenhauer's Pessimism

[A]gainst the palpably sophistical proofs of Leibniz that this is the best of all possible worlds, we may even oppose seriously and honestly the proof that it is the worst of all possible worlds’ 
(Schopenhauer, The World as Will and Representation Vol II 583)

In 1946, Frederick Copleston, a Jesuit priest famous for his work on the history of philosophy, wrote a book about Arthur Schopenhauer. He subtitled it ‘Philosopher of Pessimism’. The title was appropriate. Schopenhauer was a profoundly pessimistic man. Even the philosopher-broadcaster Bryan Magee, who is a staunch defender and advocate of Schopenhauer’s work, concedes as much. He just thinks that it is possible to separate much of Schopenhauer’s philosophy from his pessimism. He agrees that Schopenhauer was a pessimistic man, but he thinks the majority of Schopenhauer’s philosophy has nothing to do with that attitude of pessimism.

But it’s not clear if this is true. Schopenhauer was not shy about incorporating his pessimism into his philosophical work. And he did argue for his pessimism. Roughly: he argued that we ought to be pessimistic because of a mismatch between our desires and what is possible in the world. This seems to have been Copleston’s real point with the subtitle. He didn’t just think that Schopenhauer was a psychological pessimist; he thought that Schopenhauer’s entire philosophy argued for pessimistic conclusions.

In a recent post, I noted that I think of myself as a pessimistic person (about certain things) and I believe this attitude of pessimism to be epistemically warranted. Nevertheless, I considered an argument for thinking that I should be more irrationally optimistic. Those who read that earlier post will know that I ultimately concluded that I probably should be more optimistic, though whether I can succeed in that aim depends on whether I can find some goal or project about which I am optimistic. I now want to consider the opposing view — the one from Schopenhauer’s philosophy. Maybe instead of being an irrational optimist I should, in fact, be a rational pessimist? And maybe there is a paradox involved in this? Maybe if I become a rational pessimist I will end up with a more sanguine outlook on life? That seems to be what happened to Schopenhauer. Iris Murdoch once observed that Schopenhauer seems to have been ‘merry’ about his pessimism. Maybe I can be too?

And so, I embark upon another journey of self-discovery. I do so by first looking at a recent attempt by David Woods (in his publicly available PhD thesis) to reconstruct Schopenhauer’s philosophy of pessimism. I then look at some more recent riffs on Schopenhauer’s argument. As Woods points out in his thesis, Schopenhauer’s pessimism has many sources. Contrary to the prevailing wisdom, Schopenhauer presents several different arguments for pessimism, each of which is worthy of some consideration. Nevertheless, there is one argument that represents his primary case for pessimism: the argument that willing entails suffering. I’ll be focusing on that argument in what follows. I’ll start with Woods’s version of it and then follow up by looking at a recent paper by Alexandre Billon that criticises and modifies Schopenhauer’s argument. I’ll conclude on a note of optimism.

1. The ‘Willing Entails Suffering’ Argument: A First Pass
Introspect for a moment. Think about your life and your daily routines. Think about your hopes and dreams. What is the one constant across all these introspections? According to Schopenhauer the one constant is the sense of desire or, more properly, the sense of ‘will’. There are things about you and about your environment that you want to change. You feel hungry and desire food, you feel tired and desire sleep, you feel ignored and desire status. There is a gap between the world as you would like it to be and the world as it actually is. You act — through your will — to fill that gap. When you succeed you feel satisfied, maybe even happy. But then the gap opens up again and you repeat the cycle.

This constant striving and willing was, for Schopenhauer, the essence of life. This could be proved by both introspective evidence (you can feel the constant presence of the will inside your own body) and observational evidence (every living creature can be seen to be striving for something in the world). In addition to being the essence of life, the will was, for Schopenhauer, the origin of suffering and the justification for pessimism. He sets out this position in the following passage, which is worth quoting in full:

All willing springs from lack, from deficiency, and thus from suffering. Fulfillment brings this to an end; yet for one wish that is fulfilled there remain at least ten that are denied. Further, desiring lasts a long time, demands and requests go on to infinity, fulfillment is short and meted out sparingly. But even the final satisfaction itself is only apparent; the wish fulfilled at once makes way for a new one; the former is a known delusion, the latter not as yet known. No attained object of willing can give satisfaction that lasts and no longer declines…Therefore, so long as our consciousness is filled by our will, so long as we are given up to the throng of desires with its constant hopes and fears, so long as we are the subject of willing, we never obtain lasting happiness or peace…
(Schopenhauer, World as Will and Representation Vol 1, 196).

This passage describes the tragedy of the human condition. There is a lot going on within it, but we can extract a relatively simple argument from this complexity:

  • (1) The will is the essence of life.

  • (2) To will something is to experience a lack or deficiency and thus to suffer (i.e. willing entails suffering).

  • (3) It is impossible to fully and finally satisfy the will.

  • (4) Therefore, to be alive is to suffer, continuously.

To paraphrase the famous U2 song: there is a reason why you still haven’t found what you’re looking for; you never will. That’s a pessimistic conclusion, if ever there was one. It’s not all that different from what one finds in Buddhism, of course, and Schopenhauer was heavily influenced by Eastern philosophies, but the reasons leading up to the conclusion are slightly different and more distinctively Schopenhauerian. They are also quite controversial and in need of some explanation. I’ve already given the argument for premise (1) (introspection + observation) but what about the other two premises? What can be said in their favour? Let’s look at both in more detail.

2. Defending Premises (2) and (3)
Let’s start by looking at premise (2) and the claim that willing implies suffering. Schopenhauer’s argument for this is almost logical or conceptual in nature. His claim is that if you will something (e.g. a cup of tea) it can only be because you currently experience some deficiency with respect to the object of your will (e.g. you are thirsty or in need of comfort). You are currently in a state of privation that must be rectified. This is particularly true if your will is to be motivating. You may have some vague wishes or aspirations that do not motivate you to act. This might imply that you don’t really suffer from a deficiency with respect to the objects of those wishes and aspirations. But as soon as you are motivated to act — to will something with your bodily movements — you must be experiencing some deprivation.

This is a neat piece of reasoning, and probably corresponds to the phenomenology of desire in many cases. It certainly chimes with the vast majority of my experiences. If I desire food, it is because I am suffering from some degree of deprivation, however minimal it may be. That said, there are some obvious objections to this interpretation of the link between will and suffering.

One obvious objection is that sometimes it seems like we can desire or will what we already have. For example, I’m currently sitting with a nice cup of tea in my hand. Ten minutes ago, I desired this cup of tea and so I got up and made it. Now I’m content. I have exactly what I wanted and, more importantly, I continue to want this state of affairs to persist. From the first person perspective it doesn’t feel like this continuing desire for tea stems from any deprivation or suffering.

Schopenhauer’s answer to this apparent counterexample is to simply deny that the continued desire for the cup of tea is a genuine instance of the will at work. For Schopenhauer, the will only arises when there is some action that demands a change in an existing state of affairs (technically: Schopenhauer held that the will and bodily action were identical). You cannot, according to Schopenhauer, will a current state of affairs. It may well be that I am content drinking my cup of tea right now, but I’m looking at it in the wrong way. That’s not an example of the will in action. Ten minutes ago, when I desired the tea and got up and made it, was when the will was manifest. At that moment in time the will did imply privation.

This might not be a wholly convincing response but it is worth noting that the original objection is probably somewhat academic anyway. I’d be perfectly happy to accept that I can will a current state of affairs. What I would say in response, however, is that the desire to maintain the present reality implies, at least in part, a fear of what will happen if it is not maintained (i.e. an anticipation of suffering). And, furthermore, there are still many examples in our lives when the will does arise from a feeling of deprivation and hence suffering.

A better objection is that the suffering implied by the will might not be all that great. It’s probably true that when I am hungry or thirsty I am suffering to some degree. But the degree is not that severe and I am in the fortunate position where I can easily restore myself to a state of contented satiety. It’s not like my life is made completely miserable by these occasional moments of suffering, is it? Well, there are a few things to bear in mind. First of all, not everyone is so lucky and, even if it is easy for me to ‘close the gap’ between desire and reality in the case of food and hunger, there are many other scenarios in my life where the gap is much harder to close. Furthermore, and this is something David Woods emphasises in his discussion of Schopenhauer, many small-scale and trivial moments of suffering, over the course of a lifetime, add up to something pretty serious. We might overlook or ignore them at the time, but when we look back over the full sweep of our lives the accumulated instances of micro-suffering will be significant.

That’s enough about premise (2). What about premise (3) and the claim that the will is never satisfied? At first glance, that seems obviously false. The will is frequently satisfied. Think back to my cup of tea. I had a desire for tea, I made the tea, my will was sated. The philosopher Ivan Soll used this kind of reasoning to dismiss Schopenhauer’s argument. In the search for full and final satisfaction, Soll seems to have taken Schopenhauer to be denying the reality of satisfaction. Woods argues extensively against this interpretation of Schopenhauer. He was not denying the reality of satisfaction. He was just arguing that all such satisfaction is fleeting and temporary. My present state of sated contentment will eventually pass and I will get thirsty again. It’s a never-ending treadmill of desire. The only way to avoid suffering is to get off the treadmill and give up on desire.* But this is easier said than done because, as Schopenhauer pointed out, many people desire to have desires. This is what boredom consists of: a listless longing to find a specific desire that will capture your attention.

Schopenhauer also made a deeper, philosophical argument about the relationship between the will and the feeling of contentment that comes when it was satisfied. We typically assume that we desire things because they are good for us or because they make us happy. This implies that we see the value of the object of desire as primary and the desire itself as secondary (something that derives from the object of desire). This is exactly wrong, according to Schopenhauer. It’s the desire (the will) that is more fundamental. Our desiring something is what makes us think it is good or a source of happiness. Once we attain the desired thing its goodness and happiness-inducing power dissipates. The will has to move on to something else. This is what he was getting at when he said that the ‘wish fulfilled…is a known delusion’, and it is part of the reason why it is impossible to fully and finally satisfy the will.

For what it is worth, I find Schopenhauer’s claim that it is impossible to fully satisfy the will to be pretty plausible. The constant shifting and dissatisfaction of the will is manifest in my own life. Contentment and happiness are temporary at best, elusive at worst. This does, however, lead directly to probably the most pessimistic aspect of Schopenhauer’s philosophy: the negative conception of happiness.

3. The Negative Conception of Happiness
Schopenhauer argues that happiness is not a positive property. It is, rather, a cancelling of the evil of suffering:

All satisfaction, or what is commonly called happiness, is really and essentially always negative only, and never positive. It is not a gratification which comes to us originally and of itself, but must always be the satisfaction of a wish. 
(Schopenhauer, The World as Will and Representation Vol I, 319)

The primal state for Schopenhauer is that of willing, of desiring some change. This, as was argued above, is a state of deficiency or deprivation. When the will is satisfied we are happy and content, but this is only because we have temporarily quieted the beast. It’s not because we have entered some truly positive state of being. Think about it like this. Imagine you have a scale that measures the affective state of your life from moment to moment. We might be naturally inclined to think that this scale consists of negative affective states (states of pain, deprivation and suffering) and positive states (states of happiness, joy and fulfillment). We might also naturally think that we spend our lives bouncing back and forth from the negative to the positive ends of the scale. This is illustrated below.

As natural as this thought might be, Schopenhauer argues that it is wrong. We don’t spend our lives bouncing back and forth from one end of the scale to the other. We spend most of our time in the negative end of the scale (the state of willing and desiring). We temporarily win reprieve from this by satisfying our desires and reaching the neutral point (the zero point), but we never get into the positive end. Ever. In fact, on Schopenhauer’s account, there is no positive end of the spectrum. What we call ‘happiness’ is just the neutral state.

This has some profound consequences. Some people might think that whether we should be pessimistic or optimistic depends on the empirical evidence. They might be tempted to go out into the world and add up all the positive and negative states that we and others experience and then reach some determination. If the positive states outweigh the negative, then they’d say things aren’t so bad. If the negative states outweigh the positive, they might have a counsel of despair. But it all depends on the aggregative total. It’s not something that can be assessed a priori from the philosopher’s armchair.

But Schopenhauer liked his armchair. He didn’t see any reason to go out into the world and tot up all the negatives and positives. The so-called ‘positives’ are not really positive. They are neutral. The negatives are the only states of being with an actual magnitude. So the negatives will always outweigh the positives. He puts the point evocatively and forcefully in the following passage, which is probably my favourite from his work:

Far from being the character of a gift, human existence has entirely the character of a contracted debt. The calling in of this debt appears in the shape of the urgent needs, tormenting desires, and endless misery brought about through that existence. As a rule, the whole lifetime is used for paying off this debt, yet in this way only the interest is cleared off. Repayment of capital takes place through death. And when was this debt contracted? At the begetting. 
(Schopenhauer, The World as Will and Representation Vol II, 580)

You think having to repay a mortgage is bad: try being alive.

4. The Problems with Schopenhauer’s View
As you might have gathered, I have a lot of sympathy for Schopenhauer’s view. Perhaps it is a ‘mid-life’ thing. When you are young and on the up, you can feel enthusiastic about the challenges and opportunities that life throws your way. But when you reach the mid-point, the routine grows tedious, and you become more aware of the descent to death. You also start to see the upward cycle for the illusion that it was. You were never really on the up. You were always on the decline. There is something paradoxically joyous in this realisation. I once heard the musician and song-writer Nick Cave say that nothing made him happier than writing a sad song that captured something true about the human condition. I agree. That’s why, I think, there is an ebullient quality to Schopenhauer’s writing: he has captured something both sad and true about the human condition and he hasn’t flinched.

Yet despite my sympathy for it I still think that the argument has its flaws. Let’s start with the one that I think is most obvious: the phenomenological accuracy of the negative conception of happiness. While I enjoy Schopenhauer’s interpretation of happiness, it doesn’t feel right to me. When I am in a state of elation or joy — as I sometimes am — I don’t experience this as merely surfacing after being submerged in suffering. It feels genuinely positive to me — like something that adds positive value to life and doesn’t merely cancel a debt. Schopenhauer might argue that I am wrong to think about it in this way. But I think that’s a tough sell: my actual experiences of happiness have to carry some epistemic weight. His negative theory makes sense within his larger philosophical scheme, but am I more certain of his theory than I am of my own feelings? I don’t think so. And it’s not just me who thinks this way. When surveyed on general life satisfaction and happiness, many people self-report a positive set of feelings, not a negative or merely neutral set. The negative theory might apply to some cases — particularly the feelings of satiety after satisfying some basic bodily need — but I’m not sure that it applies to all.

This relates to another objection. This is one that Alexandre Billon raises in his discussion of Schopenhauer. He argues that Schopenhauer’s argument for pessimism is invalid because it assumes all happiness flows from the satisfaction of the will. But that’s obviously not the case. There are, as he puts it, ‘non-conative sources of happiness’. That is: experiences of joy that arise without any preceding desire. Things sometimes just happen to us: a friend calls by unexpectedly and makes us laugh; our partners surprise us with a gift; you hear a song you like on the radio. You weren’t in a state of deprivation prior to these things happening, but now they make you happy. These moments of unwilled happiness aren’t merely occasional; they are frequent, dotted throughout our lives.
Furthermore, as Billon argues, there are some attitudes we can adopt toward the desires we do have that mitigate or reverse any suffering they might cause. Indeed, these attitudes aren’t always even conscious choices; they are just things that we naturally tend to do. He mentions two of them:

We can have ‘eroticised’ desires: In other words, we can enjoy the anticipation of the satisfaction of desire just as much as the satisfaction. In typical Gallic style, Billon uses the example of seduction to illustrate the point. It’s often the teasing and flirting that is more pleasurable than the sexual act itself. This is true for other desires too (and may be the product of our innate biological reward system). It means that, contrary to Schopenhauer, desiring something can be a pleasure, not a pain.

We can have ‘mourning’ desires: In other words, we can be quite happy and contented even though our desires remain unfulfilled and may never be fulfilled. The phenomenon of ‘hedonic adaptation’ enables this. We might lose someone or something we love and feel sad for a while, but then recover our baseline level of happiness. What’s more, when we do this we don’t necessarily lose the desire for the thing we have lost. We may still want our deceased parents back and yet still not feel despair or unhappiness as a result.

Both of these attitudes toward desire might be irrational, and Schopenhauer might argue that we shouldn’t look on our desires that way, but the reality is that we do. Irrational or not, our experience of the world is not as despairing or painful as Schopenhauer makes out. That’s at least some reason for hope.

The possibility of such ‘irrational’ optimism leads Billon to develop a new and improved argument in favour of Schopenhauerian pessimism. It’s a strictly ‘rational’ form of pessimism which holds that if you are being objectively rational, then you should agree, roughly, with what Schopenhauer has to say. I can’t get into all the intricacies of his position here; I recommend reading his article if you are interested. The gist of it, however, is that a rational person will try to do their best to fulfill their desires and so if they desire anything at all they will remain, at least partly, dissatisfied (because it is irrational to hold onto a fulfilled desire). The result is that if the rational person has any desires at all, he or she is likely to be less than fully happy. That’s a somewhat pessimistic conclusion, but Billon recognises that it is not as dramatic or potent as Schopenhauer’s original conclusion. It is also an empirical claim: the degree of unhappiness depends on how many desires can be fulfilled and how much dissatisfaction they cause if they remain unfulfilled. So we can’t just be armchair pessimists. We have to get out into the world, experience it, and study it for what it is, not what our philosophical system tells us it is.

So where does that leave us and where does it leave me? Ironically, it leaves me in a slightly more optimistic mood than I began. I think there is something to what Schopenhauer has to say, and I think we can underestimate the unrelenting power of the will in our lives. But I don’t think the will is always and necessarily a cause of suffering, and I don’t think the satisfaction of the will is our only source of happiness. While I suspect that I will always be somewhat gloomy in my outlook, I think reading Schopenhauer allows me to appreciate that the glass might be half full, not half empty. He would have hated that.

* Another point of similarity between Schopenhauer and the Buddhists is that they both advocate similar solutions to the problem of suffering: trying to give up on the desire-satisfaction cycle and attain some state of equanimity. Though, true to form, Schopenhauer seems to have been more pessimistic about this than the average Buddhist.

Friday, September 7, 2018

The Case for Irrational Optimism: A Pessimistic Perspective

I am a pretty pessimistic person.* At least that’s what I am told. I rarely see a silver lining without a cloud. When things are going well, I assume that they are about to get worse. When things are going badly, I assume that I am still some distance from rock bottom. Some of this pessimism is protective. If you are always expecting the worst you can be pleasantly surprised when reality fails to meet your expectations. But many times it is destructive. This is particularly true in personal relationships where constantly assuming the worst can be pretty frustrating for your partners and friends.

When challenged on my pessimism, I usually respond by saying that I am ‘realistic’ as opposed to pessimistic. I see things as they are, without the distorting lens of those rose-tinted glasses that others seem to wear. I explain to people that there is something known as ‘depressive realism’, which is the observed tendency for those with depression to see the world for what it is, not what we would like it to be. Depressive realists are, sometimes, better able to cope with tragedy and misfortune. It’s what they expect: it doesn’t unsettle them all that much. Most people are the opposite. They are default optimists. They not only think that things are better than they are, but that they are going to get better. Their lives are viewed as upward cycles of progress.

I don’t understand these people. Surely such beliefs are irrational and without warrant? But maybe I should try to understand them? Maybe I should join their ranks and become an irrepressible and irrational optimist? This might be easier said than done, but there is some hope. My pessimism is not monolithic and unwavering. It has some subtlety. I tend to be pessimistic about particular parts of my own life, not all of it, and not about the lives of others, nor about the world as a whole. I assume that I am inferior to most people and that I won’t be able to cope with the challenges life throws my way, but I think the opposite about my friends and family. Furthermore, I know that there is some evidence suggesting that those with an irrepressibly optimistic outlook — even if that optimism is unrealistic or unwarranted — do better in life, e.g. in managing potentially fatal illnesses and difficult relationships.

So should I be an irrational optimist? Should I cultivate a healthy disregard for the depressive aspects of reality? The philosopher Lisa Bortolotti has done a lot of work on this topic. Through a combination of philosophical analysis and psychological investigation, she has tried to figure out exactly which kind of irrational optimism is best. And she has a theory, which she sets out in a recent paper ‘Optimism, Agency and Success’. I want to explain her theory in what follows and consider how it might apply to my own life. This isn’t intended as an exercise in self-therapy. I think the theory is interesting and worth considering. Furthermore, I hope that other people with a pessimistic outlook can learn something from this inquiry.

1. The Varieties of Irrational Optimism
Let’s start by considering the nature of optimism. Psychologists have long noticed that most people are irrational optimists. They believe things about themselves and the world that are not warranted by the evidence. For example, most people think they are above average in just about any trait or capacity you care to measure (height, intelligence, generosity etc). Some experimental findings suggest that this optimism is adaptive: it helps people deal with the stresses and strains of life. But other findings suggest that it is not: that people with irrational optimism do worse than others when the going gets tough.

The currently favoured explanation for these differences has to do with the degree of ‘reality distortion’ involved in the optimism. Optimistic beliefs that mildly distort reality are thought to be beneficial; ones that stretch the fabric of reality too far are not. So if you are like Steve Jobs, with his famous ‘reality distortion field’, you should watch out: it’s going to catch up with you eventually.
But when you think about it that can’t be quite right. After all, Jobs’s reality distortion field served him pretty well through some difficult times at Apple and elsewhere. It may even have helped him turn the company into the success it now is, even if the cancer got him in the end. Maybe reality distortion itself has nothing to do with the beneficial effects of optimism?

This is, roughly, what Bortolotti argues: that the benefits of optimism have nothing to do with the degree of reality distortion it involves. There are two reasons for this. The first is that some optimistic beliefs, even if they are not epistemically warranted, are not really ‘distortions’ at all: they are actually true. The second is that some empirical findings suggest that highly distortive beliefs — perhaps like those of Steve Jobs — can be beneficial despite their lack of realism. She consequently thinks we need a new theory of optimism that clarifies exactly when it is beneficial and when it is not. She thinks she has one.

To understand Bortolotti’s theory you need to understand some of the conceptual terrain in which it is situated. One important distinction within that terrain concerns the different kinds of optimism. The first involves positive illusions:

Positive Illusions: Positive beliefs about yourself or the world that are the product of biased reasoning and hence are not always warranted by the evidence. They come in three main forms:
Illusions of control: Assuming that you can control external events that are not really or easily within your control.
Illusions of superiority: Assuming that you are better than average with respect to various traits and capacities.
Optimism bias: Assuming that the future will be largely positive and that negative events are unlikely to feature.

Bortolotti is very clear that illusions should be kept conceptually distinct from distortions. Illusions are not always false. At least some of the people who think they are better than average with respect to intelligence or attractiveness must be better than average. The point is that they will tend to think this anyway, irrespective of whether they have good reason to. Their optimism is the product of a systematic bias in how they think about the world, not necessarily of some mismatch between their beliefs and the world. The systematic bias is what we are calling an illusion.

That’s just the first kind of optimism. The second kind is dispositional optimism. This isn’t a set of discrete beliefs about yourself or the world around you. It is a stable character trait that dictates how you react and orient yourself to the world. If you are a dispositional optimist you will tend to assume the best and respond positively toward challenges, without necessarily having specifically optimistic beliefs about what is going to happen or about those challenges. Bortolotti explains in her article that dispositional optimism is measured in a different way than positive illusions and seems to be a reasonably fixed trait that people have over the course of their lives. Positive illusions are different because they can wax and wane in response to different events.

Most of the discussion in therapeutic psychology has focused on the benefits or disadvantages of positive illusions. Since positive illusions are systematic biases, and since they seem to be responsive and malleable, the question arises as to whether we should discourage or encourage them. When it comes to irrational beliefs more generally (i.e. those not specifically related to optimism) two theories have emerged:

The Traditional View: Associated with some of the early founders of cognitive behavioural therapy, this view holds that irrational beliefs and biases are contrary to our well-being. One major goal of psychotherapy is thus to remove these biased beliefs about reality and replace them with a more evidence-based view of reality.

The Tradeoff View: Some irrational beliefs are good, some are bad. It’s all about achieving the right balance.

Both of these views make predictions about positive illusions and their link to psychological well-being. The traditional view predicts that positive illusions are counterproductive and should be removed. The tradeoff view holds that they should be encouraged in certain circumstances. It’s just a case of being able to identify those circumstances. Although she has problems with both views — because they conflate bias with distortion of reality — Bortolotti develops a theory that is more in line with the tradeoff view.

2. Bortolotti’s Theory - Agency-Based Optimism
The essence of Bortolotti’s theory is that it is an agency-based view of optimism. One of the defining features of being a human is the experience of agency. We have goals, projects and plans and we use our capacities to achieve those goals, projects and plans. When we act as agents we assume, to at least some extent, that our actions allow us to control the outcome. This may not always be true, of course. On some occasions our actions may have little causal effect on the outcomes we desire. Buying a lottery ticket and squeezing it tightly to your chest during the draw doesn’t make it more likely that you will win the lottery. But oftentimes we have an illusion of control: we really think we can control the outcome even though we have no good reason to believe this. This positive illusion of control, along with a positive belief concerning one’s capacities to learn and develop one’s talents, is central to Bortolotti’s theory. She argues that those who think they have the capacity to realise their goals, even when this is a distortion of reality, are more likely to do so than those with a more fatalistic outlook.

But that’s only part of it. It’s not enough that people have positive illusions concerning their capacity to achieve certain outcomes. They must also have positive attitudes concerning those outcomes. In other words, they must believe that what they are doing is valuable or worthwhile. This gives them a sense of purpose and optimism, which combined with the belief in their capacity to bring it about, sustains them in the face of a world that doesn’t always agree. Those with more doubts about the merits of their plans will have less resilience and perseverance. Again, this is true even if there is some distortion of reality involved. Many people might question whether developing the iPhone, or a sleeker laptop, really is a good thing for humanity. What mattered for Steve Jobs was that he thought it was. It was a mission that gave him a sense of purpose and meaning. I suspect the same is true for many other high-achieving individuals.

I have tried to illustrate this below.

To be clear, this is not a theory that Bortolotti plucks out of thin air. She develops it by inference from two particular case studies of optimism. The first concerns positive illusions in personal relationships. The second concerns positive illusions in healthcare. I’ll briefly describe both.

The relationship case study focuses on positive beliefs concerning one’s partner. It’s common enough for those in the first blushes of infatuation to idealise their partners. They think their partners are better than average and only see the good in them. This idealisation is usually unwarranted. Nobody is perfect. Nevertheless, it is commonly thought to help solidify a relationship in the short-to-medium term.** The danger is in the long-term. What happens when evidence mounts suggesting that the partner isn’t that great after all?

According to one theory — the disappointment theory — we’d expect the initial idealisation to have a negative effect on the relationship. As the partner fails to live up to your original conception of them you will grow disappointed and weary, maybe eventually ending the relationship. According to another theory — the self-fulfillment theory — we’d expect it to have a positive effect. The idealising partner will tend to ignore or downplay evidence that contradicts the idealisation and work positively to ensure that the partner lives up to the initial expectations. This latter theory has been endorsed by the empirical work of Sandra Murray and her colleagues. They find that those who idealise their partners do better in the long-run. They argue that there are three mechanisms that help to do this: (i) buffering, i.e. the idealising partner has a strong sense of security and confidence in the relationship and is not swayed by conflict or doubt; (ii) transformation, i.e. the idealising partner reinterprets weaknesses as strengths and confronts problems rather than running away from them; and (iii) reflective appraisals, i.e. the idealised partner starts to see themselves as the idealising partner does and they try to live up to the idealisation. This is all consistent with the agency-based theory of success: the initial idealisations might be wildly distorted but if you have them, you are likely to work to narrow the gap between reality and perception.

The healthcare case study focuses on positive beliefs concerning one’s likelihood of recovering from or coping with a serious illness. Looking at some famous studies done by Shelley Taylor and colleagues on patients with HIV and breast cancer, Bortolotti finds evidence suggesting that those with both the illusion of control over health outcomes and optimism about their future health prospects do better than those with a more pessimistic outlook. There are two alleged mechanisms at play here: (i) those who are optimistic experience less stress (and stress may have a deleterious effect on health) and (ii) those who are optimistic about their capacity to control their health outcomes are more likely to engage in protective behaviours. This is disconcerting for someone like me who thinks that we should be sceptical about many health-related claims — particularly those concerning which lifestyle choices are protective — but is again consistent with the agency-based theory of optimism. Also, as Bortolotti is keen to point out, in none of these case studies does the degree of realism appear to be a relevant factor: sometimes the patients had wildly distorted views about their health and their capacity to control it.

Although she doesn’t mention it, I think Philip Tetlock’s work on ‘superforecasters’ also lends credence to the agency-based theory. In trying to figure out who was best at predicting future events, Tetlock found that people who thought it was genuinely possible for them to do this and, more importantly, that this was a skill that they could hone and develop, did best.

You might wonder whether there is a paradox in all of this. If people with irrationally optimistic beliefs about their agency do better than those with more pessimistic and fatalistic beliefs, is their optimism actually irrational? Well, I guess that all depends on what you mean by ‘irrational’. Bortolotti doesn’t use the term ‘irrational’ in setting out her theory, but I think it’s clear that she would make a distinction between subjective and objective rationality. The better-performing optimists might be subjectively irrational (i.e. acting without epistemic warrant), even if they are objectively rational (i.e. there is evidence to suggest that they do better than the pessimists). Even then, the degree of objective rationality might be in doubt. An irrational optimist might do better than a pessimist, but their degree of optimism might not match how much better they actually do.

3. So should I be an irrational optimist?
The upshot of all this is that there does seem to be some reason to endorse the agency-based theory of optimism. Obviously, I would like more research to be done to see whether it holds up under a variety of different conditions, but I am willing to accept it, for the sake of argument, for the time being. The critical question for me is whether, assuming it’s correct, I should take it onboard in my own life. Should I try to cultivate an irrationally optimistic outlook? Should I be more confident in my own aspirations and abilities? Should I try to develop the belief that I can control outcomes that I currently think are beyond my reach? Should I assume that it is possible to hone my talents and abilities to achieve my goals?

Maybe. It’s worth noting that Bortolotti herself is cautious. Although she thinks the evidence does support the idea that optimistic, agency-related beliefs are positively correlated with achieving outcomes, she warns us against assuming that we are invulnerable. That seems like good advice to me: believing that you are invulnerable is just an optimistic form of fatalism, which is not in line with the agency-based theory. But even if I accept this note of caution, I find myself at odds with the theory.

The problem for me is with the other half of it: the optimism about the goals themselves, and not just the capacity to achieve them. This is where I fall down. There are times when I am pretty optimistic about my own capacities. For example, I’m optimistic about my capacity to write this piece. I think I’m better than average at writing pieces like this; that I can improve at doing so; and that I will succeed in finishing it. I don’t suffer from writer’s block or some paralysis of self-confidence. But that’s because I think what I am doing has some value. It’s fascinating to learn about the relationship between optimism and life outcomes; I feel like I’m learning something about myself through the process of writing; and I’m sharing something that might be of value to others. All of this sustains me in my actions.

But a lot of the time I doubt the wisdom of what I am doing, or I am deeply conflicted about it. On these occasions, I tend to lose all positive self-belief. This is particularly true in relation to my academic work and my personal relationships. I have no idea whether what I am doing with my life is worthwhile. This is true at both an objective and subjective level. In other words, I have doubts about whether what I am doing is good for the world as a whole and whether it contributes to my own well-being and happiness. I worry, for example, that much of the academic work I do is without value or that its value is, at the very least, highly uncertain. Some of my academic colleagues are moral crusaders. They are trying to make the world a better place, for example by arguing for human rights and justice for all. They seem very confident that they are doing the right thing. But I have no idea whether my talents and expertise can be used to make the world a better place. Does the world need another piece on the ethics of sex robots? Probably not. Do I get a sense of purpose from writing about it? I’m not sure. Indeed, I find that it’s only really when I ignore the bigger picture, and focus purely on curiosity and intrinsic fascination, that I attain some degree of optimism about what I’m doing. But then I feel that I’m being selfish and self-indulgent and so I step back and try to get a wider perspective, from which I lose all sense of optimism.

I’m sure these thoughts are not uncommon.*** I suspect that even those people I envy for their irrepressible optimism and moral conviction entertain them from time to time. But when they are pervasive, these thoughts undermine the potential therapeutic uses of irrational optimism. Cultivating irrational optimism about one’s capacities and talents might be possible, and hence valuable, if one already has goals about which one is motivated and optimistic. But if you lack those goals, and if you have been trained to question and critique nearly every goal, it’s difficult to find the motivation for doing this. You somehow need to fall into a set of goals that gets your juices flowing to get the process started. If you stay at the reflective, critical level, you never will.

The only thing I can think of that might work for someone like me would be something like Will MacAskill’s combination of effective altruism and moral uncertaintism. For those that don’t know, Will MacAskill is one of the founders of the effective altruism movement. This movement is dedicated, at the broadest level, to doing the best for the world, whatever that turns out to be. Proponents of effective altruism do advocate specific policies for doing good, several of which have been controversial (if you’re interested, I wrote a series of blog posts on critiques of effective altruism). But what interests me more is the way in which MacAskill’s academic work on moral uncertainty complements his approach to effective altruism. MacAskill’s academic work acknowledges that there is plenty of uncertainty about what the best thing to do is. It then tries to work out a decision procedure for doing the right thing even when you don’t know what the right thing to do is. Figuring this out seems like it might be the only thing that someone like me could be optimistic about, since it at least acknowledges and tries to work with pervasive uncertainty about the value of one's actions. But there’s a problem with this too, since there is room for doubt and uncertainty about the best approach to dealing with uncertainty.

So, while I would like to be an irrational optimist, I’m not sure that I can be one given my general attitude to the world and my place within it. I can, at best, give a conditional thumbs up to following Bortolotti’s model: if I can find the right goals, then I probably should cultivate irrational optimism about my ability to achieve them. But until I do that, I’ll have to hang back. I may like the idea of putting a dent in the universe, but before I proceed I want to make sure it’s the right kind of dent.

* I don’t know if this has always been true. I’ve probably become more pessimistic as I’ve aged and I’m probably currently at a personal peak of pessimism. That said, I have always been drawn to a darker view of the world and to the sense that much of human life is tragic. Friends of mine from school will confirm. 

** For what it’s worth, this seems inconsistent with my own experience. I often see the worst in partners in the short-term but then eventually grow to like them more as evidence of their goodness accumulates. 

*** For those who are familiar with it, Thomas Nagel’s article ‘The Absurd’ echoes some of these thoughts.