
Monday, February 29, 2016

Technological Growth, Inequality and Property Price Increases: An Explanation?





This post is a bit of a departure for me. I’m not an economist. Not by any stretch of the imagination. I dabble occasionally in economics-related topics, particularly those concerning technology and economic theory, but I rarely get involved in the traditional core of economics — in topics like property prices, economic growth, debt, wealth inequality and the like. But it’s precisely those topics that I want to get involved with in this post.

I do so because of an observation: property prices in large urban areas (think London and New York) seem to have increased rather dramatically (in real terms) in the past few decades. Sure, there have been property booms and busts — some quite dramatic — but they haven’t really impacted the steady upward trend for prices in large urban areas. These price increases come at a time of increasing wealth inequality. Since approximately 1980, we have seen a fairly noticeable rise in wealth inequality in Western countries. Much of that wealth is now accumulating in property-related investments and is driving middle-to-lower income earners out of the market, unless they are willing to take on significant amounts of debt. Why is this happening?

Adair Turner (former head of the financial regulator in the UK) has an intriguing answer to this question in his recent book Between Debt and the Devil. He suggests that the rise in property prices in large urban areas is driven, at least in part, by exponential improvements in technology. As you might imagine, the suggested link between technological growth and rising property prices is designed to pique my curiosity. It joins together two areas of interest: one professional, the other personal.
Now I don’t have the knowledge to critique Turner’s argument; but I do have the ability to render its logic more transparent. So that’s what I propose to do in the remainder of this post. I’ll start by motivating the argument a bit more, looking at some of the facts and figures concerning wealth inequality and property-investment.


1. Is wealth concentrated in property?
Turner’s argument develops out of a series of factual observations. Chief among them are the claims that (a) wealth inequality has increased; (b) the wealth that is out there is increasingly concentrated in property-related investment, particularly in land in large urban areas; and (c) the amount of credit-created money is increasing and is also concentrated in property. Is there any evidence to support these observations?

The support for (a) comes from the work of economists like Thomas Piketty and Anthony Atkinson. Piketty’s work is, of course, particularly famous in this respect. It tells a by-now familiar story. The gist of that story is as follows: there were huge levels of wealth inequality in developed economies in the late-19th century. Most of the wealth was concentrated in the hands of relatively few industrialists, capitalist tycoons and aristocrats. During the first 70-80 years of the 20th century, this wealth inequality started to go down. There were a number of causal contributors to this. I’ll mention two. First, the two world wars witnessed a massive destruction of capital (land, property, machinery etc), which wiped out a lot of wealth. Second, after the second world war, economic growth in the richest economies, particularly the US, exceeded the growth in wealth: this allowed for some flattening of the wealth disparities in society and led to the rise of the middle class.

Since approximately 1980, this trend has reversed itself. We are now returning to levels of wealth inequality not seen since the late-19th century. Concern about this has reached the popular consciousness. It can be seen in the battle-cries of the 99% and the rhetoric of politicians like Bernie Sanders and Jeremy Corbyn. The empirical support for this story is meticulously documented in Piketty’s work. What is particularly interesting about this empirical support is how it also illustrates the increasing concentration of wealth in property. Consider the following diagram, taken originally from Piketty, but reproduced in Turner’s book:

Piketty’s data on capital as a percentage of national income in France, reproduced in Turner’s book.

The diagram shows the ebb and flow of capital as a percentage of national income, in France, since the 1700s. One thing is particularly noticeable: the amount of wealth concentrated in housing has increased dramatically since 1970, going from about 120% of national income in 1970 to 371% by 2010. Turner reports that something similar is true in the UK, with housing-related wealth going from 120% to 300% over the same period. Furthermore, he argues that Piketty’s data doesn’t represent the full picture as commercial real estate also accounts for a significant percentage of non-housing related wealth. If we dive into the data even deeper we notice other interesting trends. Most of the increase is accounted for by increases in the price of land, not in the prices of the buildings that sit on top of that land. Indeed, figures from Knoll, Schularick and Steger suggest that approximately 80% of the increase in house prices in advanced economies since 1950 can be attributed to land. This is pretty startling and it all provides support for (b).

Which brings us to (c). Credit plays an important role in the story of increased property prices. I teach a course on banking law. One thing I am always keen to impress on my students is that, in a fractional reserve system, banks actually create money; they don’t simply move it about from place to place. They do this through debt contracts (loans, mortgages etc). The amount of money created in this form has been steadily increasing since about 1950. Figures from Reinhart and Rogoff, reproduced in the diagram below, suggest that private debt now amounts to approximately 170% of GDP in advanced economies, having increased from about 50% of GDP in 1950.

Reinhart and Rogoff's data on the growth of debt. Apologies for the quality.
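To see the money-creation mechanism in miniature, here is a toy simulation (a sketch of my own, using the textbook money-multiplier simplification, not anything from Turner’s book):

```python
# Fractional reserve money creation: a $100 deposit with a 10% reserve
# requirement supports a chain of loans and re-deposits whose total far
# exceeds the original $100.

def total_deposits(initial_deposit: float, reserve_ratio: float, rounds: int = 100) -> float:
    total, deposit = 0.0, initial_deposit
    for _ in range(rounds):
        total += deposit                 # each re-deposit adds to the money supply
        deposit *= (1 - reserve_ratio)   # the rest is lent out and re-deposited
    return total

print(round(total_deposits(100, 0.10), 2))  # -> ~999.97, approaching 100 / 0.10
```

The bank never prints a note; the new money exists as a web of debt contracts layered on top of the original deposit.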

In an ideal world, this bank-created money would go toward socially valuable investment: entrepreneurs would be given money to start businesses that contribute to the economy. In reality, relatively little of the money goes toward that kind of investment. Most of it now goes toward property. In a 2014 paper, Jorda, Schularick and Taylor presented evidence suggesting that real estate lending now accounts for 60% of bank lending in advanced economies, having increased from just over 30% in 1950. This is significant because credit of this sort often fuels price increases. In the worst cases, it fuels asset price bubbles, which is exactly what happened to property in countries like Ireland and Spain in the lead-up to 2008. But even in these countries property prices in large urban areas rebounded pretty quickly.

Data from Jorda et al on the growth in real estate lending.


Why? Turner presents two arguments.


2. The Technology Argument
The first argument is the one I’m really interested in. It starts with an apparent paradox. If the foregoing is correct, then property (in particular land) is an increasingly important source of wealth in advanced economies. But this is out of step with another observation about modern economies: their increasing weightlessness. ‘Weightlessness’ is a term economists use to describe the fact that physical goods are playing less of a role in the modern world. Everything is being digitised through advances in information and communications technology (ICT).

But the increasing weightlessness of the economy is, in part, responsible for the increasing importance of property. This is Turner’s key insight. He notes, as others have done, that ICT seems to have two key properties: (i) exponential improvement in the hardware along multiple dimensions (processing power, memory, bandwidth etc), which reduces costs for ICT over time; and (ii) near zero marginal cost for reproducing digital goods. These properties have interesting consequences. They mean that ICT companies like Facebook and Google can create huge amounts of wealth with relatively little capital investment, i.e. they don’t have to spend much money on physical goods and human labour. Commenting on Facebook’s equity valuation of over $150 billion, Turner argues:

Compared with the investment that went into building automobile, airline, or traditional retail companies this [Facebook’s capital expenditure] is trivial. And more generally, the two distinctive features mean that wherever the “machines” that drive businesses include a large ICT software or hardware element, they keep falling in price relative to current goods and services. IMF figures show that the price of capital equipment relative to prices of current goods and services fell by 33% between 1990 and 2014.
(Turner 2016, 69)

In short, there’s a lot of wealth being created by capital that is itself decreasing in price. This has one necessary consequence: a greater share of investment will be accounted for by assets that are not decreasing in price. Land and property are the obvious examples. This is a necessary consequence because if the price of ICT is going down, the proportional value of whatever is not going down in price must increase. Say $100 is invested in two different forms of capital in one year: (a) $50 in ICT and (b) $50 in land. Suppose the price of the ICT falls by half over the next two years. Even if the value of the land remains unchanged, the land will now be worth twice as much as the ICT, and its share of the total wealth will have risen from a half to two-thirds. As Turner puts it:

A world in which the volume of information and communication capacity embedded in capital goods relentlessly increases is a world in which real estate and infrastructure constructions are bound to account for an increasing share of the value of all investment.
(Turner 2016, 69)
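To make the arithmetic concrete, here is the toy $100 portfolio worked through (my illustration, not Turner’s figures):

```latex
\underbrace{\frac{50}{50+50}}_{\text{land share, year 0}} = 50\%
\qquad
\underbrace{\frac{50}{25+50}}_{\text{land share, year 2}} = \frac{2}{3} \approx 67\%
```

The land has not gained a cent in absolute terms; it is the halving of the ICT that pushes land from half the portfolio to two-thirds of it, and from parity with the ICT to twice its value.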

This argument, of course, assumes that land and property prices will not themselves be subject to downward pressures. You might argue that this assumption is flawed since property prices clearly do sometimes go down. But there are at least two good reasons for thinking that they are unlikely to do so, particularly in large urban areas. The first is that land (in particular) and property (in general) just aren’t subject to the same downward trends as ICT. Land is a relatively fixed resource (barring some possibility of reclaiming land from the sea). As more and more people clamour to live on smaller and smaller segments of land — which is exactly what is happening in large urban areas — we can expect prices to go up, not down. The second reason has to do with consumer and investor preferences, which brings us to the second argument.


3. The Consumer/Investor Preference Argument
Turner’s second argument maintains that in richer societies, land and property are likely to increase in price due to consumer and investor preferences. This is a more traditional economic argument because it focuses on the choices that ‘rational’ consumers and investors are likely to make. It is interesting because it highlights a positive feedback loop between technological growth, increasing wealth and increasing property prices.

The rough outline of the argument is as follows.

Some goods have low income elasticity. This means that the demand for these goods is relatively insensitive to changes in income. Food and clothing are good examples. As you get richer, you will probably buy more food and better clothing. But eventually people tend to reach satiation in these categories of expenditure: there is only so much food and clothing they can have. This is particularly true of the uber-wealthy: there are only so many three-Michelin-star meals you can eat, even if you are a billionaire. Furthermore, there are some goods that have high income elasticity, but prices for them are subject to downward pressures such that the increased demand is offset by price decreases. ICT goods are prime examples of this.
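For readers who want the formal definition, income elasticity of demand is standardly written as (textbook micro, nothing specific to Turner):

```latex
\eta_I = \frac{\%\,\Delta Q_d}{\%\,\Delta I}
```

where Q_d is quantity demanded and I is income. Goods with elasticity near zero (food, clothing) see demand barely move as incomes rise; goods with elasticity above one (prime-location housing) absorb a growing share of spending as incomes rise.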

People aren’t going to invest their increasing wealth in low income elasticity goods. Instead, they are going to invest in high income elasticity goods. The most important of these goods in the developed world is ‘locationally specific housing’, i.e. housing in the most attractive parts of the most attractive large urban areas. Because the supply of this housing is relatively fixed, and because there is more wealth out there chasing this fixed supply of goods, the only thing that can adjust is price.

This kicks off a positive feedback loop. The wealthy invest more in locationally specific housing because it’s the major thing that is increasing in value; and banks lend more for the purchase of such housing because it looks like a relatively safe bet. The price adjusts ever upwards.


4. Conclusion
I have little to say by way of conclusion. Those of us who are familiar with the property market in large urban areas will have had a good sense of the problem of price increases already. Turner’s argument is intriguing because it links the lived reality to other macroeconomic and technological trends. The exponential growth in ICT increases the amount of wealth invested in property because the downward pressure on ICT prices necessarily increases the proportional share of wealth invested in property. And rising levels of wealth, when tied to consumer preferences and lending practices in the banking sector, fuel price increases in locationally specific housing. The connections that Turner draws out seem plausible to me. Are there good arguments against this point of view?

Saturday, February 27, 2016

Should we be grateful for death?



(Previous post on existential gratitude)

Most people think death is bad. They approach it with a degree of trepidation, possibly even denial. The prospect is particularly acute for someone who does not believe in an afterlife. Could such a person ever view death as a gift, something for which they should be grateful? That’s the intriguing question asked by Mikel Burley in his article “Atheism and the Gift of Death”. I want to take a look at his answer in this post.

I’ll start by dismissing a relatively trivial sense in which a non-believer can view death as a gift. They can view it as a gift when the life they are living is unremittingly bad. They could be suffering tremendous pain due to a terminal illness. There might be no prospect of recovery. Death could be, consequently, the only possible release from the burden of existence. A person in such a state could view death as a gift. In fact, they might even want to hasten death. This position is common in defences of euthanasia.

I dismiss this sense of death-as-a-gift not because it is uninteresting or unimportant. It is clearly both interesting and important. But it is also relatively trivial. The deeper question is whether someone whose life is otherwise good can welcome its end as a gift. In other words, can death be viewed as a gift no matter what the life it ends involves?


1. The Standard Religious View
Let’s start by looking at the religious view. The typical monotheistic position maintains that there is an afterlife (heaven/paradise) in which we (and all our friends and family members) will live after we die. In this place, we will be united with our creator (God) and will be rewarded with an existence that is greater than what we previously had on earth.

This typical religious position makes it relatively easy to conceptualise death as a gift. It is an opportunity to shake off this vale of tears. Burley uses a nice quote from Hermann Lange to underscore the point. Lange was a Catholic priest in Nazi Germany. He publicly opposed the Nazi party and was executed for his troubles. Just before his execution he sent a letter to his parents:

For, after all, death means homecoming. The gift we thereupon receive is so unimaginably great that all human joys pale beside it, and the bitterness of death as such — however sinister it may appear to our human nature — is completely conquered by it.

The logic applies both to those whose lives are going well and to those whose lives are going badly. If their life is going badly, then they will be compensated for this by the unimaginably great joys in the afterlife. And if their life is going well, then they should still regard death as a gift because the joys they will experience in heaven are so much greater than those they currently experience.

In essence, according to this religious view, death is a gift because it allows for our lives to continue in a superior form. There is no way that an atheist (who disbelieves in an afterlife) could embrace a similar view.


2. An Alternative Religious View
But this typical religious view has been challenged by some religious scholars. They consider its conception of the afterlife as a temporal continuation of this life (only in a much better form) to be ethically and metaphysically flawed. Nicholas Lash is one such scholar. He doesn’t conceive of the afterlife in temporal terms. He does not think that we continue to live on in some cloudy paradise, dancing and frolicking with our friends and families. Instead, he thinks we will join with God and partake of His eternal (non-temporal) existence.

On this view, the afterlife is less a continuation of life here on earth and more a completion of life. It is the end of our existence as we know it and the transition to something else. Lash thinks that the continuation view is ethically damaging. We have duties to perform here on earth. We should take these duties seriously. If we think that the present life is simply a waiting room to something better; and that no matter how bad things presently are they will be recompensed in the afterlife; then we run the risk of thinking that our present duties do not ultimately matter. This can erode ethical sensibilities. And the risk is not entirely negligible. There is a destructive and apocalyptic tendency in some religions which is in part driven by the belief that the afterlife will compensate for everything.

This alternative religious view is interesting. It still conceives of death as a gift, but it does so in a different way. It conceives of death as a gift because it is the moment at which life is completed, not continued in a better form. It seems more plausible for there to be an atheistic equivalent of this view. There is nothing obviously metaphysically out of line with an atheistic conception of death as a fitting capstone or end to life taken as a whole. As Burley puts it:

Despite the atheist’s being unable to join the Christian in regarding death as a completion of that which stands in eternal relation to God, he may nevertheless concur that death constitutes the final moment of a finite whole, and thereby gives a determinate structure or shape to that life. To see death as a gift may be to regard it as that which completes and to some extent defines who one is…When one’s death actually occurs, life becomes complete; in so far as one’s life as a whole is a gift, so is one’s death, for it is one of the conditions of having a recognizably human life at all.
(Burley 2012, 538).

This is a somewhat complex view. It presumes that the atheist is grateful for their life. Whether such an emotional reaction is appropriate for the atheist is a contentious matter, but one that I explored in a recent blogpost. For the time being we will assume that the atheist can indeed be grateful for living. This is usually premised on the belief that life itself is good (i.e. has positive value) and so we should be grateful for the opportunity to live it. What Burley is proposing is that we add to this the view that death is necessary to give the atheist’s life-as-a-whole its value. How can we make sense of this?


3. How does death complete the atheist’s life?
We can start by considering an obvious objection. There is a life extension movement out there. Members of this movement think that medical breakthroughs can be leveraged to enable us to extend our lives indefinitely. In my experience, most members of this movement tend to be non-religious. They do not believe in an afterlife. They accept the claim that life is good and hence something we can be grateful for. They dispute the claim that death gives life value. They favour the polar opposite. They think that death robs life of its value. We should do everything we can to avoid it.

If Burley’s suggestion is to gain any traction, it must be able to show why this view is flawed. Burley doesn’t attempt a full-blown critique. He isn’t trying to show that the atheist should view death as a gift always and everywhere. He argues for a more abstract view: particular instances of dying could be bad, but mortality itself (i.e. the fact that life must come to an end) could be essential for value. Burley chooses a popular line of support for this: the arguments of Bernard Williams in his famous article ‘The Makropulos Case: Reflections on the Tedium of Immortality’. I’ve covered these arguments before in pretty exhaustive detail. In essence, they maintain that an immortal life (one in which death is not possible) would be bad in various ways. The two main ways are that it would erode individual identity (you cannot live forever and maintain a single consistent identity) and that it would lead to boredom. Other philosophers, like Martha Nussbaum and Aaron Smuts, offer similar arguments. Nussbaum claims that an immortal life would rob our decisions of their normative significance; and Smuts argues that an immortal life would take away the good of achievement. These arguments can be challenged, of course, but if they work they lend support to the kind of view that Burley is trying to push. They suggest that mortality is essential to a life with the sorts of values we deem important to life as it is currently lived. (Samuel Scheffler has also supported this kind of argument and I covered his contribution in a previous blogpost).

Burley covers two objections to this line of thought. The first holds that the atheist still cannot view death as a gift because in order for one to conceive of death as a gift there must be a recipient of the gift. But, of course, if you cease to exist after your death you cannot receive the gift. One can quibble with the details of this. You will still be alive during the process of dying, so perhaps the gift accrues at that point. This seems somewhat unpalatable. Alternatively, you could embrace the idea of posthumous gifts. Burley notes that honours are sometimes conferred on deceased persons. Even if those people continue to exist in another form, they are definitely not around to receive those honours. This suggests that recipients may not be necessary for gifts. Gifts may be possible sub specie aeternitatis.

The second, and slightly more interesting, objection focuses on the strength of Burley’s argument. Burley claims that, if he is right, it is possible for the atheist to view death as a gift. But so what? Mere possibility is not enough in this debate. It may be possible for the atheist to do this but should they? Will they?

Burley has some interesting things to say about this. First he suggests that philosophical argument has a limited role to play in changing how people will view their lives and deaths:

To come to see one’s life as a gift, and perhaps one’s death as well, and to express gratitude for these things, is not a matter of assenting to the truth or plausibility of certain propositions….It is more like coming to see life, or the world, under a different aspect — coming to feel the compulsion of that way of looking at things. And that compulsion is unlikely to be generated by means of arguments, at least in any formal sense of ‘argument’.
(Burley 2012, 542)

So what will generate the compulsion?

More likely, I think, is that someone will come across the words of individuals who, either in life or in literature, express the attitude concerned, and will come, gradually or perhaps in some cases suddenly, to feel an affinity with those forms of words; will recognise that speaking of life and death, and many other things besides, as gifts expresses something with which she can identify.
(Burley 2012, 542)

This may be the function of certain literary passages from famous atheists. I think, in particular, of Richard Dawkins’s famous ‘We are going to die’ passage, which I analysed on a previous occasion.
 
To sum up, Burley is arguing that it is possible for an atheist to view death as a gift, not just because it sometimes offers a release from a horrendous existence, but because it gives shape and purpose to a life that is valuable. I find all of this very interesting.

Thursday, February 18, 2016

What's happening inside the black box? Three forms of algorithmic opacity



The debate about algorithmic governance (or as I prefer ‘algocracy’) has been gathering pace over the past couple of years. As computer-coded algorithms become ever more woven into the fabric of economic and political life, and as the network of data-collecting devices that feed these algorithms grows, we can expect that pace to quicken.

One thing is noticeable in this growing furore: the shift from concerns about privacy to concerns about opacity. I raised this in my recent paper “The Threat of Algocracy”. There, I noted how the debate about algocracy flowed naturally from the debate about surveillance. The Snowden leak brought one thing firmly into the public consciousness: the fact that modern network technologies were being used to spy on us. Of course, we had known this before: anyone who gave a moment’s thought to the nature of information technology would have realised that their digital activities were leaving trails of data that could be gobbled-up by anyone with the right technology. Sometimes they willingly handed over this information for better services. The Snowden leak simply confirmed their worst fears: the powers that be really were collecting everything that they could. It was like the child pointing out that the emperor was wearing no clothes. The breathtaking scope of digital surveillance, suspected for so long, was finally common knowledge.

It was unsurprising that privacy was the first port of call in response to this revelation. If the government — with the complicity of private industry — was collecting all this data about us, we were right to ask: but what about our privacy? Privacy is, after all, an important value in modern societies. But there was another question to be asked: how was all this data being used? That’s where the opacity concern came in and where my argument about the threat of algocracy began. No human could process all the data being collected. It had to be ordered and classified with algorithmic assistance. How did these algorithms work? How could we figure out what was going on inside the black box? And if we can’t figure these things out, should we be worried?

Of course, this is something of a ‘just so’ story. The Snowden leak is not some dramatic hinge-point in history — or, at least, if it is, it is too early to tell — the growth of algorithmic governance, and concerns about opacity, predate it and continue in its aftermath. All I’m suggesting here is that just as the privacy concern has been thrust into public consciousness, so too should the opacity concern be. But in order for us to take the opacity concern seriously, we need to understand exactly how algocratic systems give rise to opacity and exactly why this is problematic. That’s where Jenna Burrell’s recent paper is of great assistance. It argues that there are three distinct types of algocratic opacity and that these have often been conflated in the debate thus far. They are:

Intentional Opacity: The inner workings of the algocratic system are deliberately concealed from those affected by its operation.
Illiterate Opacity: The inner workings of the algocratic system are opaque because only those with expert technical knowledge can understand how it works.
Intrinsic Opacity: The inner workings of the algocratic system are opaque due to a fundamental mismatch between how humans and algorithms understand the world.

I’m guilty of conflating these three types of opacity myself. So in what follows I want to explain them in more detail. I start with a general sketch of the problem of opacity. I then look at Burrell’s three suggested types.


1. Algorithms and the Problem of Opacity
Algorithms are step-by-step protocols for taking an input and producing an output. Humans use algorithms all the time. When you were younger you probably learned an algorithm for dividing one long number by another (‘the long division algorithm’). This gave you a never-fail method for taking two numbers (say 125 and 1375), following a series of steps, and producing an answer for how many times the former divides into the latter (11). Computers use algorithms too. Indeed, algorithms are the basic operating language of computers: they tell computers how to do things with the inputs they are fed. Traditionally, algorithms were developed from the ‘top down’: a human programmer would identify a ruleset for taking an input and producing an output and would then code that into the computer using a computer language. Nowadays, more and more algorithms are developed from the ‘bottom up’: they are jump-started with a few inductive principles and rules and then trained to develop their own rulesets for taking an input and producing an output by exploring large datasets of sample inputs. These are machine learning algorithms and they are the main focus of Burrell’s article.
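As a toy illustration of the ‘step-by-step protocol’ idea, here is the division example written out as a hand-coded, top-down algorithm (a minimal sketch of my own, not something from Burrell’s paper):

```python
# A hand-coded, 'top-down' algorithm: the programmer writes the ruleset.
# (Contrast with machine learning, where the ruleset is inferred from
# large datasets of sample inputs.)

def whole_times(divisor: int, dividend: int) -> int:
    """Count how many whole times `divisor` goes into `dividend`,
    by repeated subtraction."""
    count = 0
    remainder = dividend
    while remainder >= divisor:
        remainder -= divisor
        count += 1
    return count

print(whole_times(125, 1375))  # -> 11
```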

Computer-coded algorithms can do lots of things. One of the most important is classifying data. A simple example is the spam filtering algorithm that blocks certain emails from your inbox. This algorithm explores different features in incoming emails (header information and key words) and then classifies the email as either ‘spam’ or ‘not spam’. Classification algorithms of this sort can be used to ‘blacklist’ people, i.e. prevent them from accessing key services or legal rights due to the risk they allegedly pose. Credit scoring algorithms are an obvious example: they use your financial information to generate a credit score which is in turn used to determine whether or not you can access credit. No-fly lists are similar: they blacklist people from commercial flights based on the belief that they pose a terrorist risk.
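Here is a deliberately crude sketch of that kind of feature-based classification (my illustration only; real spam filters typically learn their keyword weights from data rather than having them written in by hand):

```python
# A toy spam filter: score an email on a few hand-picked features and
# classify it as 'spam' if the score crosses a threshold.

SPAM_KEYWORDS = {"winner": 2.0, "free": 1.0, "prize": 2.0}  # illustrative weights
THRESHOLD = 3.0  # illustrative cut-off

def classify(subject: str, body: str) -> str:
    text = (subject + " " + body).lower()
    score = sum(weight for kw, weight in SPAM_KEYWORDS.items() if kw in text)
    return "spam" if score >= THRESHOLD else "not spam"

print(classify("You are a WINNER", "Claim your free prize now"))      # -> spam
print(classify("Meeting agenda", "Minutes from last week attached"))  # -> not spam
```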

The following schematic diagram depicts how these algocratic systems work:

Schematic diagram of how algocratic systems work.

The opacity concern arises in the middle step of this diagram. For the most part, we know how the data that gets fed into the algorithm is produced: we produce it ourselves through our activities. We also typically know the outputs of the algorithm: we are either told (or can reasonably infer) how the algorithm has classified the data. What we don’t know is what’s going on in the middle (inside the ‘black box’, to borrow Pasquale’s phrase). We don’t know which bits of data are selected by the algorithm and how it uses that data to generate the classifications.

Is this a problem? This is where we must engage questions of political and social morality. There are obviously benefits to algocratic systems. They are faster and more efficient than human beings. We are now producing incredibly large volumes of data. It is impossible for us to leverage that data to any beneficial end by ourselves. We need the algorithmic assistance. There are also claims made on behalf of the accuracy of these systems, and the fact that they may be designed in such a way as to be free from human sources of bias. At the same time, there are problems. There are fears that the systems may not be that accurate — that they may in fact unfairly target certain populations — and that their opacity prevents us from figuring this out. There are also fears that their opacity undermines the procedural legitimacy of certain decision-making systems. This is something I discussed at length in my recent paper.

These benefits and problems fall into two main categories: (i) instrumental and (ii) procedural. This is because there are two ways to evaluate any decision-making system. You can focus on its outputs (what it does) or its procedures (how it does it). I’ve tried to illustrate this in the table below.

Table: the instrumental and procedural benefits and problems of algocratic systems.

If opacity is central to both the instrumental and proceduralist concerns, then we would do well to figure out how it arises. This is where Burrell’s taxonomy comes in. Let’s now look at the three branches of the taxonomy.


2. Intentional Opacity
A lot of algorithmic opacity is deliberate. The people who operate and run algocratic systems simply do not want you to know how they work. Think about Google and its PageRank algorithm. This is Google’s golden goose. It’s what turned them into the tech giant that they are. They do not want people to know exactly how PageRank works, partly for competitive reasons and partly because of concerns about people gaming the system if they know exactly how to manipulate the rankings. Something similar is true of governments using algorithms to rank citizens for likely terrorist activities or tax fraud.

This suggests two main rationales for intentional opacity:

Trade Secret Rationale: You don’t want people to know how the algorithm works because it is a valuable commodity that gives you a competitive advantage over peers offering similar services.

Gaming the System Rationale: You don’t want people to know how the algorithm works because if they do they can ‘game it’, i.e. manipulate the data that is inputted in order to generate a more favourable outcome for themselves, thereby reducing the value of the system as a whole.

The first of these rationales is justified on a capitalistic basis: the profit motive is a boon to innovation in this area but profit would be eroded if people could simply copy the algorithm. The second is justified on the grounds that it ensures the accuracy of the system. I have looked at criticisms of this second rationale before. Both rationales are facilitated by secrecy laws - complex networks of legislative provisions that allow companies and governments to conceal their inner workings from the public at large. This is something Pasquale discusses in his book The Black Box Society.

In many ways, this form of opacity should be the easiest to address. If it is caused by deliberate human activity, it can be changed by deliberate human activity. We simply need to dismantle the network of secrecy laws that protects the owners and operators of the algocratic systems. The difficulty of this should not be underestimated — there are powerful interests at play — but it is certainly feasible. And even if total transparency is resisted on the grounds of accuracy there are compromise solutions. For instance, algorithmic auditors could be appointed to serve the public interest and examine how these systems work. This is effectively how pharmaceuticals are currently regulated.


3. Illiterate Opacity
Modern-day computerised algorithms are technically complex. Not everyone knows the basic principles on which they operate; not everyone knows how to read and write the code through which they are implemented. This means that even if we did have a system of total transparency — in which the source code of every algorithm was released to the public — we would still have a good deal of opacity. People would be confronted by programs written using strange-looking symbols and unfamiliar grammars. As Burrell points out:

Courses in software engineering emphasize the writing of clean, elegant, and intelligible code. While code is implemented in particular programming languages, such as C or Python, and the syntax of these languages must be learned, they are in certain ways quite different from human languages. For one, they adhere strictly to logical rules and require precision in spelling and grammar…Writing for the computational device demands a special exactness, formality and completeness that communication via human language does not.
(Burrell 2016, 4)

There is no compelling rationale (sinister or benevolent) behind this form of opacity. It is just a product of technical illiteracy.

How can it be addressed? The obvious response is some educational reform. Perhaps coding could be part of the basic curriculum in schools. Just as many children are obliged to learn a foreign (human) language, perhaps they should also be obliged to learn a computer language (or more than one)? There are some initial efforts in this direction. A greater emphasis on public education and public understanding may also be needed in the computer sciences. Perhaps more professors of computer science should dedicate their time to public outreach. There are already many professors of the public understanding of science. Why not professors for the public understanding of algorithms? Other options, mentioned in Burrell’s paper, could include the training of specialist journalists who translate the technical issues for the public at large.


4. Intrinsic Opacity
This is the most interesting type of opacity. It is not caused by ignorance or deception. It is caused by a fundamental mismatch between how humans and algorithms understand the world. It suggests that there is something intrinsic to the nature of algorithmic governance that makes it opaque.

There are different levels to this. In the quote given above, Burrell emphasised ‘clean, elegant, and intelligible code’. But of course lots of code is not clean, elegant or intelligible, even to those with expert knowledge. Many algorithms are produced by large teams of coders, cobbled together from pre-existing code, and grafted into ever more complex ecosystems of other algorithms. It is often these ecosystems that produce the outputs that affect people in serious ways. Reverse engineering this messy, inelegant and complex code is a difficult task. This heightens the level of opacity.

But it does not end there. Machine learning algorithms are particularly prone to intrinsic opacity. In principle, these algorithms can be coded in such a way that their logic is comprehensible. This is, however, difficult in the Big Data era. The algorithms have to contend with ‘billions or trillions of data examples and thousands or tens of thousands of properties of the data’ (Burrell 2016, 5). The ruleset they use to generate useful output alters as they train themselves on training data. The result is a system that may produce useful outputs (or seemingly useful outputs) but whose inner logic is not interpretable by humans. The humans don’t know exactly which rules the algorithm used to produce its results. In other words, the inner logic is opaque.
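To get a feel for why the learned ruleset resists human reading, consider the following sketch (my illustration, assuming the numpy and scikit-learn libraries; the data volumes here are scaled far below the ‘billions or trillions’ Burrell mentions):

```python
# Fit a small random forest on synthetic data, then look at what a human
# would actually have to interpret.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))          # 1,000 examples, 50 features
y = (X[:, 0] * X[:, 3] > 0).astype(int)  # a hidden interaction rule

clf = RandomForestClassifier(n_estimators=100).fit(X, y)

# The trained classifier's 'logic' is spread across thousands of
# machine-chosen split points, none of them labelled in human terms.
total_nodes = sum(tree.tree_.node_count for tree in clf.estimators_)
print(f"{len(clf.estimators_)} trees, {total_nodes} decision nodes in total")
```

Every one of those split points was chosen by the training procedure, not by a programmer; that is the interpretability gap in miniature.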

This type of opacity is much more difficult to contend with. It cannot be easily wiped away by changes to the law or public education. Humans cannot be made to think like a machine-learning algorithm (at least not yet). The only way to deal with the problem is to redesign the algocratic system so that it does not rely on such intrinsically opaque mechanisms. And this is difficult given that it involves bolting the barn door after the horse has already left.

Now, Burrell has a lot more to say about intrinsic opacity. She gives a more detailed technical explanation of how it arises in her paper. I hope to cover that in a future post. For now, I’ll leave it there.

Friday, February 12, 2016

Dennett's Advice to Philosophers




Daniel Dennett was a major inspiration for me. I remember reading his book Consciousness Explained as a teenager (my older brother was a philosophy student) and being fascinated by the topic itself and Dennett’s unusual take on it. Here was a set of questions and a mode of inquiry that I could get behind. I think what I found most intriguing was the sheer intellectual playfulness and creativity, combined with a seriousness of purpose, that was on display in Dennett’s work. I quickly moved on from Consciousness Explained to the rest of Dennett’s canon, which took that playfulness, creativity and seriousness to a whole new level. I was surprised to see that there were people out there dedicating their lives to thinking and writing about the nature of mind and meaning. I wondered if I could do the same.

Obviously, I wasn’t convinced that I could. I decided to study law at university and it was only after a couple of years that I managed to find philosophical niches within that discipline that might allow me to pursue the questions I was genuinely interested in. In the interim, I drifted away from Dennett’s work. Although I read (nearly) everything he wrote, and although I continued to enjoy his elaborate thought experiments, I found that his questions weren’t quite the same as my questions. My research interests veered more towards the practical side of philosophy, not the theoretical. I was interested more in how we should live and where we are going, and less in understanding who we are and how we got here (though I’m not uninterested in these things). I also grew frustrated by Dennett’s lack of formality in argument. Thought experiments are all well and good, I said to myself, but formal arguments are needed at least some of the time.

But recently I have started to re-read Dennett’s work, starting with his 2013 book Intuition Pumps and Other Tools for Thinking. This was an ideal route back into the fold. The book is a ‘best of’ collection of Dennett’s most famous thought experiments. It also closes out with some advice for those who pursue philosophy as a career. As somebody who has been effectively doing that for a few years (albeit in the guise of a legal academic) I found this advice surprisingly insightful. Dennett makes claims about the nature, limitations and risks of philosophical inquiry that chime with my own experience. I wanted to share his three main bits of advice in this post. They are:

1. Appreciate the Faustian Bargain
2. Become a sophisticated auto-anthropologist
3. Avoid the higher order truths of chmess

I’ll explain each of these in more detail in what follows.



1. Appreciate the Faustian Bargain
Philosophers pursue an important set of intellectual questions: What is truth? What is knowledge? What is justice? What is discrimination? Is it ever permissible to kill? Is death bad? Is life good? What is the nature of the mind? And so on. Ostensibly these questions are asked with the aim of getting the answer right. Philosophers really want to know whether life is good and death is bad; they want to know the conditions that must be satisfied in a just society; they want to know the truth. Even extreme relativists or social constructivists believe that the relativistic and constructivist theories they put forward best capture the truth. It is the paradox at the heart of all nihilistic modes of thinking.

But what if you offered philosophers the following Faustian bargain:

(A) You solve all the major philosophical problems of your choice so conclusively that there is nothing left to say (thanks to you, part of the field closes down forever, and you get a footnote in history)
(B) You write a book of such tantalizing perplexity and controversy that it stays on the required reading list for centuries to come.
(Dennett 2013, 411)

In essence: Do you want to be right? Or do you want fame and renown? Dennett notes that philosophers to whom he presents the bargain often admit they would go for (B), which is surprising given the ostensible aim of their philosophical inquiries.

Now, to be fair, the Faustian bargain is probably a false one. Solving a philosophical problem and only becoming a footnote in history is unlikely. If you do get things right, you can expect to acquire some fame or infamy. Future generations are likely to be taught something about your work. But the Faustian bargain isn’t intended to be realistic. It’s a contortion of reality that forces you to confront your true priorities. Do you really care about getting the right answer? Or do you really only care about yourself? I suspect many academics struggle with this dilemma. So much of modern academia is about self-promotion and self-aggrandisement. You promote your work; you win grants; you endlessly demonstrate your value to the institution that pays your wage. Oftentimes the objects of your intellectual inquiries get lost in the mix.

Scientists might think they are above all this. They get caught up in the game as much as anyone, but they might argue that no matter what they are ultimately only interested in the truth of their theories. But this isn’t entirely clear. Dennett suggests that you can offer them a similar bargain. You can ask whether they would like to have priority in making a significant discovery — that someone, somewhere was eventually going to make (e.g. working out the structure of DNA) — or whether they would like to propose a theory so novel and intriguing (but not necessarily right) that their name would enter the scientific lexicon (e.g. Freudian psychoanalysis or Chomskian linguistics). Many might be hard-pressed to choose.

Dennett isn’t very prescriptive with respect to the Faustian bargain. He doesn’t say which side we should favour (though you might be able to infer his preferences). But I don’t think he needs to be prescriptive. I think his point is that the bargain is something worth keeping in mind when trying to sort out your intellectual priorities.


2. Become a sophisticated auto-anthropologist
Social anthropologists are students of human psychology and culture. Their typical research method is to embed themselves in a community and then carefully observe and record the behaviours of the people in that community. Doing so allows them to work out how the people in that community perceive and engage with the world around them. How do they think the world works? What do they value? These are the questions the anthropologist tries to answer.

Much of philosophy is an exercise in auto-anthropology. The philosopher tries to map out the contours of their own understanding of the world. They treat themselves as the subject, and their perceptions and values as the data that can be worked up into a philosophical theory. They often do this in concert with others. The result is a form of mutual auto-anthropology. The methodology is roughly the following:

[Y]ou gather your shared intuitions, test and provoke them by engaging in mutual intuition-pumping, and then try to massage the resulting data set into a consistent “theory”, based on “received” principles that count, ideally, as axioms.
(Dennett 2013, 415)

Think about analytic epistemology. This is an attempt to work out what it means to know some fact or proposition. It usually starts with some paradigmatic example of knowing and tries to infer from this an axiom of knowledge, e.g. the classic knowledge-as-justified-true-belief axiom.

Epistemologists then test this axiom using a series of elaborate thought experiments. From this they discern that the proposed axiom is incorrect or incomplete. They modify it accordingly and test it again. None of the testing is empirical. It always involves exploring one’s own understanding of an imagined reality.

According to Dennett, there are better and worse ways to go about these auto-anthropological studies. A good way — and one of his favourite examples — is Patrick Hayes’s attempt to work out a naive (or folk) physics of liquids. Hayes was trying to build a robot that could understand the world in the same way humans do. To do this, he thought he could axiomatise the typical human understanding of the physics of liquids. This meant ruling out things that seem intuitively impossible, like siphons and pipettes, but allowing other things that seem intuitively acceptable, like towels mopping up liquids. Hayes never collected data from others on this. He treated himself and his own commonsense understanding of physics as the dataset. But:

[H]e was under no illusions; he knew the theory he was trying to axiomatize was false, however useful in daily life.
(Dennett 2013, 414)

He was thus a sophisticated auto-anthropologist — he was open to the fact that his proposed ‘physics’ could be vulnerable to counterintuitive examples. He knew that his intuitive understanding did not necessarily represent reality.

Contrast that with the work of many analytic philosophers. They engage in similar exercises in intuitive axiomatization but they:

…seem to be convinced that their program actually gets at something true, not just something believed true by a particular subclass of human beings.

This could be problematic. Since philosophers tend to represent a narrow range of interests and perspectives, it is likely that their auto-anthropological exercises are distorted by their own theoretical predilections. They should, consequently, be more open to the possibility that they are not getting at the truth. Dennett sees much of philosophical inquiry as an attempt to negotiate between the manifest understanding of the world (the world as it appears to us) and the scientific image. Philosophers should understand the role that their auto-anthropological inquiries have to play in this negotiation:

…philosophers should seriously consider undertaking a survey of the terrain of the commonsense or manifest image of the world before launching into their theories of knowledge, justice, beauty [etc]…Such a systematic inquiry would yield something like a catalogue of the unreformed conceptual terrain that sets the problems for the theorist, the metaphysics of the manifest image, if you like. This is where we philosophers have to start in our attempts to negotiate back and forth between the latest innovations in the scientific image…
(Dennett 2013, 416)


3. Avoid the higher-order truths of chmess
This is probably my favourite of Dennett’s insights into the nature and limitations of philosophical inquiry. I think it really does get at something important (and potentially disturbing). And I think anyone who has waded around in the waters of academic philosophical argument for an extended period of time will agree. That said, it takes a little bit of time to explain so bear with me.

Philosophy is, to a large extent, an a priori discipline. It is about working out the truths that arise from certain conceptual frameworks. Sometimes those frameworks are grounded in an empirical, and scientifically tested, reality (e.g. applied ethics often appeals to findings from the behavioural sciences); sometimes the conceptual clarification gives way to a full-blown science (e.g. several of the sciences, such as physics and psychology, began life as branches of philosophy); sometimes it remains a purely a priori discipline (e.g. analytic epistemology or metaphysics). The a priori mode of inquiry can be useful, but it is also risky.

Dennett illustrates the risks by analogy with the game of chess. Chess is an a priori game. True, there are reams of empirical data about particular games and particular players, but the possible moves and results within the game all follow logically from its constitutive rules. As a result, there are many a priori truths of chess. Dennett gives some examples:

There are exactly twenty legal opening moves (sixteen pawn moves and four knight moves); a king and a lone bishop cannot achieve checkmate against a lone king, and neither can a king and a lone knight, and so forth.
(Dennett 2013, 419)
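The first of these truths can even be checked mechanically. A minimal sketch, assuming the third-party python-chess library (my illustration, not Dennett’s):

```python
# Count the legal opening moves from the standard starting position.

import chess

board = chess.Board()  # standard chess starting position
print(len(list(board.legal_moves)))  # -> 20 (sixteen pawn moves, four knight moves)
```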

Figuring out these truths is not a trivial matter. It often takes great ingenuity and persistence to work out exactly what is and is not possible in a game of chess. Part of the reason for this is that the number of possible games is astronomical. So even though the game has been around for centuries, and has been closely studied throughout its existence, surprising a priori truths are sometimes proved. Dennett gives the example of a computer program that discovered a way to force a win after 200 moves without a capture. This discovery changed one of the rules of competitive chess, which had previously deemed any game involving 50 moves without a capture a draw.

Entire lives can be dedicated to exploring the a priori truths of chess. They can also be dedicated to exploring the a priori truths of chmess. ‘What’s that?’ you ask. It is a game that Dennett invented off the top of his head. It is the exact same as chess with one small difference: instead of being able to move one square in any direction, the king can move two squares. With this one small change, an entirely new domain of a priori inquiry has been opened up. The problem is that whereas the game of chess is a ‘deep and important human artifact, about which much of value has been written’ (Dennett 2013, 421), the game of chmess is not. It is a random, new invention, created through a slight tweaking of the a priori conceptual apparatus.

How does this analogy apply to the work of the philosopher? Well, if it is true that much of philosophy is about the a priori working out of the implications of various conceptual frameworks, then it is possible that many philosophers are dedicating themselves to working out the higher-order truths of something like chmess (a novel but not humanly important game) rather than the higher order truths of chess (a long-standing humanly important game). They may be doing brilliant, sophisticated work in these ‘games’. And there may be a whole community of scholars interested in playing along, but if it is chmess-like rather than chess-like, it may be ultimately worthless. For Dennett, it brings to mind Hebb’s dictum (from the work of the psychologist Donald Hebb):

Hebb’s Dictum: If it isn’t worth doing, it isn’t worth doing well.

Speaking from my own experience, there are definitely times when I feel like I am playing chmess rather than chess. For instance, I have written a couple of papers about originalist theories of interpretation in law. These are theories of interpretation which hold that the meaning of a legal text is fixed from the time of its ratification. These theories have been highly influential in US legal circles, often being used by conservative lawyers and scholars who wish to constrain the actions of the US Supreme Court. There is, consequently, a practical orientation to the debate, but the practical utility of originalist theories is highly contested, and the theories themselves have become increasingly philosophically sophisticated over the years. The papers I have written about the topic followed a formula: they first set out the basic commitments of originalism and showed how those commitments implied certain (arguably untenable or contradictory) normative beliefs. A lot of this felt like it involved playing around with an arbitrarily defined conceptual apparatus. The work was laborious and quite intricate and sophisticated, but its actual value was unclear, at least to me.

So how can you avoid pursuing the higher-order truths of chmess? Dennett proposes the following test, which I will call the ‘outsider’ test for philosophical value (a different outsider test is employed by John Loftus in debates about religion):

The Outsider Test for Philosophical Value: “One good test to make sure a philosophical project is not just exploring the higher-order truths of chmess is to see if people aside from philosophers can actually play the game. Can anybody outside of academic philosophy be made to care…? Another such test is to try to teach the stuff to uninitiated undergraduates. If they don’t ‘get it’ you really should consider the hypothesis that you’re following a self-supporting community of experts into an artifactual trap”.
(Dennett 2013, 421)

That sounds like a good rule of thumb to me, and it’s one of the aims of this blog.

Anyway, those are the three bits of advice and that’s it for this post.

Wednesday, February 10, 2016

Should we experience existential gratitude?



The postman dropped off a card today. It was from an old friend whom I hadn’t talked to in a long time. The card was accompanied by a gift. A new book from an author I like. I felt immensely grateful for this. My friend and I have drifted apart over the years. For them to suddenly think of me in this way was unexpected.

Later in the morning I went for a walk. I live on the Irish coast. Approximately 500 yards from my house there is a promenade with views onto the Atlantic Ocean. It was sunny and calm — a rare coincidence in these parts — and as I walked along the promenade I could feel the warmth of the winter sun on the back of my neck. I felt grateful for the opportunity to experience this, to see such beauty in the world, to be alive at this time.

Both of these incidents (which may or may not have happened) raise philosophical questions. In both cases, I claimed to feel gratitude. In the first, my gratitude was directed toward a person — the friend from whom I have drifted apart — in the second, my gratitude was directed at no one in particular — I am not a theist and I do not believe that my presence in this world is the result of a divine act of creation. Here’s the question: Am I right to experience gratitude in both cases?

That’s what I want to answer in the remainder of this post. I do so by drawing upon Michael Lacewing’s recent article ‘Can Non-Theists Appropriately Feel Existential Gratitude?’. Don’t be fooled by the title: Lacewing has more to say than its religious overtones suggest. The article provides insight into the concept of gratitude, the nature of the emotions, and the appropriateness of emotional responses. I’ll try to cover all three in what follows.


1. The Concept of Gratitude: An Initial Overview
Gratitude is an emotional response to the good. But there are lots of emotional responses to the good. One can feel joy or happiness too. What is different about gratitude? One suggestion — and the one that Lacewing defends — is that gratitude is, at its core, an emotion felt in response to undeserved (or ‘out of our control’) good. That’s why I was grateful for the book from my old friend. I had no just reason to expect the book. It wasn’t as if they were reciprocating for something I had recently given them. It was completely out of the blue.

You may question this. You might say that sometimes you experience gratitude in response to goods that you deserve (e.g. those you pay for). Lacewing argues that even in these cases your gratitude is likely to be sensitive to some degree of undeservingness. Consider:

If I thank someone who has sold me something, I do not thank them (except in a perfunctory way) for giving me the item when I hand over the money. If I feel genuine gratitude, then I am thankful that they sell the item at all, which is something I have no right to, or perhaps I thank them for good service, which is something one does not ‘buy’ in the same way as the item.
(Lacewing 2015, 2)

There are different types of gratitude, varying with the nature of the good in question. Some gratitude is directed at a particular event or person. My gratitude for receiving the book from my friend is of this type. Some gratitude is broader and less directed. My gratitude for being alive and being able to experience beauty in the world is of this second type. It is an existential gratitude, i.e. a recognition of the goodness in one’s life that is undeserved or, even more generally, a simple delight in being.

There are some problems with this initial characterisation of gratitude. Some people insist that there is more to gratitude than a response to undeserved good: gratitude, they say, is a response to a gift given to you by a gift-giver. They consequently appeal to a ‘personal’ analysis of gratitude:

Personal Analysis of Gratitude: Gratitude is a response to a good that is undeserved (or beyond one’s control) and is experienced as a gift, and which is consequently directed at a person (the gift-giver).

This is to be contrasted with a ‘non-directed’ analysis of gratitude.

Non-directed Analysis of Gratitude: Gratitude is a response to a good that is undeserved (or beyond one’s control), which need not be experienced as a gift or directed at a person.

The personal analysis creates problems for the non-theist. If it is true that gratitude must be directed at a gift-giver, then the non-theist cannot experience pure existential gratitude. They can be grateful for some of the goods in life that come from other human gift-givers, but they cannot be grateful for their existence as a whole. I would be correct to feel gratitude in response to my friend sending me the book, but not for the chance to be alive and to experience beauty in the world.

How persuasive is this argument? Not very. The personal analysis of gratitude is popular among philosophers and psychologists, but it is usually intended to be descriptive in nature. It is not a normative account of when it is or is not appropriate to experience an emotion. Furthermore, it may actually fail as a descriptive account. The fact is that there are non-theists who experience generalised existential gratitude (Richard Dawkins has written eloquently about this in the past), and some of the particularised gratitude that we experience in life does not appear to be directed at a gift-giver. Lacewing gives the example of a parent grateful for a moment’s quiet after the children have left. There is no obvious person to whom this gratitude is directed.

The deeper question then is whether non-directed existential gratitude is normatively appropriate. To answer that we need to consider the normativity of emotional responses more generally and of gratitude in particular.


2. The Normative Assessment of Emotions
Philosophers have a well-worked-out set of normative standards to apply to actions and omissions. If I see a child drowning in a pond and could save them at no risk to myself, we would say that rescuing the child is obligatory. If I am driving my car on the road, we would say that driving on the wrong side is forbidden. If I am walking in the park and listening to music, we would say that humming to myself is permissible. Some philosophers add additional standards, but these three concepts (permissibility, forbiddenness and obligatoriness) form the backbone of our normative assessment of conduct.

How about emotions? Emotions seem different from actions and omissions. For one thing, emotions seem to be largely involuntary responses to the world we experience. They can be trained and honed over time, but they are not subject to the same immediate voluntary control as actions appear to be (ignoring all debates about determinism and free will for the time being). So to develop criteria for assessing emotions we need a better understanding of what they are.

Lacewing follows the dominant philosophical account of emotions. According to this account, emotions are appraisals of their intentional objects. Emotions have content; they are about something or other. I feel angry about the driver who cut me off in traffic; I feel sad when my favourite team loses a match; I feel fear when I see a poisonous snake. These appraisals then supply reasons for action. For example, if the snake is poisonous, I have reason to back away from it.

This suggests a way in which to normatively assess emotions. On this account, emotions can misfire. To be more precise, they can misrepresent the value of their intentional objects. Perhaps the driver who cut me off is an ambulance driver who did so because they were trying to save someone’s life. For me to feel angry about this would be churlish and self-centred. Perhaps the snake is not really poisonous. If so, there is no reason to be fearful and to back away.

Lacewing suggests that we capture this kind of normative assessment using the following three standards:

Appropriateness: The emotion is an accurate reflection of the value of its intentional object.

Inappropriateness: The emotion is not an accurate reflection of the value of its intentional object.

Mandatedness: The emotion is an accurate reflection of the value of its intentional object and that value is sufficiently high to make that emotional response mandated in the relevant context.


You can apply these standards in practical and cognitive senses. In the practical sense, the focus is on the value of the emotional response to the individual experiencing it. In the cognitive sense, the focus is objective: is the thing being represented in the emotion truly valuable enough to warrant that kind of response?

It is easy enough to endorse gratitude in the practical sense. There are a variety of studies suggesting that people who experience gratitude (and are encouraged to keep gratitude diaries) are psychologically healthier — less prone to envy and narcissism and so forth. Whether gratitude is cognitively appropriate is a separate matter.


3. So is non-directed existential gratitude ever cognitively appropriate?
The cognitive appropriateness of personal gratitude is obvious enough. All you need to show is that there is a gift that is good and that it came from a gift-giver. This makes it easy for the theist to defend existential gratitude. For them, existential gratitude is just a species of personal gratitude. Our lives are good (on average) and they are gifts from a supreme gift-giver (note: this ignores several problems concerning the quality of some lives and the problem of evil).

Defending the cognitive appropriateness of existential gratitude from a non-theistic standpoint is tougher. The theist will argue that it is just a mis-firing of personal gratitude. Lacewing argues that in order to defend the cognitive appropriateness of existential gratitude you need to consider its psychological origins (from a non-theistic perspective). He suggests that there are two main accounts on offer:

Evolutionary Account: This holds that gratitude is an adaptive response to life in large social groups. We feel grateful to others and try to ‘pay forward’ good deeds because doing so is fitness-enhancing. The non-directed form of gratitude is then simply an evolutionary by-product of this fitness-enhancing interpersonal form of gratitude.

Psychoanalytic Account: This traces the origins of gratitude to early childhood experiences, particularly the phenomenological experience of the child while breastfeeding. Lacewing suggests that in this early state the emotion is not directed toward another, because the child does not carve the world up into intentional agents in the way that adults do; instead, the emotion is a feeling of relatedness to the whole world.

Lacewing says that the evolutionary account is no good for the defender of non-directed existential gratitude. It supports the mis-firing view of the critic: non-directed gratitude is the mis-firing of an otherwise appropriate interpersonal emotion. He thinks the psychoanalytic account is better.

At this point, I have to lay my cards on the table. Early on in my education I read several philosophical and scientific critiques of psychoanalytic theory (including the classic critiques from Popper and Grünbaum). As a result, I find it odd that anyone would take it seriously. In particular, I find it odd that someone would attribute much significance to the (hypothesised) phenomenological experiences of an infant child. But Lacewing does, and he has defended psychoanalytic theory from the classic critiques. I have not read this aspect of his work, but I accept that if I did I might rethink some of my views.

Fortunately, I don’t think it is necessary to get into this debate to appreciate Lacewing’s argument about existential gratitude. The upshot of his view is that non-directed gratitude is a cognitive sharpening of our childhood response to the world. This sharpening of emotional response will be appropriate whenever it stands up to critical scrutiny. In other words, whenever you can prove two things:

Goodness: The intentional object of the emotion is, in fact, good.
Non-responsibility: The goodness is not deserved or not a product of events and circumstances that the individual can control.

This, of course, just brings us back to the definition.

Applying this standard, is non-directed existential gratitude cognitively appropriate? That depends on whether the contingency of our existence, or the overall quality of our lives, is good, and on whether much of that good is beyond our control. It seems likely that both conditions are met in many instances, unless one embraces something like Benatar’s anti-natalism (which holds that life, in general, is not good for us) or some mystical belief that everything is subject to one’s conscious control. So for many non-theists, existential gratitude will be an appropriate emotional response.
 
It is unlikely that the value in question rises to a sufficient threshold to make existential gratitude mandated, but there is enough to make it appropriate in many instances. That, at any rate, is Lacewing’s suggestion. What do you think? Is this account of gratitude persuasive? Is existential gratitude appropriate for the non-theist?

Thursday, February 4, 2016

Symbols and their Consequences in the Sex Robot Debate



I am currently editing a book with Neil McArthur on the social, legal and ethical implications of sex robots. As part of that effort, I’m trying to develop a clearer understanding of the typical objections to the creation of sex robots. I have something of a history on this topic. I’ve developed objections to (certain types of) sex robots in my own previous work, and critiqued the objections of others, such as the Campaign Against Sex Robots, on this blog. But I have yet to step back and consider the structural properties these objections might share.

So that’s what I’m going to try to do in this post. I was inspired to do this by my recent re-reading of Sinziana Gutiu’s paper ‘Sex Robots and the Roboticization of Consent’. In the paper, Gutiu objects to the creation of sex robots on several grounds. As I read through her objections I began to spot some obvious structural similarities between what she had to say and what I and others have said. I think identifying these structural similarities allows one to see more clearly the strengths and weaknesses of these objections.

So here’s my plan of action. I’ll start by outlining what I take to be the core logical structure of these objections to sex robots. Then I’ll consider how Gutiu fleshes out this logical structure in her paper, and close with some general reflections on the value of this style of objection. Bear in mind, my goal here is not to critique or defend any particular set of views but rather to achieve greater analytical clarity. The hope is that this clarity could, in turn, be used to craft better critiques and defences. So, if you are looking for a very clear take on the merits or demerits of sex robots, you won’t find that in this post.


1. The Basic Logical Structure: Symbols and their Consequences
Assuming one does not adopt a natural law-type attitude toward sex — according to which any non-procreative sexual act would be ethically questionable — the main concern with the creation of sex robots seems to be with the symbolism and consequences of their creation and use. This dual concern is shared by the objections in Gutiu’s paper, my previous paper on robotic rape and robotic child sexual abuse, and the arguments put forward by the Campaign Against Sex Robots. As a result, I believe the following schematic argument captures these concerns:

  • (1) Sex robots do/will symbolically represent ethically problematic sexual norms. (Symbolic Claim)
  • (2) If sex robots have ethically problematic symbolic properties, then their development and/or use will have negative consequences. (Consequential Claim)
  • (3) Therefore, the development and/or use of sex robots will have negative consequences and we should probably do something about this. (Warning Call Conclusion)

Some comments about this abstract formulation are in order.

First, the ethically problematic symbolism of sex robots could take many forms. It could be that the physical representation of the robots embodies negative sexual stereotypes. People are particularly concerned about this since the sex robots currently in development seem to be targeted primarily at heterosexual men and tend to represent a certain style of woman (some liken it to a ‘pornstar’-esque style). The behaviour or movement of these sex robots may be problematic as well, e.g. they may behave in an overly deferential, coquettish manner. It could also be that the act of having sex with a robot is symbolically problematic: perhaps the robots are designed to resist the user’s advances, thereby concocting a rape fantasy; or perhaps they are designed to be completely passive, ever-willing participants in sexual acts (something Gutiu worries about in her analysis). Perhaps even more symbolically worrying is the possibility of sex robots designed to look and act like children, something I discuss in my article on robotic rape and robotic child sexual abuse, and something that has been mooted by others. Whatever the problematic symbolism may be, it is deemed important in this debate because most people presume that sex robots themselves will not be persons and so will not be harmed by interactions with human users. If the robots cannot be moral victims, their symbolism is all that is left.

Second, the negative consequences of the symbolism could also take many forms, some more immediate and direct than others. It could be that the user is directly and immediately harmed by the interaction with the robot. This is something I raised in my article on the topic, suggesting that anyone who had sex with a child sex robot or a rape fantasy robot may demonstrate a disturbing insensitivity to the social meaning of their act. It could be that the development and use of the robots sends a negative signal to the rest of society, perhaps reinforcing a culture of sexism, misogyny and/or sexual objectification. The interaction with the robot could also have downstream effects on the user, changing his/her interactions with other human beings and thereby having a harmful impact on them as well. All of these possibilities have been mooted in the literature to date. The negative consequences need not be a dead cert; they could have varying degrees of probability attached to them. This is normal enough in a debate about a nascent, emerging technology (heck, it’s normal enough in any debate about the consequences of technological usage). But the uncertainties may make it difficult to draw firm normative conclusions.

Third, the conclusion is something of a non-sequitur in its current form. The first part does follow logically from the premises; the second part does not. Nevertheless, I have tacked on this ‘warning call’ because I think it is common in the debate: most purveyors of these arguments think we ought to do something to minimise the potential negative consequences. What this ‘something’ should be is another matter. Some people favour organised campaigns against the development of such devices; others favour regulation of varying degrees of strength.

Anyway, that’s what I think the common abstract structure of these objections looks like. Let’s now consider a concrete version of this objection.



2. Gutiu’s Objections to Sex Robots
The version I am going to consider comes, of course, from Gutiu’s paper. I’ll start with her discussion of the symbolism of sex robots. The guiding assumption in her article is that the majority of sex robots will be targeted at heterosexual males and will depict a stereotypical ‘ideal’ woman. She defends this assumption by reference to literature (e.g. the long-standing trope of male protagonists constructing ideal female partners, present for instance in the Adam and Eve myth) and current examples of robotic technology. Some of these examples do not involve actual sexbots (i.e. robots designed for sexual use) but do involve gynoid robots (robots designed to look and act like women) that are highly sexualised:

Aiko, Actroid DER and F, as well as Repliee Q2 are representations of young, thin, attractive oriental women, with high-pitched, feminine voices and movements. Actroid DER has been demoed wearing either a tight hello kitty shirt with a short jean skirt, and Repliee Q2 has been displayed wearing blue and white short leather dress and high-heeled boots.
(Gutiu 2012, 5)
 
There are many other examples of this too. Thus, the physical structure of female robots alone serves to replicate arguably problematic norms of body shape, dress, and movement. If you add to this the idea that the robots are designed for sexual use, you compound the problematic symbolism. As Gutiu puts it:

To the user, the sex robot looks and feels like a real woman who is programmed into submission and which functions as a tool for sexual purposes. The sex robot is an ever-consenting sexual partner and the user has full control of the robot and the sexual interaction. By circumventing any need for consent, sex robots eliminate the need for communication, mutual respect and compromise in the sexual relationship. The use of sex robots results in the dehumanization of sex and intimacy by allowing users to physically act out rape fantasies and confirm rape myths.
(Gutiu 2012, 2)

She repeats this concern several times during the paper.

It seems, then, that Gutiu fleshes out the first premise of the argument in the following manner:

  • (1*) Sex robots will symbolically represent ethically problematic sexual norms because (a) the majority will adopt gendered norms of body shape, dress, voice and movement (e.g. they will be thin, large-breasted, provocatively clad, coquettish in behaviour and so on; this could vary from society to society); and (b) they will function as ever-consenting sexual tools, allowing users to act out rape fantasies and confirm rape myths.

Some people might find this symbolism disturbing by itself, but consequences are important in this debate. It is, after all, possible for symbolically problematic practices to have beneficial consequences. One could argue that allowing a user to act out a rape fantasy with a sex robot is better than having them actually rape a real human being; the robot could, thus, have a beneficial preventative effect. I’m not sure how likely that is, but Gutiu is clear in her paper that the creation and use of sex robots will have negative consequences.

First, there are the obvious social harms, and harms to others, arising from the symbolism. If the robots replicate gendered norms of sexualised appearance and sexual compliance, they will contribute to and reinforce a patriarchal social order that is harmful to women. In particular, Gutiu worries that the symbolism will further distort our understanding of sexual consent. Campaigners have been fighting hard to make changes to the law surrounding rape and sexual assault. The changes made to date try to combat rape myths by clarifying the nature of sexual consent and assigning appropriate weight to the testimony of victims. Sex robots would represent a step back in this fight because:

They embed the idea that women are passive, ever-consenting sex objects, and teach users that when getting consent from a woman, “only no means no”.
(Gutiu 2012, 15)

In other words, they would go against the recent demand for positive, affirmative signals of sexual consent. This could obviously have an impact on real women, who may become victims of actual sexual assault and rape if users act out these norms in the real world.

Second, in addition to the social harms and harms to others, there are the harms to the users themselves. For one thing, they could internalise the problematic sexual norms through repeated use of the robots, which could alter their moral character and the nature of their interactions with real people. Also, and somewhat in tension with this idea, the robots could reinforce antisocial tendencies among users, encouraging them to withdraw more from social interactions, and avoid the need for mutuality and compromise in their sexual lives.

This latter notion was contradicted in the film Lars and the Real Girl. There, the use of a sex doll was therapeutic and enabled an introverted man to reintegrate with society. But Gutiu dismisses this:

Although it was an effective approach to a Hollywood film, sex robots are unlikely to help antisocial users better interact with women. It is doubtful that an individual who does not feel accepted in society, and who finds an alternative way to meet their exact needs for companionship will, for some reason, want to integrate back into society, where they can risk rejection and face social discomfort.
(Gutiu 2012, 17)

This suggests to me that Gutiu fleshes out the second premise of the argument in the following manner:

  • (2*) If sex robots adopt gendered norms of body shape, dress, behaviour (etc.), and function as ever-consenting sexual tools, their creation and use will: (a) reinforce patriarchal social norms and distort our understanding of sexual consent, which will ultimately harm women; and (b) harm the users themselves by encouraging them to internalise problematic sexual norms and, in some cases, exacerbate their antisocial tendencies.

This, in turn, leads to the ‘warning call’ conclusion. Gutiu thinks that something should be done to combat the problematic symbolism and likely negative consequences. She does not favour prohibition of sex robots. Instead, she favours various regulatory interventions. These could include, in particular, the demand that creators design robots in a certain way. They could also include the creative use of legal mechanisms to allow potential victims of harm arising from the use of sex robots to sue for damages. As an example, she suggests that a person whose marriage dissolves after their partner starts using a sex robot be allowed to sue the manufacturer. This might seem unusual, but there are legal mechanisms (so-called ‘heart balm torts’) that allow people to sue others for interfering with a legally protected relationship.


3. Concluding Thoughts
Hopefully, you can now see how the abstract argument scheme can be developed into something more concrete. I think there are several ways in which to challenge and support the argument developed by Gutiu, but I won’t say too much about them in this post. That wasn’t my intention. I’ll just close with three general comments, which flag up issues I think are important or worthy of further consideration.

First, on the symbolic claim, I think it is generally true that sex robots appeal to stereotypical gendered norms of appearance and behaviour. You see this all the time in fictional depictions of sex robots (I think, in particular, of the robots in the TV series Humans and the movie Ex Machina, which were used for sexual purposes, though not limited to sexual functionality). You also see it in Roxxy, the sex robot developed by TrueCompanion, and in the prototypes being developed by RealDoll (LINKs). But I also think that the problematic symbolism could be addressed. The robots don’t have to adopt stereotypical appearances and behaviours. You could, for instance, design robots to give active, affirmative signals of consent. This may be an appropriate target for regulatory intervention or mass social pressure.

Second, despite what I just said, there is an interesting idea in Gutiu’s paper which suggests that there may be something inherent (or, at least, very strongly embedded) in the idea of a sex robot that makes it symbolically problematic. When you think about it, people are probably drawn to the creation and use of such devices because they want an ultimately compliant and ever-willing sexual outlet. There wouldn’t be much point in creating a sex robot that acted exactly like a human being — and could, therefore, avoid, resist or otherwise not reciprocate their sexual desires — since there are plenty of human beings around anyway. But the very thing that makes sex robots an attractive proposition is, in and of itself, symbolically problematic: it represents sexual interactions as devoid of mutuality. Now, of course, people already engage in many solo sexual acts that are devoid of mutuality, and most would agree that there is nothing problematic (symbolic or otherwise) about those acts. But they are symbolically different: they do not involve embodied sexual contact with something that looks and acts (sorta) like a real human being. I don’t know what to make of this right now, but I think the notion that problematic symbolism is strongly embedded in sex robots is interesting. It means it may not be easy to address the symbolism through regulatory intervention or design reform.

Third, and finally, the consequential claims that permeate this debate always strike me as being problematic. In many cases, the consequences appealed to are speculative (since the technology is not in widespread use) and indirect. As Anders Sandberg has argued elsewhere, it may indeed be true that the use of sex robots contributes to more harmful social environments and interactions with real human beings, but how tight is that causal connection likely to be? Is intervention into the development and use of sex robots likely to be the most effective way to combat these problems? Or could other policy levers be pulled to the same or better effect? These are all important questions when it comes to assessing the consequential claims and the warning calls that are issued in this debate.