Pages

Tuesday, November 29, 2016

Understanding Freedom as Independence




I want to share an interesting framework for thinking about negative freedom. Negative freedom is a central concept in liberal political theory. One of the primary duties of the state, according to liberal political theory, is to protect negative freedom.

But what does negative freedom consist in? Broadly speaking, negative freedom is the absence of external constraint on action. If I sign my name to the bottom of a document, I do not do so freely if you grab my hand and force me to sign. You are an external constraint. You undermine my negative freedom.

This example is, however, relatively trivial. There are many external constraints on action. Which ones actually undermine my negative freedom? In some sense, I am constrained by my biology. I am not free to stop breathing. I am constrained from breathing anything other than oxygen. But does that mean that the necessity for oxygen-breathing is a freedom-undermining external constraint?
Or take a more contentious example. Suppose I work in an office. My office manager suggests that if I want to get a promotion I should wash his car every weekend. Suppose I duly go and wash his car every weekend. Am I doing so freely? Or does his not-so-subtle hint constitute a freedom-undermining interference? These are the kinds of questions that fill the pages of political philosophy journals.

In their article, ‘Freedom as Independence’, Christian List and Laura Valentini do two things that help us to answer some of these questions. First, they map out the ‘logical space’ of negative freedom. And second, they use this map to identify and make the case for a new theory of negative freedom — one that has been overlooked by liberal theorists to date.

I want to describe these two features of List and Valentini’s article in this post. I do so because I think their methodology for mapping out the logical space of freedom can be useful in other contexts, and also because I think the idea of ‘freedom as independence’ is worth considering.

[Note: I covered some of this ground already in my post ‘The Logical Space of Algocracy (Redux)’. This is a slightly longer explanation of the discussion of List and Valentini’s framework that occurred in that post.]


1. The Logical Space of Freedom
Two theories of negative freedom predominate in contemporary political theory. The first is the classic liberal theory of freedom as non-interference:

Freedom as Non-Interference: An agent’s freedom to do X is the actual absence of relevant constraints on the agent’s doing X.

I am free to walk down the street provided that no person or external force is actually constraining me from walking down the street. This is a simple, clean, but ultimately problematic way to define negative freedom. There are a couple of features of the definition that are worth calling out. Notice first how it includes the phrase ‘relevant’ constraints. This is a sop to the fact that there is disagreement about what counts as a freedom-undermining constraint. To get a sense of the scope of the disagreement, I would suggest reading my earlier post on Quentin Skinner’s genealogy of freedom. There, I distinguished between external force, coercion, and self-sabotage as potentially relevant types of interference.

Notice second the use of the term ‘actual’ in the definition. This tells us that freedom as non-interference is a non-modal definition of negative freedom. It is only concerned with what happens in the actual world, not with what happens in other possible worlds. This is thought to be a problem by so-called neo-republican theorists of freedom. They think that limiting the focus to the actual world means that liberals cannot account for the absence of freedom in the case of the happy slave. The ‘happy slave’ is a thought experiment in which we are asked to imagine a slave who conforms his/her will to that of his/her master. In other words, they act in a way that always pleases their master. As a result, the slave master never interferes with or imposes constraints on their actions. This means that, according to the definition given above, the slave is free: there are no relevant constraints on their ability to act in this world.

This seems unsatisfactory to the republicans. If you look beyond this actual world to other possible worlds, it seems clear that the slave’s freedom is being undermined. If the slave happens to act in a way that does not please their master, their master stands ready to intervene and prevent them from doing so. They live under the dominion of the slave master. This suggests to the republicans that negative freedom requires more than the absence of constraint in this world. It requires the absence of relevant constraints across a number of possible worlds. They use this to form their own preferred conception of freedom, something called freedom as non-domination.

Freedom as Non-Domination: An agent’s freedom to do X is the robust absence of arbitrary relevant constraints on the agent’s doing X.

We have added two terms to the definition. The ‘robust’ descriptor is supposed to capture the modal nature of freedom as non-domination (i.e. the absence of constraint across a number of possible worlds). The ‘arbitrary’ descriptor requires more explanation. On top of thinking that there is a modal dimension to freedom, many republicans think that there is a moral dimension to it too. In other words, not all constraints are morally equal. Some are justified. If I commit a crime and am imprisoned as a result, my freedom to act is constrained, but we might view this as a morally justified constraint. And so, we might be inclined not to include that within the scope of freedom-undermining constraints. This is why we might focus on the absence of arbitrary constraints. (This isn’t quite right but I’ll say more about it in a moment).

Up to this point, we have just been describing the two major theories of negative freedom. When List and Valentini do this in their paper, they note something interesting. They note that the two theories vary along two dimensions: the modal dimension (are they robust or not?) and the moral dimension (do they limit themselves to arbitrary interferences or not?). This suggests that it is possible to arrange theories of freedom into a two-by-two matrix, illustrating the variance along both dimensions. And when you do this, you see something that you might otherwise miss: freedom as non-interference and freedom as non-domination only represent two out of four possible conceptualisations of negative freedom.




Now, as it happens, there are some liberal theories of freedom that belong in the upper right quadrant (i.e. that are moral but non-robust). List and Valentini mention the theories of freedom defended by Robert Nozick and Ronald Dworkin as specific examples. These theories say that in order to have negative freedom you must be free from arbitrary constraint in the actual world. But the lower left quadrant is almost completely neglected. The purpose of List and Valentini’s article is to describe and defend the theory of freedom that belongs in this quadrant.


2. Defending Freedom as Independence
They call this theory of freedom ‘freedom as independence’. It can be defined like this:

Freedom as Independence: An agent’s freedom to do X is the robust absence of relevant constraints on the agent’s ability to do X.

The theory is non-moral and modal. It says that you must be free from constraints across a range of possible worlds — it shares this requirement with freedom as non-domination. But it also says that any relevant constraint (even if it is morally justified) counts against your freedom.

List and Valentini present a detailed argument in defence of freedom as independence. I can’t hope to do justice to the nuances of that argument in this post. I’ll just give you a sense of how it works. It starts with two desiderata that plausible theories of freedom ought to meet: (i) they ought to ‘pick out as sources of unfreedom those modal constraints on action that stand in need of justification’ (List and Valentini 2016, 1049); and (ii) they ought to ‘display[] an adequate level of fidelity to ordinary-language use’ (List and Valentini 2016, 1051). The argument then works like this:



  • (1) There are four logically possible theories of negative freedom: (i) freedom as non-interference; (ii) moralised freedom as non-interference; (iii) freedom as independence; and (iv) freedom as non-domination. The theories vary depending on whether they have a robustness requirement (or not) and a moralised exemption clause (or not).
  • (2) A plausible theory of freedom should have a robustness requirement (therefore theories (i) and (ii) are ruled out).
  • (3) A plausible theory of freedom should not have a moralised exemption clause (therefore theory (iv) is ruled out).
  • (4) Therefore, of the four logically possible theories of negative freedom, freedom as independence is the most plausible.
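For what it’s worth, the elimination structure of the argument can be sketched in a few lines of code. This is a toy illustration of my own: the dimension and theory names follow the paper, but the representation is mine.

```python
# The four logically possible theories, keyed by the two dimensions
# along which they vary: (robustness requirement?, moralised exemption clause?)
THEORIES = {
    (False, False): "freedom as non-interference",
    (False, True):  "moralised freedom as non-interference",
    (True,  True):  "freedom as non-domination",
    (True,  False): "freedom as independence",
}

def plausible(robust, moralised):
    # Premise (2): a plausible theory has a robustness requirement.
    # Premise (3): a plausible theory lacks a moralised exemption clause.
    return robust and not moralised

survivors = [name for (r, m), name in THEORIES.items() if plausible(r, m)]
print(survivors)  # ['freedom as independence']
```

Only one cell of the two-by-two matrix passes both filters, which is just the conclusion (4) restated.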



The bulk of the argumentation comes in the defence of premises (2) and (3). List and Valentini defend premise (2) on fairly standard grounds: by appealing to the happy slave thought experiment. They think it is a major defect of all liberal theories of freedom as non-interference that they cannot account for the unfreedom of the slave. The slave’s situation, even if they are happy, stands in need of moral justification and so the first desideratum is not met. Also, in most ordinary language analyses, we would be inclined to say that a slave is unfree (indeed, the slave’s situation may be the paradigm of unfreedom) and so the second desideratum is not met.

That’s the basic argument anyway. The details are a little bit more complicated. As they point out in the paper, there are a few different approaches to freedom as non-interference that attempt to account for the slave’s predicament, e.g. by including something like a robustness requirement that focuses on the probability of interference. List and Valentini dismiss solutions of this sort on the grounds that even if the interference from a slave master was improbable it would stand in need of justification and would not conform with our ordinary language usage. Other modifications are discussed and dismissed in the paper. This leaves premise (2) in reasonably good health.

List and Valentini’s argument for premise (3) is more complex. The initial case in its favour is straightforward. Suppose we accept a moral exemption clause. In that case, we would say that someone who was justly imprisoned was not unfree. But this would fail to satisfy our two desiderata. Imprisoning someone definitely requires moral justification and ordinary language usage would similarly insist that an imprisoned person was not free.

The argument becomes complex when List and Valentini try to use it to make the case against freedom as non-domination. The problem is that although they categorise that theory as having a moral exemption clause, some of its most famous proponents insist that it does not. It all comes down to how we interpret the word ‘arbitrary’ in the definition. Philip Pettit — probably the most famous neo-republican — has argued that it does not have a moral connotation. All he means when he says that you must be robustly free from arbitrary constraint is that you are free from constraints that do not match your own avowed interests. So if I have an interest in not paying tax, but the government insists upon taking tax monies from me (or stands ready to do so in some nearby possible world) then my freedom as non-domination is compromised. This is true even if taking tax money is morally legitimate.

List and Valentini argue against Pettit on a couple of grounds. They think a non-moralised theory of arbitrary constraints creates problems when you turn to politics. Pettit has expended considerable energies in recent years trying to square his neo-republican view with democracy. He tries to argue that neo-republicanism supports democratic decision-making on the grounds that democratic decision procedures are the way to work out the citizens’ avowed interests. But List and Valentini say that this only works if the conception of ‘interests’ that is at play in this argument is moral.

The problem is that virtually every democratic decision — barring one involving complete unanimity — will involve the creation of coercive policies that go against at least one citizen’s avowed interests. That much is clear from the tax example I just gave. Democratic decision-making requires compromise: some avowed interests will have to give way to others. The only way to resolve this is to take on board a moralised theory of avowed interests, i.e. to insist that some interests are morally legitimate and others are not. But if you make that move you resign yourself to the lower right-hand quadrant of the logical space of freedom.

A neo-republican could hang tough and insist on the non-moralised account of arbitrary interferences. But List and Valentini think that this will be unpalatable because the neo-republican theory also purports to provide some account of political justice.

That’s roughly the argument anyway. As I say, there is more detail in the paper than I can hope to cover here. To summarise, by noticing how freedom as non-interference and freedom as non-domination vary along two distinct dimensions, List and Valentini help to construct a logical space of possible theories of negative freedom. Doing so enables them to spot a neglected theory of freedom, namely freedom as independence. This theory has been ignored in the literature to date but is arguably more plausible than the existing contenders.

Sunday, November 27, 2016

The Logical Space of Algocracy (Redux)




(The following is, roughly, the text of a talk I delivered to the IP/IT/Media law discussion group at Edinburgh University on the 25th of November 2016. The text is much longer than what I actually presented and I modified some of the concluding section in light of the comments and feedback I received on the day. I would like to thank all those who were present for their challenging and constructive feedback. All of this builds on a previous post I did on the ‘logical space of algocracy’)

I’m going to talk to you today about ‘algocracy’ - or ‘rule by algorithm’. ‘Algocracy’ is an unorthodox term for an increasingly familiar phenomenon: the use of big data, predictive analytics, machine learning, AI, robotics (etc.) in governance-related systems.

I’ve been thinking and writing about the rise of algocracy for the past three years. I’m currently running a project at NUI Galway about it. The project is kindly funded by the Irish Research Council and will continue until May 2017. I’ve also published a number of articles about the topic, both on my blog and in academic journals. If you are interested in what I have to say, I would like to suggest checking out my blog where I keep an index to all my writings on this topic.

Today I want to try something unusual. Unusual for me at any rate. I’m normally an arguments-guy. In my presentations I like to have an argument to defend. I like to start the presentation by identifying the key premises and conclusions of that argument; I like to clarify the terminology; and I like to spend the bulk of my time defending the argument from a series of attacks.

I’m not going to do that today. I’m going to try something different. I’m going to try to map out a conceptual framework for thinking about the phenomenon of algocracy. I’ll do this in five stages. First, I’ll talk generally about why I think conceptual frameworks of this sort are important and what we should expect from a good conceptual framework. Second, I’ll outline some of the conceptual frameworks that have been offered to date to help us understand algocracies. I’ll explain what I like and don’t like about those frameworks and what I think is missing from the current conversation. Third, I will introduce a method for constructing conceptual frameworks that is based on the work of Christian List. Fourth, I’ll adopt that method and construct my own suggested conceptual framework: the logical space of algocracy. And then fifth, and finally, I will highlight some of the advantages and disadvantages of this logical space.

At the outset, I want to emphasise that everything I present here today is a work in progress. I know speakers always say this in order to protect themselves from criticism, but it’s more true in this case than most. I’ve been mulling over this framework for a couple of years but never pursued it in any great depth. I agreed to give this talk partly in an attempt to motivate myself to think about it some more. Of course, I agreed to this several months ago and, predictably and unsurprisingly, I managed to procrastinate about it until five days ago when I started writing this talk.

I’m not going to say that the ideas presented here are under-baked, but I will say that they are under-cooked. I hope they are thought-provoking and that in the discussion session afterwards we can figure out whether they are worth bringing to the table. (Apologies for the strained culinary metaphor)


1. Why I love Conceptual Frameworks
I use the term ‘conceptual framework’ to describe any thinking tool that tries to unify and cohere concepts and ideas. I’m a big fan of conceptual frameworks. In many ways, I have spent the past half decade collecting them. This is one of the major projects on my blog. I like to review conceptual frameworks developed by other authors, play around with them, see if I truly understand how they work, and then distill them down into one-page images, flowcharts and diagrams.

In preparation for this talk, I decided to look over some of my past work and I thought I would share with you a few of my favourite conceptual frameworks.



First up is Nicole Vincent’s Structured Taxonomy of Responsibility Concepts. This is something I stumbled upon early in my PhD research about the philosophy of criminal responsibility. It has long been noted that the word ‘responsible’ can be used to denote a causal relationship, a moral relationship, a character trait, and an ethical duty, among other things. HLA Hart tried to explain this in his famous parable of the sea captain and the sinking ship. The beauty of Vincent’s framework is that it builds upon the work done by Hart and maps out the inferential relationships between the different concepts of responsibility.



Second, we have Quentin Skinner’s Genealogy of Freedom, a wonderfully elegant family tree of the major concepts of freedom that have been articulated and defended since the birth of modern liberalism. Skinner describes the basic core concept of freedom as the power to act plus some additional property. He then traces out three major accounts of that additional property: non-domination; non-interference; and self-realisation.



Third, there is Westen’s four concepts of consent. Consent is often described as being a form of ‘moral magic’ - it is the special ingredient that translates morally impermissible acts (e.g. rape) into permissible ones (e.g. sexual intercourse). But the term consent is used in different ways in legal and moral discourse. Westen’s framework divides these concepts of consent into two main sub-categories: factual and prescriptive. He then identifies two further sub-types of consent under each category. This helps to make sense of the different claims one hears about consent in moral and legal debates.



Speaking of claims about consent, here’s a slightly different conceptual framework. The previous examples are all taxonomies and organisational systems. Alan Wertheimer’s map of the major moral claims that are made about intoxication and consent to sex is an attempt to work out how arguments relate to one another. Wertheimer starts his detailed paper on the topic by setting out five claims that are typically made about intoxicated consent. My diagram tries to depict the inferential relationships between these claims. I think this helps to give us a ‘lay of the land’ (so to speak) when it comes to this controversial topic. Once we appreciate the lay of the land, we can understand where someone is coming from when they make a claim about intoxicated consent and where they are likely to end up.



Fifth, here is Matthew Scherer’s useful framework for thinking about the regulation of Artificial Intelligence. This adds another dimension to a conceptual framework: a temporal dimension. It shows how different regulatory problems arise from the use of Artificial Intelligence at different points in time. There are ex ante problems that arise as the technology is being created. And there are ex post problems that arise once it has been deployed and used. It is useful to think about the different temporal locations of these problems because some institutions and authorities have more competence than others to address problems arising at particular stages.



Finally, and another version of the time-sensitive conceptual framework, we have this life-cycle of prescriptive legal theories, developed by David Pozen and Jeremy Kessler. A prescriptive legal theory is a theory of legal decision-making that tries to remove contentious moral content from a decision-making rule (a classic example would be the originalist theory of interpretation). Kessler and Pozen noticed patterns in the development and defence of prescriptive legal theories. Their life-cycle is designed to organise these patterns into distinctive stages. The major insight from this lifecycle is that prescriptive legal theories usually work themselves ‘impure’ - i.e. they end up reincorporating the contentious moral content they were trying to avoid.

I could go on, but I won’t. Like I said, I enjoy collecting and diagramming conceptual frameworks of this sort. But I think it would be more useful at this stage to draw some lessons from these six examples. In particular, it would be useful to highlight the key properties of good conceptual frameworks. I don’t think we can be exhaustive or overly prescriptive in this matter: good, creative scholarship will come up with new and exciting conceptual frameworks. Nevertheless, the following general principles would seem to apply:

A good conceptual framework should enable you to understand some phenomenon of interest.
A good conceptual framework should allow you to see conceptual possibilities you may have missed (e.g. theories of freedom or responsibility that you have overlooked)
A good conceptual framework should enable you to see how concepts relate to one another.
A good conceptual framework should allow you to see opportunities for research and further investigation.
A good conceptual framework should appreciate complexity while aiming for simplicity.

There are also, of course, risks associated with conceptual frameworks. They can be Procrustean. They can become reified (treated as things in themselves rather than as tools for understanding things). They can be overly simplistic, causing us to ignore complexity and miss important opportunities for research. There is a fine line to be walked. Good conceptual frameworks find that line; bad ones miss it.


2. Are there any conceptual frameworks for understanding algocracies?
That’s all by way of set-up. Now we turn to the meat of the matter: can we come up with good conceptual frameworks for understanding algocracies? Two things will help us to answer this question. First, getting a better sense of what an algocracy is. Second, taking a look at some of the existing conceptual frameworks for understanding algocracies.

I said at the very start that ‘algocracy’ is an unorthodox term for an increasingly familiar phenomenon: the use of big data, predictive analytics, machine learning, AI, robotics (etc.) in governance-related systems. The term was not coined by me, though I have certainly run with it over the past few years. The term was coined by the sociologist A. Aneesh during his PhD research back in the early 2000s. That research culminated in a book in 2006 called Virtual Migration in which he used the concept to understand changes in the global labour market. He has also used the term in a number of subsequent papers.

Aneesh’s main interest was in different human governance systems. A governance system can be defined, roughly, like this:

Governance system: Any system that structures, constrains, incentivises, nudges, manipulates or encourages different types of human behaviour.

It’s a very general, wishy-washy definition, but ‘governance’ is quite a general wishy-washy term so that seems appropriate. Aneesh drew a contrast between three main types of governance system in his research: markets, bureaucracies and algocracies. A market is a governance system in which prices structure, constrain, incentivise, nudge (etc) human behaviour. And a bureaucracy is a governance system in which rules and regulations structure, constrain, incentivise, nudge (etc.) human behaviour. Which means that an algocracy can be defined as:

Algocracy: A governance system in which computer coded algorithms structure, constrain, incentivise, nudge, manipulate or encourage different types of human behaviour. (Note: the concept is very similar to the ‘code is law’ idea promoted by Lawrence Lessig in legal theory but to explain the similarities and differences would take too long)

In his study of global labour, Aneesh thought it was interesting how increasing numbers of workers in the developing world (particularly India, where his studies took place) were working for companies and organisations that were legally situated in other jurisdictions. This was thanks to the new technologies (computers + the internet) that facilitated remote work. This gave rise to new algocratic governance systems within corporations, which sidestepped or complemented the traditional market or bureaucratic governance systems within such organisations.

That’s the origin of the term. I tend to use the term in a related but slightly different sense. I certainly look on algocracies as kinds of governance system — ones in which behaviour is shaped by algorithmically programmed architectures. But I also use the term by analogy with terms like ‘democracy’, ‘aristocracy’, ‘technocracy’. In each of those cases, the suffix ‘cracy’ is used to mean ‘rule by’ and the prefix identifies who does the ruling. So ‘democracy’ is ‘rule by the people’ (the demos), aristocracy is ‘rule by aristocrats’ and so on. Algocracy then can also be taken to mean ‘rule by algorithm’, with the emphasis being on rule. In other words, for me ‘algocracy’ captures the authority that is given to algorithmically coded architectures in contemporary life. Whenever you are denied a loan by a credit-scoring algorithm; whenever you are told which way to drive by a GPS routing-algorithm; or whenever your name is added to a no-fly list by a predictive algorithm, you are living within an algocratic system. It is my belief, and I think this is borne out in reality, that algocratic systems are becoming more pervasive and important in human life. I especially think this is true because algorithms are the common language in which computers, smart devices and robots communicate. So as these artifacts become more pervasive, so too will the phenomenon of algocracy.

So what kinds of conceptual frameworks can we bring to bear on this phenomenon? Some work has been done already on this score. There are emerging bodies of scholarship in law, sociology, geography, philosophy, and information systems theory (among many more) that address themselves to the rise of algocracy (though they tend not to use that term) and some scholars within those fields have developed organisational frameworks for understanding and researching algocracies. I’ll focus on legal contributions in this presentation since that’s what I am most familiar with, and since I think what has been presented in legal theory so far tends to be shared by other disciplines.

I’ll start by looking at two frameworks that have been developed in order to help us understand how algocratic systems work.

The first tries to think about various stages involved in the construction and implementation of algocratic systems. Algocracies do things. They make decisions about human life; they set incentives; they structure possible forms of behaviour; and so on. How do they manage this? Much of the answer lies in how they use data. Zarsky (2013) suggests that there are three main stages in an algocratic system: (i) a data collection stage (where information about the world and relevant human beings is collected and fed into the system); (ii) a data analysis stage (where algorithms structure, process and organise that data into useful or salient chunks of information) and (iii) a data usage stage (where the algorithms make recommendations or decisions based on the information they have processed).

Citron and Pasquale (2014) develop a similar framework. They use different terminology but they talk about the same thing. They focus in particular on credit-scoring algocratic systems which they suggest have four main stages to them. This is illustrated in the diagram below:



Effectively, what they have done is to break Zarsky’s ‘usage’ stage into two: a dissemination stage (where the information processed and analysed by the algorithms gets communicated to a decision-maker) and a decision-making stage (where the decision-maker uses the information to do something concrete to an affected party, e.g. deny them a loan because of a bad credit score).
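To make the relationship between the two stage-frameworks concrete, here is a throwaway sketch. The stage names come from the text; the function and variable names are my own invention.

```python
# Zarsky's three stages of an algocratic system.
ZARSKY_STAGES = ["data collection", "data analysis", "data usage"]

def citron_pasquale(stages):
    """Refine Zarsky's stages by splitting 'data usage' into
    dissemination and decision-making, as Citron and Pasquale do."""
    refined = []
    for stage in stages:
        if stage == "data usage":
            refined += ["dissemination", "decision-making"]
        else:
            refined.append(stage)
    return refined

print(citron_pasquale(ZARSKY_STAGES))
# ['data collection', 'data analysis', 'dissemination', 'decision-making']
```

The point of the sketch is simply that the four-stage framework is a refinement of the three-stage one, not a rival to it.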

Another thing that people have tried to do is to figure out how humans relate to or get incorporated into algocratic systems. A common classificatory framework — which appears to have originated in the literature on automation — distinguishes between three kinds of system:

Human-in-the-loop Systems: These are algocratic systems in which an input from a human decision-maker is necessary in order for the system to work, e.g. to programme the algorithm or to determine what the effects of the algorithmic recommendation will be.
Human-on-the-loop Systems: These are algocratic systems which have a human overseer or reviewer. For example, an online mortgage application system might generate a verdict of “accept” or “reject” which can then be reviewed or overturned by a human decision-maker. The system can technically work without human input, but can be overridden by the human decision-maker.
Human-out-of-the-loop Systems: This is a fully algocratic system, one which has no human input or oversight. It can collect data, generate scores, and implement decisions without any human input.
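The three categories can be captured in a small sketch (the sketch is mine, not from the automation literature; the mortgage example follows the text):

```python
from enum import Enum

class HumanRole(Enum):
    IN_THE_LOOP = "human input needed for every decision"
    ON_THE_LOOP = "human may review or override the system"
    OUT_OF_THE_LOOP = "no human input or oversight"

def final_verdict(algorithmic_verdict, role, human_verdict=None):
    if role is HumanRole.IN_THE_LOOP:
        # The system cannot act without a human contribution.
        if human_verdict is None:
            raise ValueError("an in-the-loop system cannot act without human input")
        return human_verdict
    if role is HumanRole.ON_THE_LOOP and human_verdict is not None:
        return human_verdict  # the overseer overturns (or confirms) the system
    return algorithmic_verdict  # out-of-the-loop, or no review exercised

# An on-the-loop mortgage system: the algorithm says "reject",
# but a human reviewer overturns it.
print(final_verdict("reject", HumanRole.ON_THE_LOOP, human_verdict="accept"))  # accept
```

The sketch makes the taxonomy's key variable explicit: whether a human contribution is required, merely permitted, or absent altogether.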

This framework is useful because the relationship of humans to these systems is quite important when we turn to consider the normative and ethical implications of algocracy.

This brings us to the third type of conceptual framework I wanted to mention. These ones focus on identifying and taxonomising the various problems that arise from the emergence of algocratic systems. Zarsky, for instance, developed the following taxonomy, which focused on two main types of normative problem: fairness-related problems and efficiency-related problems. I constructed this diagram to visually represent Zarsky’s taxonomy.



More recently, Mittelstadt et al have proposed a six-part conceptual map to help understand the ethical challenges posed by algocratic decision-making systems. This can be found in their paper ‘The Ethics of Algorithms’.

While each of these conceptual frameworks has some use, I find myself dissatisfied by the work that has been done to date. First, I worry that the frameworks introduced to help us understand how algocratic systems work are both too simplistic and too disconnected. It is important to think about the different stages inside an algocratic system and about how humans relate to and get affected by those systems. But it is important to remember that the relations that humans have to these systems can vary, depending on the stage that we happen to be interested in. There is a degree of complexity to how these stages get constructed and this is something that is missed by the simple ‘in the loop/on the loop/out of the loop’ framework. Furthermore, while I’m generally much happier with the work done on taxonomising and categorising the ethical challenges of algocracy, I worry that this work also tends to be disconnected from the complexities of algocratic systems. This is something that a good conceptual framework would avoid.

So can we come up with one?


3. A Model for Building Conceptual Frameworks: List’s Logical Spaces
I think we can. And I think some of the work done by Christian List is instructive in this regard. So what I propose to do in the remainder of this talk is develop a conceptual framework for understanding algocracy that is modelled on a series of conceptual frameworks developed by List.

List, in case you don’t know him, is a philosopher at the London School of Economics. He is a major proponent of formalised and axiomatised approaches to philosophy. Most of his early work is on public choice theory, voting theory and decision theory. More recently, he has turned his attention to other philosophical debates (e.g. philosophy of mind and free will). He has also written a couple of papers in the past half decade on the logical spaces in which different political concepts such as ‘democracy’ and ‘freedom’ live.

List’s logical spaces try to identify all the concepts of freedom or democracy that are possible, given certain constraints. It is difficult to understand this methodology in the abstract, so let’s look at his logical spaces of freedom and democracy for guidance.

Freedom is a central concept in liberal political theory. Indeed, liberalism is, in essence, founded on the notion that political systems must respect individual freedom. But what does this freedom consist in? List argues that two major theories of freedom predominate in contemporary debates (cf. Skinner’s genealogy of freedom, which I detailed earlier on): freedom as non-interference and freedom as non-domination. The former holds that we are free if we are free from relevant external constraints; the latter holds that we are free if we are robustly free from arbitrary constraints.

The difference is subtle to the uninitiated but essential to those who care about these things. I have written several posts about both theories in the past if you care to learn more (LINKs). List suggests that the theories vary along two dimensions: the modal and the moral. That is to say, they vary depending on (a) whether they think the freedom to act requires not just freedom in this actual world but freedom across a range of possible worlds; and (b) whether they only recognise as interferences with freedom those interferences that are not morally grounded (i.e. interferences that are ‘arbitrary’). Freedom as non-interference is, typically, non-modal and non-moral: it focuses on what happens in the actual world, but counts all relevant interferences in the actual world, regardless of their moral justification, as freedom-undermining. Contrast that with republican theories of freedom as non-domination. These theories are modal and moral: they depend on the absence of interference across multiple possible worlds but only count interferences that are arbitrary. (Technical aside: some republicans, like Pettit, have argued that freedom as non-domination can be de-moralised but List argues that this is an unstable position - I won’t get into the details here)

What’s interesting from List’s perspective is that even though most of the contemporary debate settles around these two concepts of freedom, there is a broader logical space of freedom that is being ignored. After all, there are two dimensions along which theories of freedom can vary which suggests, at a minimum, four logically possible theories of freedom. The two-by-two matrix below depicts this logical space:



The advantages of mapping out this logical space become immediately apparent. Doing so allows List to discover and argue in favour of an ignored or overlooked theory of freedom: the one in the bottom right corner. And this is exactly what he does in a paper published last year in Ethics with Laura Valentini entitled ‘Freedom as Independence’.

How about democracy? List takes a similar approach. He argues that democracy is, at its root, a collective decision-making procedure. It is a way of taking individual attitudes toward propositions or claims (e.g. ‘I prefer candidate A to candidate B’ or ‘I prefer policy X to policy Y’) and aggregating them together to form some collective output. This is illustrated schematically in the diagram below.



One of List’s key arguments, developed in his paper ‘The Logical Space of Democracy’, is that the space of logically possible collective decision procedures — i.e. ways of going from the individual attitudes to collective outputs — is vast. Much larger than any human can really comprehend. To give you a sense of how vast it is, imagine a really simple decision problem in which two people have to vote on two options: A and B. There are four possible combinations of votes (as each voter has two options). And there are several possible ways to go from those combinations to a collective decision (2^4 = 16, to be precise). For example, you could adopt a constant A procedure, in which the collective attitude is always A, irrespective of the individual attitudes. Or you could have a constant B procedure, in which the collective attitude is always B, irrespective of the individual attitudes. We would typically exclude such possibilities because they seem undesirable or counterintuitive, but they do lie within the space of logically possible aggregation functions. Likewise, there are dictatorial decision procedures (always go with voter 1, or always go with voter 2) and inverse dictatorial decision procedures (always do the opposite of voter 1, or the opposite of voter 2).

You might find this slightly silly because, at the end of the day, there are still only two possible collective outputs (A or B). But it is important to realise that there are many logically possible ways to go from the individual attitudes to the collective one. This highlights some of the problems that arise when constructing collective decision procedures. And, remember, this is just a really simple example involving two voters and two options. The logical space gets unimaginably large if we go to decision problems involving, say, ten voters and two options (List has the calculation in his paper; it is 2^1024).
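To make the counting concrete, here is a short Python sketch (my own illustration, not List’s) that enumerates every logically possible aggregation function for the two-voter, two-option case, including the constant and dictatorial procedures mentioned above:

```python
from itertools import product

# Two voters, two options. A procedure maps each profile of
# individual votes to a collective output.
profiles = list(product("AB", repeat=2))   # 4 possible vote combinations
assert len(profiles) == 4

# Every aggregation function assigns A or B to each of the 4 profiles,
# so there are 2**4 = 16 logically possible procedures.
procedures = [dict(zip(profiles, outputs))
              for outputs in product("AB", repeat=len(profiles))]
assert len(procedures) == 16

# A few of the possibilities mentioned in the text:
constant_a = {p: "A" for p in profiles}                      # always output A
dictator_1 = {p: p[0] for p in profiles}                     # always follow voter 1
inverse_1 = {p: "B" if p[0] == "A" else "A" for p in profiles}  # oppose voter 1

assert constant_a in procedures
assert dictator_1 in procedures
assert inverse_1 in procedures
```

The same construction with ten voters would have 2^10 = 1024 profiles, and hence 2^1024 possible procedures — far too many to enumerate.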

A logical space with that many possibilities would not provide a useful conceptual framework. Fortunately, there is a way to narrow things down. List does this by adopting an axiomatic method. He specifies some conditions (axioms) that any democratic decision procedure ought to satisfy in advance, and then limits his search of the logical space of possible decision procedures to the procedures that satisfy these conditions. In the case of democratic decision procedures, he highlights three conditions that ought to be satisfied: (i) robustness to pluralism (i.e. the procedure should accept any possible combination of individual attitudes); (ii) basic majoritarianism (i.e. the collective decision should reflect the majority opinion); and (iii) collective rationality (i.e. the collective output should meet the basic criteria for rational decision making). He then highlights a problem with these three conditions. It turns out that it is impossible to satisfy all three of them at the same time (due to classic ‘voting paradoxes’). Consequently, the space of logically possible democratic decision procedures is smaller than we might first suppose. We are left with only those decision procedures that satisfy at least two of the mentioned conditions. Once you pare the space of possibilities down to this more manageable size you can start to think more seriously about its topographical highlights. That’s what the diagram below tries to illustrate.
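The impossibility claim rests on the classic Condorcet cycle. A quick sketch (my own, for illustration) shows how three individually rational rankings generate an irrational majority preference:

```python
# Three voters with transitive individual rankings over options A, B, C.
rankings = [
    ["A", "B", "C"],   # voter 1: A > B > C
    ["B", "C", "A"],   # voter 2: B > C > A
    ["C", "A", "B"],   # voter 3: C > A > B
]

def majority_prefers(x, y):
    """True if a majority of voters rank x above y."""
    votes = sum(r.index(x) < r.index(y) for r in rankings)
    return votes > len(rankings) / 2

# Pairwise majority voting yields a cycle: A beats B, B beats C, yet C beats A.
assert majority_prefers("A", "B")
assert majority_prefers("B", "C")
assert majority_prefers("C", "A")
```

The collective preference is cyclic, so majority voting over every possible profile cannot always deliver a rational collective output.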



I don’t want to dwell on the intricacies of List’s logical spaces; I reference them only because I think they provide a useful methodology for constructing conceptual frameworks. They balance the tradeoff between complexity and simplicity quite effectively and exhibit a number of other features listed earlier on. By considering the various dimensions along which particular phenomena can vary, List allows us to see conceptual possibilities that are often overlooked. Sometimes the number of conceptual possibilities identified can be overwhelming, but by applying certain axioms we can constrain our search of the logical space and make it more manageable.


4. Constructing A Logical Space of Algocracy
So can we apply the same approach to algocracy? I think we can. We can start by identifying the parameters (dimensions) along which various algocratic procedures vary.

At a first pass, three parameters seem to define the space of possible algocratic decision procedures. The first is the particular domain or type of decision-making. Legal and bureaucratic agencies make decisions across many different domains. Planning agencies make decisions about what should be built and where; revenue agencies sort, file and search through tax returns and other financial records; financial regulators make decisions concerning the prudential governance of financial institutions; energy regulators set prices in the energy industry and enforce standards amongst energy suppliers; the list goes on and on. In the formal model I outline below, the domain of decision-making is ignored. I focus instead on two other parameters defining the space of algocratic procedures. But this is not because the domain is unimportant. When figuring out the strengths or weaknesses of any particular algocratic decision-making procedure, the domain of decision-making should always be specified in advance.

The second parameter concerns the main components of the decision-making ‘loop’ that is utilised by these agencies. In section two, I mentioned Zarsky, Citron and Pasquale’s attempts to identify the different ‘stages’ in algocratic decision-procedures. One thing that strikes me about the stages identified by these authors is how closely they correspond to the stages identified by authors looking at automation and artificial intelligence. For instance, the collection, processing and usage stages identified by Zarsky et al feel very similar to the sensing, processing and actuating stages identified by AI theorists and information systems engineers.

This makes sense. Humans in legal-bureaucratic agencies use their intelligence when making decisions. Standard models of intelligence divide this capacity into three or four distinct tasks. If algocratic technologies are intended to replace or complement that human intelligence, it would make sense for those technologies to fit into those distinct task stages.

My own preferred model for thinking about the stages in a decision-making procedure is to break it down into four distinct stages. As follows:

(a) Sensing: the system collects data from the external world. 
(b) Processing: the system organises that data into useful chunks or patterns and combines it with action plans or goals. 
(c) Acting: the system implements its action plans. 
(d) Learning: the system uses some mechanism that allows it to learn from what it has done and adjust its earlier stages (this results in a ‘feedback loop’).

Although individual humans within bureaucratic agencies have the capacity to perform these four tasks themselves, the work of an entire agency can also be conceptualised in terms of these four tasks. For example, a revenue collection agency will take in personal information from the citizens in a particular state or country (sensing). This information will typically take the form of tax returns, but may also include other personal financial information. The agency will then sort that collected information into useful patterns, usually by singling out the returns that call for greater scrutiny or auditing (processing). Once they have done this they will actually carry out audits on particular individuals, and reach some conclusion about whether the individual owes more tax or deserves some penalty (acting). Once the entire process is complete, they will try to learn from their mistakes and triumphs and improve the decision-making process for the coming years (learning).
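The revenue agency example can be sketched as a sense-process-act-learn loop in code. This is a purely hypothetical illustration: the class, the threshold rule and the field names are my own inventions, not a description of any real system.

```python
# A minimal, hypothetical sketch of the four-stage decision loop.
class DecisionLoop:
    def __init__(self):
        self.flag_threshold = 50_000  # learned parameter, adjusted over time

    def sense(self, returns):
        """Collect data from the external world (tax returns)."""
        return list(returns)

    def process(self, data):
        """Organise the data: single out returns needing scrutiny."""
        return [r for r in data if r["income"] > self.flag_threshold]

    def act(self, flagged):
        """Implement the action plan: mark flagged returns for audit."""
        return [{**r, "audit": True} for r in flagged]

    def learn(self, audits, errors):
        """Adjust earlier stages based on outcomes (the feedback loop)."""
        if errors > len(audits) / 2:     # too many wasted audits
            self.flag_threshold *= 1.1   # raise the bar for flagging

loop = DecisionLoop()
data = loop.sense([{"income": 30_000}, {"income": 80_000}])
audits = loop.act(loop.process(data))
loop.learn(audits, errors=1)
```

Each of the four methods could, in principle, be performed by a human, shared, supervised, or fully automated — which is exactly the third parameter introduced below.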

The important point in terms of mapping out the logical space of algocracy is that algorithmically coded architectures could be introduced to perform one or all of these four tasks. Thus, there are subtle and important qualitative differences between the different types of algocratic system, depending on how much of the decision-making process is taken over by the computer.

In fact, it is more complicated than that and this is what brings us to the third parameter. This one concerns the precise relationship between humans and algorithms for each task in the decision-making loop. As I see it, there are four general relationship-types that could arise: (1) humans could perform the task entirely by themselves; (2) humans could share the task with an algorithm (e.g. humans and computers could perform different parts of the analysis of tax returns); (3) humans could supervise an algorithmic system (e.g. a computer could analyse all the tax returns and identify anomalies and then a human could approve or disapprove their analysis); and (4) the task could be fully automated, i.e. completely under the control of the algorithm.

This is where things get interesting. Using the last two parameters, we can construct a grid which we can use to classify algocratic decision-procedures. The grid looks something like this:



This grid tells us that when constructing or thinking about an algocratic system we should focus on the four different tasks in the typical intelligent decision-making loop and ask of each task: how is this task being distributed between the humans and algorithms? When we do so, we see the logical space of possible algocratic decision procedures.


5. Advantages and Disadvantages of the Logical Space Model
That brings us to the critical question: does this conceptual framework have any of the virtues I mentioned earlier on?

I think it has a few. I think it captures the complexity of algocracy in a way that existing conceptual frameworks do not. It tells us that there is a large logical space of possible algocratic systems. Indeed, it allows us to put some numbers on it. Since there are four stages and four possible relationship-types between humans and computers at those four stages, it follows that there are 4^4 (i.e. 256) possible systems within any given decision-making domain. What’s more, I think you could make the logical space even more complex by adding a third dimension of variance. What would that dimension consist in? Well one obvious suggestion would be to distinguish between different types of algorithmic assistance/replacement at each of the four stages. For instance, computer scientists sometimes distinguish between algorithmic processes that are (i) interpretable and (ii) non-interpretable (i.e. capable of being deconstructed and understood by humans or not). That could be an additional dimension of variance. It could mean that for each stage in the decision-making process there are 8 possible configurations, not just four. That would give us a logical space consisting of 8^4 (i.e. 4,096) possibilities.

But the interpretability/non-interpretability distinction is just one among many possible candidates for a third dimension of variance. Which one we pick will depend on what we are interested in (I’ll return to this point below).
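The arithmetic behind these counts is easy to check:

```python
stages = 4         # sensing, processing, acting, learning
relationships = 4  # human-only, shared, supervised, fully automated

two_d = relationships ** stages
assert two_d == 256       # 4^4 possible systems in the two-dimensional model

# A binary interpretable/non-interpretable distinction doubles the
# configurations available at each stage: 8 options per stage.
three_d = (relationships * 2) ** stages
assert three_d == 4096    # 8^4 possible systems in the three-dimensional model
```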

Another virtue of the logical space model is that it gives us an easy tool for coding the different possible types of algocratic system. For the two-dimensional model, I suggest that this be done using square brackets and numbers. Within the square brackets there would be four separate number locations. Each location would represent one of the four decision-making tasks. From left-to-right this would read: [sensing; processing; acting; learning]. You then replace the names of those tasks with numbers ranging from 1 to 4. These numbers would represent the way in which the task is distributed between the humans and algorithms. The numbers would correspond to the numbers given previously when explaining the four possible relationships between humans and algorithms. So, for example:

[1, 1, 1, 1] = Would represent a non-algocratic decision procedure, i.e. one in which all the decision-making tasks are performed by humans.
[2, 2, 2, 2] = Would represent an algocratic decision procedure in which each task is shared between humans and algorithms.
[3, 3, 3, 3] = Would represent an algocratic decision procedure in which each task is performed entirely by algorithms, but these algorithms are supervised by humans with some residual possibility of intervention.
[4, 4, 4, 4] = Would represent a pure algocratic decision procedure in which each task is performed by an algorithm, with no human oversight or intervention.

If we created a three dimensional logical space, we could simply modify the coding system by adding a letter after each number to indicate the additional variance. For example, if we adopted the interpretability/non-interpretability dimension, we could add ‘i’ or ‘ni’ after each number to indicate whether the step in the process was interpretable (i) or not (ni). As follows:

[4i, 4i, 4i, 4i] = Would represent a pure algocratic procedure that is completely interpretable.
[4i, 4ni, 4i, 4ni] = Would represent a pure algocratic procedure that is interpretable at the sensing and acting stages, but not at the processing and learning stages.
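A small parser for these codes might look as follows. This is a speculative sketch: the function and the names are my own inventions, not part of any existing tool.

```python
# A hypothetical validator/parser for the bracket codes described above.
STAGES = ("sensing", "processing", "acting", "learning")
SUFFIXES = {"": None, "i": "interpretable", "ni": "non-interpretable"}

def parse_code(code):
    """Turn a code like '[4i, 4ni, 4i, 4ni]' into a stage-by-stage dict."""
    parts = [p.strip() for p in code.strip("[]").split(",")]
    if len(parts) != len(STAGES):
        raise ValueError("a code needs exactly four entries")
    parsed = {}
    for stage, part in zip(STAGES, parts):
        level, suffix = int(part[0]), part[1:]
        if level not in (1, 2, 3, 4) or suffix not in SUFFIXES:
            raise ValueError(f"bad entry: {part!r}")
        parsed[stage] = (level, SUFFIXES[suffix])
    return parsed

pure = parse_code("[4i, 4ni, 4i, 4ni]")
assert pure["processing"] == (4, "non-interpretable")
assert parse_code("[1, 1, 1, 1]")["sensing"] == (1, None)
```

A parser like this would make it straightforward to compare systems, or to count how many stages of a given system are fully automated.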

This coding mechanism could have some practical advantages. Three are worth mentioning. First, it could give any designer and creator of an algocratic system a quick tool for figuring out what kind of system they are creating and the potential challenges that might be raised by the construction of that system. Second, it could give a researcher something to use when investigating real-world algocratic systems and seeing whether they share further properties. For instance, you could start investigating all the [3, 3, 3, 3] systems across various domains of decision-making and see whether the human supervision is active or passive across those domains. Third, it might give us a simple tool for measuring how algocratic a system is or how algocratic it becomes over time. So we might be able to say that a [4ni, 4ni, 4ni, 4ni] is more algocratic than a [4i, 4i, 4i, 4i] and we might be able to spot the drift towards more algocracy within a decision-making domain.

But there are also clearly disadvantages with the logical space model. The most obvious is that the four stages and four relationships are not discrete in the way that the model presumes. To say that a task is ‘shared’ between a human and an algorithm is to say something imprecise and vague. There may be many different possible ways in which to share a task. Not all of them will be the same. This is also true for the description of the tasks. ‘Processing’, ‘collecting’ and ‘learning’ are all complicated real-world tasks. There are many different ways to process, collect and learn. That additional complexity is missed by the logical space model.

It’s hard to say whether this is a fatal objection or not. All conceptual models involve some abstraction and simplification of reality. And all conceptual models ignore some element of variation. List’s logical space of freedom, for instance, involves similarly large amounts of abstraction and simplification. To say that theories of freedom vary along modal and moral dimensions is to say something very vague and imprecise. Specific theories of freedom will vary in how modal they are (i.e. how many possible worlds they demand the absence of interference in) and in their understanding of what counts as a morally legitimate interference. As a result of this, List prefers to view his logical space of freedom as a ‘definitional schema’ - something that is fleshed out in more detail with specific conceptualisations of the four main categories of freedom. It is tempting to view the logical space of algocracy in a similar light.

Another obvious problem with the logical space model is that it is constructed with a particular set of normative challenges in mind. I was silent about this in my initial description of it, and indeed I didn’t fully appreciate it until afterwards, but it’s pretty clear looking back on it that my logical space is useful primarily for those with an interest in the procedural virtues of an algocratic system. As I have argued elsewhere, one of the main problems with the rise of algocracy is that it could undermine meaningful human participation in and comprehension of the systems that govern our lives. That’s probably why my logical space model puts such an emphasis on the way in which tasks are shared between humans and algorithms. I’m concerned that when there is less sharing, there is less participation and comprehension.

But this means that the model is relatively silent about some of the other normative concerns one could have about these technologies (e.g. bad data, biased data, negative consequences). It’s not that these concerns are completely shut out or shut down; it’s just that they aren’t going to be highlighted simply by identifying the location with the logical space that is occupied by any particular algocratic system. What could happen, however, is that empirical investigation of algocratic systems with similar codes could reveal additional shared normative advantages/disadvantages, so that the code becomes shorthand for those other concerns.

Again, it’s hard to say whether this is fatal or not. It might just mean that the logical space I constructed is not ‘the’ logical space of algocracy but rather ‘a’ logical space of algocracy. Other people, with other interests, could construct other logical spaces. That doesn’t mean this particular logical space is irrelevant or useless; it just means its relevance and utility are more constrained.

Anyway, I think I have said enough for now. I’ll leave things there and hand it over to you for questions.

Monday, November 21, 2016

Episode #15 - Nicole Vincent on Neurointerventions and Human Happiness


In this episode I talk to Nicole Vincent. Nicole is an international philosopher extraordinaire. She has appointments at Georgia State University, TU Delft (Netherlands) and Macquarie University (Sydney). Nicole's work focuses on the philosophy of responsibility, cognitive enhancement and neuroethics. We talk about two main topics: (i) can neuroscience make us happier? and (ii) how should we think about radically changing ourselves through technology?  

You can download the episode here. You can also listen below or subscribe on Stitcher or iTunes (via RSS feed).

Show Notes

  • 0:00 - 0:50 - Introduction to Nicole
  • 0:50 - 8:50 - What is a happy life? Objective vs Subjective Views
  • 8:50 - 13:20 - What is a meaningful life? Does meaning differ from happiness?
  • 13:20 - 17:03 - Who knows best about our own happiness? Can scientists tell if we are happy?
  • 17:03 - 25:25 - The distinction between occurrent (in the moment) happiness and dispositional (predictive) happiness
  • 25:25 - 37:05 - The danger of scientists thinking they know best about occurrent happiness
  • 37:05 - 46:20 - Could scientists know best about dispositional happiness?
  • 46:20 - 56:05 - Neuroplasticity and the normative value of facts about the brain
  • 56:05 - 1:01:45 - What if technology allows us to change everything about ourselves?
  • 1:01:45 -1:05:40 - Nicole's opposition to radical transhumanism
  • 1:05:40 - 1:13:50 - How should we think about transformative change?
  • 1:13:50 - End - How should society regulate technologies that allow for transformative change?
 


Thursday, November 17, 2016

Understanding Hayek's Knowledge Argument (1): Prices as Signals




How should we decide what gets made, when it gets made, and who should get it once it is made? This is one of the foundational questions of economics. Proponents of the free market insist that private individuals, interacting with one another via a marketplace, responding to a price mechanism, should determine the answers; proponents of central planning think that a suitably organised government bureaucracy should do the work; others prefer a mixed approach.

Friedrich Hayek’s knowledge argument is a famous contribution to this debate. It extols the benefits of the free market over central planning. Although there are many explanations and commentaries on Hayek’s argument, I have yet to come across one that I really like — one that does justice to the nuances of Hayek’s original claims while at the same time highlighting their flaws. Richard Bronk’s article ‘Hayek on the wisdom of prices: A reassessment’ is the closest thing I have read, but Bronk’s article lacks concision and clarity.

I want to make up for this. I want to extract the logical core of Hayek’s argument, revealing the key premises and assumptions that go into it, and then subject it to critical scrutiny. I am going to do this over the course of two posts. In this first post I’ll just go through the main steps in Hayek’s argument, briefly commenting on its flaws. In the second post I’ll look in more detail at the weaknesses in the argument, focusing in particular on Bronk’s own arguments about the flaws of the price mechanism.


1. From the Distribution Problem to the Knowledge Problem
Let’s call the ‘what gets made, who gets it’ (etc) problem the ‘distributional problem’:

Distributional problem: All societies need to figure out how best to distribute their scarce resources (material resources, labour, time etc.), i.e. they need to figure out what gets done, when and by whom.

It is important not to underestimate the difficulty of the distributional problem. Human society is both complicated and complex. It consists of many different, dynamically interrelated parts. Figuring out who wants what, who needs what, and how they relate to one another is a fiendishly difficult thing. There are thousands of distributional decisions that need to be made on a minute-to-minute basis. How many shoes should be made? How many shoelaces? How much food should be grown? What types of food? What skills should be taught? Who should teach them? And on and on.

Hayek’s key insight was to suggest that the answer to the distributional problem depends upon the answer to another problem:

Knowledge Problem: To figure out who should get what and when, we need to know certain things: we need to know what people want and need, what resources are available to meet those wants and needs, what the best (most efficient) means of deploying those resources is, how people react to our distributional decisions and so on.

It is also important not to underestimate the difficulty of the knowledge problem. Given the complex and complicated nature of human society, there are many discrete and constantly changing knowledge gaps that need to be addressed if we are to figure out who should get what and when.

The essence of Hayek’s knowledge argument is that central planners are not very good at solving the knowledge problem whereas free markets, despite some obvious flaws, are. Those two claims constitute the core of his ‘knowledge argument’. Let’s look at both in more detail.


2. The Case Against Central Planning
The first claim is that central planners fail to solve the knowledge problem. Why not? To answer that we need to understand what central planning is, and it is, in fact, a somewhat complex notion. Roughly, we are talking about a state-run bureaucracy that collects information and makes decisions about what should get made and how it should be distributed. There are many different ways for this to play out in practice. You could imagine a single, dictatorial bureaucrat sitting at the centre of an institution deciding what should get done and when in a largely intuitive manner. Or you could imagine something more complex and technocratic, like the cybernetic management system that was used by the Allende government in Chile in the 1970s. There are also ways in which central planners could create market-like structures that replicate some, but not all, features of the free market (itself a highly contested concept). The possible market-like structures that could be adopted featured heavily in the general ‘socialist calculation debate’ in economics (to which Hayek’s argument is a contribution).

So when Hayek says that a centrally planned economy will not solve the knowledge problem what kind of centrally planned economy is he talking about? The general model would be something along the lines of what existed in Soviet Russia: reasonably complex bureaucratic organisations where information is collected and processed by diverse (sometimes politically antagonistic) groups and fed through some decision-making system. The key point is that there is a kind of ‘bottleneck’ within the system. Instead of distributional decisions being made all the time and in parallel, distributional decisions are forced through a single bureaucratic decision-making node. This means that all the information relevant to the distributional decision needs to reach that node. Hayek argues that this is not going to happen.

The argument works like this:


  • (1) If a centrally planned economy is going to work (i.e. going to solve the distribution problem), central planners will need the knowledge relevant to making distributional decisions.
  • (2) Central planners cannot have the relevant knowledge.
  • (3) Therefore, a centrally planned economy is not going to work.


The second premise is key here. Hayek presents four arguments in support of that premise. The first is:


  • (4) Much of the knowledge required for distributional decision-making is tacit, i.e. cannot be easily translated into explicit representations that can then be communicated between relevant decision-makers.


I discussed the phenomenon of tacit knowledge in a previous post about automation and unemployment. The basic idea is that much of the know-how underlying the creation and supply of goods and services is tacit. It is based on practical, oftentimes subconscious, skills that individual workers and manufacturers have acquired over the course of their working lives. Think of the expert surgeon who has performed thousands of hours of complex surgery and intuits when something is going wrong. They act on these intuitions and they often, consequently, improve the quality of the service they provide. There is nothing necessarily mystical or unusual in this ability. The intuitions don’t come from nowhere; they come from practical experience. But they are, nonetheless, very difficult to express and communicate. It is hard to see how a central planner could gain access to this tacit knowledge unless they themselves replicated the experience levels of the individual suppliers of goods and services.

The second argument in support of premise (2) is:


  • (5) The knowledge required is too diverse to be amassed into (and appreciated by) one perspective.


Markets are complex and multi-faceted. The knowledge any one individual has of the market is necessarily partial and incomplete. Hayek argues, and this seems plausible, that no one individual or group is likely to be able to amass all those partial perspectives into a unified and complete perspective. Instead, what will happen is that central planners will think they have complete knowledge. They will become over-confident in their ability to understand and predict the behaviour of the people affected by their decisions.

The third argument in support of premise (2) is:


  • (6) Central planners cannot know subjective values and subjective values are part of the knowledge needed to solve the distributional problem.


Hayek defended the subjective theory of value. He held that the value of a good or service was determined by the interaction of the subjective preferences of the agents supplying and demanding that good or service. It was not determined by any intrinsic/objective property of the good or service. Scarce resources are best distributed when the actions of suppliers are responsive to the preferences of demanders. But since it is impossible to know what is really going on in someone’s mind — i.e. to know what they truly prefer — it follows that it is impossible for a central planner to have access to all the knowledge they need. At best, they will get a partial understanding of subjective value by examining the external behaviour of individuals, but this external behaviour can be misleading.

The fourth and final argument in support of premise (2) is:


  • (7) Central planners cannot rival the knowledge discovery mechanisms of the free market, and knowledge discovery is also essential to solving the knowledge problem.


This is probably the most complicated aspect of the case against central planning. The idea is that the efficient distribution of goods and services does not just depend on current knowledge but also on creation and innovation. Suppliers discover more efficient ways of doing things: they innovate in production processes, creating new machinery and new tools, and they innovate in supply chains, creating new ways to get goods and services to consumers. In other words, they create new forms of knowledge that then get fed into the resolution of the distributional problem. Central planners may be able to encourage some experimentation and innovation but they will never, according to Hayek, rival the creative potentialities of the free market. (I should say that this is something hotly contested by defenders of socialist planning like Oskar Lange, and there are some historical counterpoints highlighting the role of big government projects in innovation and experimentation.)

Anyway, this gives us the first part of Hayek’s argument (the case against central planning). The argument is diagrammed below.




3. The Case in Favour of Free Markets
The second part of Hayek’s argument is an argument in favour of free markets. To some extent, this argument is implicit in the critique of central planning: the knowledge gaps faced by central planners would not, it is claimed, arise on the free market. But you cannot get all the way to that conclusion from what has been said thus far. After all, at least some of the knowledge gaps that arise for central planners would seem to arise on the free market too. If knowledge is tacit, diverse and subjective, then surely it is just as difficult for the players on the market to discover it?

This is where Hayek makes his most famous contribution. He argues that the free market has one tool at its disposal that can help to fill in these knowledge gaps: the price mechanism. For him, the free market functions like an information communications system, with prices being the signals that communicate important information (knowledge) to the players on the market.

The argument works a little something like this:


  • (8) If free markets are going to solve the distribution problem, they will have to solve the knowledge problem.
  • (9) A key feature of a free market is the price mechanism: the supply and demand-related decisions of the players on the free market create and respond to prices.
  • (10) The price mechanism can solve the knowledge problem.
  • (11) Therefore, free markets can solve the distribution problem.


There is a lot that needs to be said about this argument. The first premise (8) simply applies the general principle underlying Hayek’s argument (i.e. that the distributional problem depends on the knowledge problem). The second premise (9) appeals to a key feature of the free market. As we will see below, it is not a unique feature of the free market (prices can exist on ‘unfree’ markets), but prices do function in a particular way on the free market. The third premise (10) follows this up by highlighting the particular way in which prices function, namely to solve the knowledge problem. ‘Solve’ is a bit strong, of course. Prices are not going to fill every relevant knowledge gap. The idea is, rather, that prices do a better job than a central planner ever could.

So premise (10) is then the key to the whole argument. What can be said in its favour? Three things stand out from Hayek’s discussion:


  • (12) Prices collate information about subjective preferences and tacit knowledge from diverse sources.


The subjective knowledge about how much individual consumers and suppliers value a good or service is encapsulated in the market price. This price is the result of diverse, locally-situated actors coming together and interacting on a marketplace. In other words, it pulls together the diverse perspectives that are difficult to encapsulate in the spreadsheets and statistical data beloved by central planners. We can also argue that the price draws upon the tacit knowledge of the producers and suppliers working on the market. They have the know-how required to produce goods and services and they can communicate the value of that know-how through the price they charge.


  • (13) Prices respond to developments, e.g. changes in preference, new discoveries or innovations in production and so forth.


The individual suppliers and consumers change the prices they are willing to receive and spend in response to local, dynamically updated information. Furthermore, the players on the market are incentivised to do new things in the hope that it will result in higher profits or lower costs. If a new production process is discovered that supplies a good or service at a cheaper cost, this will be fed into the market price, thereby telling people that a new production process is worth taking onboard (and vice versa).


  • (14) Prices communicate relevant information to people, thereby enabling them to know what is and is not worth doing on the market.


One of the big problems facing central planners is that they have to know what needs to be produced and what needs to be supplied, and then communicate this knowledge to the people who make and supply things. This is not easy: you have to somehow draw all the knowledge together and give a quick and easy signal that conveys that knowledge. Prices address this problem with remarkable efficiency. Prices don’t tell you everything you need to know about human wants and needs and how best to meet them. But they do provide a nice, simple, and clear signal of what people want and what methods will best meet those wants. The signal (the price) compresses a lot of information into one place and is readily available to anyone who needs to know it. This helps solve the communication problem that would otherwise arise.
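Hayek offers no formal model of this compression idea, but it can be sketched with a toy market. In the (entirely invented) example below, suppliers’ private unit costs stand in for tacit know-how and buyers’ private valuations stand in for subjective preferences; a single clearing price is then all any agent needs in order to decide whether a trade is worth making. This is my own illustration, not Hayek’s.

```python
# Toy illustration: one clearing price summarises dispersed, private
# information. All names and numbers are invented for the example.

def clearing_price(costs, valuations):
    """Find a price at which mutually beneficial trades are exhausted.

    costs: private unit costs, each known only to one supplier.
    valuations: private willingness-to-pay, each known only to one buyer.
    """
    supply = sorted(costs)                      # cheapest sellers first
    demand = sorted(valuations, reverse=True)   # keenest buyers first
    q = 0
    while q < min(len(supply), len(demand)) and supply[q] <= demand[q]:
        q += 1                                  # one more beneficial trade
    if q == 0:
        return None                             # no trade is possible
    # Any price between the marginal cost and marginal valuation clears;
    # take the midpoint for concreteness.
    return (supply[q - 1] + demand[q - 1]) / 2

costs = [4, 7, 12, 20]        # tacit production know-how, privately held
valuations = [25, 15, 9, 5]   # subjective preferences, privately held
p = clearing_price(costs, valuations)  # p == 11.0 in this example
# Each agent needs only p, not everyone else's costs and valuations,
# to decide whether producing or buying is worth doing: the supplier
# with cost 12 and the buyer with valuation 9 both correctly stay out.
```

The point of the sketch is that the dispersed inputs never have to be collected in one place: the price alone carries the decision-relevant residue of all of them.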

This second portion of the knowledge argument is diagrammed below.



4. Problems and Next Steps
Hayek’s argument is not above criticism. This is to state the obvious. But three criticisms are worth mentioning here by way of conclusion. The first is that market prices will only work their communicative magic if they are undistorted by government interference. For Hayek, prices need to freely respond to changes in local behaviour and to new discoveries if they are going to collate and communicate relevant knowledge. If the government intervenes by setting price floors or price ceilings, this cannot happen. Similarly, if the government imposes additional costs where none should arise, you get further distortions in the knowledge being communicated.

Second, even though Hayek thinks that prices contain a lot of the information needed to solve the knowledge problem, he does not think that they contain all relevant information. This makes his view quite different from modern-day proponents of the efficient market hypothesis (who do think that market prices contain all relevant information). Hayek thinks that the players on the market are constantly trying to achieve some informational advantage over their peers: they are trying to discover new production processes or spot knowledge gaps that others have missed (opportunities for arbitrage). This is both healthy and necessary. It means that markets can encourage innovation and that prices can constantly adapt and update in response to new information. If market prices already contained all the relevant information, it would be difficult to make sense of much market behaviour.

Third, Hayek’s argument overlooks the various ways in which markets can themselves distort prices, either by failing to collate some relevant information or by being hijacked by dominant narratives. Bronk argues that this is more common than we might like to think, particularly in certain markets. This is possibly the most interesting critique of Hayek’s argument and I will look at it in more detail in a future post.

Sunday, November 13, 2016

What is Utopia? The Meta-Utopian Argument




A utopian world is the best of all possible worlds. It is the world that we should want to build; it is the place we should all want to be. And yet when we task our best minds to come up with visions of utopia, they tend to disappoint. They often imagine some squalid commune — like B.F. Skinner in Walden Two — in which conformity is bred into citizens through perfected social engineering. That doesn’t sound like the best of worlds to me. And when we move from imagination to practice, things are often much worse. Those in the grip of a utopian ideology — be it anarchist, communist, transhumanist, Islamist or other ist — are often willing to justify tremendous pain and suffering in pursuit of their vision. The practical line between utopia and dystopia is a thin one indeed.

This leads many to reject utopian thinking. And yet utopia is a philosophically fascinating concept. Is it actually possible to construct a utopian world? Does it make sense to suppose there is a single vision around which we should all rally? In his controversial 1974 book Anarchy, State and Utopia, Robert Nozick presented one of the most interesting and philosophically sophisticated analyses of the concept of utopia. I want to look at that analysis in this post.

The utopian section of Nozick’s book is often ignored. The book as a whole tries to defend a libertarian political philosophy in three parts. In the first part Nozick presents a Neo-Lockean view of rights. In the second part he presents a critique of Rawlsian liberalism. These two parts have generated the bulk of the academic discussion. Much of this discussion is justified: there is plenty of controversial material in the first two parts — enough to last a lifetime of scholarly endeavour. But this has the unfortunate effect of relegating the third part to a relative footnote in academic history. If you asked most people what the core argument in Nozick’s book is, they would probably be able to tell you something about the first two thirds, but not so much about the last.

This is a great shame. Nozick’s analysis of utopia is thoughtful and thought-provoking. In keeping with the main theme of the book, Nozick uses his analysis of utopia as the basis for an argument in favour of the minimal state, but he thinks of this as an independent argument — one whose success or failure is not tied to the success or failure of the first two parts. Furthermore, what Nozick has to say about utopia remains interesting irrespective of your views on the minimal state.

I’ll try to explain why I think this in what follows. I’ll start by outlining Nozick’s ‘meta-utopian’ argument. I’ll then look at some criticisms of that argument. I’m using Ralf Bader’s article ‘The Framework for Utopia’ as my guide to Nozick, but I’m going to present my own reconstruction of the argument.


1. The Meta-Utopian Argument
Nozick’s utopian vision is simply stated: there is (in all likelihood) no single utopian world; the utopian world is, rather, a meta-utopia in which many different worlds can be constructed and joined. The argument for this comes in three steps. The first step is to provide a conceptual analysis of what is meant by ‘utopia’. The second step is to argue that there is no single utopia. The third step is to argue that a meta-utopia is the only stable structure that can accommodate the fact that there is no single utopia.

Let’s start with the conceptual analysis. For Nozick, a utopia is the best of all possible worlds. But what does that mean? Nozick tries to make it more tangible by asking us to imagine we have the power to create any world we like — i.e. the power to construct possible worlds. If we had that power, which world would we construct and what would lead us to call it a utopia? Nozick argues that the utopian world would be the one that is stable, i.e. the one relative to which we could imagine no better world that we would rather be in. This gives us the stability condition:

Stability Condition: A world W is stable if it is judged by its members to be the best possible world, i.e. there is no world they would rather be in.

This, in turn, gives us the stability analysis of utopia:

Utopia: A world W is utopian just in case it is stable.



There are problems with this analysis of utopia, some of which will surface when we consider objections to Nozick’s argument. For now let’s just home in on one controversial element. If you were paying attention, you will have noticed that Nozick’s analysis places internal standards of judgment at the core of what it is to live in a utopia. A world is utopian if it is judged to be the best of all possible worlds. The judgments of the people living in the world are paramount, not the judgments of some external authority nor the application of some objective standard of betterness. Nozick thinks this centrality of internal standards is justified (we’ll come back to this) but it creates problems. If internal standards are what matters, we run into the problem that there is no shared, intersubjective standard of what makes one world better than another. This makes it highly unlikely that there is a single world in which the stability condition is met for all inhabitants of that world. Some may think there is no place they’d rather be than the world they happen to be in; but others are likely to imagine a better world that is just around the corner. This probably tracks with your everyday experience. You’ll have noticed that the criteria you use to judge what makes for a good life are not the same as the criteria used by others. You might think a stable happy family life is what matters whereas others prioritise success in their careers.



Two caveats about this argument. First, note how the argument is not that a single utopian world is impossible, merely that it is highly unlikely. It is possible that everyone’s internal standards of betterness perfectly coincide, but it is not very likely and does not track with what we know about the world. Second, when we say that standards of betterness vary, this does not mean that there are no shared, objective values — i.e. no grounds for agreement on what makes for a good life. There could be such agreement without there being shared standards of betterness. You and I could both agree that success in work and success in family life are important values; we just disagree on their order of priority.

This brings us to the third part of the argument. Since there is probably no single utopian world — i.e. no single world that meets the stability condition — it follows that the closest thing to a utopian world will be a meta-utopian world, i.e. one in which many worlds are possible and in which we are free to build and join, for ourselves, the worlds that meet the stability condition. This meta-utopia is the one that allows a thousand flowers to bloom (to quote an inauspicious source). The meta-utopia is evaluatively thin. No matter what your internal standards for betterness are, you are likely to agree that the meta-utopia is the best chance of realising utopia. The meta-utopia does not presuppose or implement any particular vision of the good life. It simply provides an overarching structure in which multiple conceptions of the good life can be pursued.

This gives us the following, informal, argument for the meta-utopia (this is a rough-and-ready reconstruction of the reasoning to this point - it is not intended to be formally valid):


  • (1) A utopian world is a stable world.
  • (2) A world is stable if it is judged by its members to be the best possible world, i.e. there is no world they would rather be in.
  • (3) The standards by which people judge worlds to be better or worse are internal.
  • (4) People’s internal standards of betterness are unlikely to be shared (i.e. person A may judge W1 to be the best possible world while person B may judge W2 to be the best possible world and so on).
  • (5) Therefore, there is unlikely to be a single stable world (from 2-4).
  • (6) Therefore, there is unlikely to be a single utopia (1 and 5).
  • (7) Therefore, by implication from 2, 3 and 4, the closest thing to a utopian world will be a meta-utopian world, i.e. a world that allows individuals to create and join worlds that conform to their own standards of betterness.


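The premises above can be given a toy formalisation. This is my own illustrative sketch, not Nozick’s (the worlds, people and rankings are invented), but it shows the logical shape of the move from (2)–(4) to (7): once standards of betterness are internal and unshared, no single world satisfies the stability condition for everyone, while a plurality of worlds can satisfy it for each person separately.

```python
# Toy formalisation of the meta-utopian argument. Worlds, people and
# rankings are all invented for illustration.

def stable_for_all(world, rankings):
    """Premise (2): a world is stable if no member ranks any other
    world above it, i.e. it sits at the top of every member's ranking."""
    return all(ranking[0] == world for ranking in rankings.values())

worlds = ["W1", "W2", "W3"]

# Premises (3) and (4): each person's internal ranking of worlds,
# best first, and the rankings are not shared.
rankings = {
    "A": ["W1", "W2", "W3"],
    "B": ["W2", "W1", "W3"],
    "C": ["W3", "W2", "W1"],
}

# Conclusion (5)/(6): no single world meets the stability condition.
single_utopia = [w for w in worlds if stable_for_all(w, rankings)]
# single_utopia == [] here: there is no single utopia.

# Conclusion (7), the meta-utopian move: let each person join the world
# at the top of their own ranking, so every person's stability condition
# is met at once — just not in the same world.
meta_utopia = {person: ranking[0] for person, ranking in rankings.items()}
```

The empty `single_utopia` list is the formal analogue of premise (5), and the `meta_utopia` assignment is the formal analogue of the conclusion: stability is recovered only by multiplying worlds.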
You can probably see how Nozick builds this into a defence of the minimal state. His claim is that the minimal state is the closest thing to a real-world instantiation of the meta-utopia. It is an overarching institutional framework that allows people to create and join associations that are governed by their own preferred values. The minimal state is evaluatively thin: it does not presuppose or implement any particular vision of the good. It simply provides a framework within which utopian associations can flourish.

That’s the basic outline of his utopian argument. Is it any good?


2. The Coercion Condition and the Problem of Imperialism
Let’s start by considering a major objection. This one focuses on the stability condition. You can see the intentions underlying the stability condition. It’s a sop to liberal moral presuppositions. Liberals think that you should be the ultimate arbiter of what is right for you. The ideal world is one that allows you to choose that which matches your preferences.

But no self-respecting liberal thinks that preference-matching by itself is sufficient. We need to ask: where did those preferences come from? Suppose you express a preference for a world in which you get to be a professional dancer. But suppose further that your preference for being a professional dancer was drilled into you from an early age by your overbearing mother. She always wanted to be a professional dancer herself but failed in her ambitions. She is living vicariously through you. Anytime you expressed an aptitude and desire to do something else, she berated you and convinced you that dancing was the way to go. Eventually you came around to her way of seeing things. You took her preferences on as your own. Would we really say that a world in which your preference for being a dancer is met is the best possible world for you?

The problem with such a scenario is that it seems to assign too much normative weight to preferences that might not be authentically yours, i.e. preferences that are the product of coercion, manipulation or brainwashing. This suggests that we need to modify the stability condition by adding an ‘authenticity’ clause:

Stability Condition*: A world W is stable if it is judged by its members to be the best possible world, i.e. there is no world they would rather be in, and their judgments are authentic, autonomously derived reflections of what they truly prefer.

The problem with the additional clause is that it makes the practical realisation of Nozick’s meta-utopia much more difficult. There isn’t any real agreement on what makes a judgment or preference authentic. Typically, liberals draw distinctions between preferences that are manipulated into existence by others, and preferences that are the product of ‘natural’, ‘organic’, or ‘non-manipulative’ forces. But according to some points of view, there is no sharp distinction between organically derived preferences and manipulated preferences. This is particularly true if you deny the existence of libertarian free will and think that all our preferences and desires are the product of causal forces.

It creates another practical problem too. If the meta-utopia needs to filter out worlds that are the result of manipulated or coerced judgments of betterness, then it seems like it entails a paradox, namely: it cannot accommodate those worlds where people’s judgments of betterness require the freedom to impose their will on others. Nozick is aware of this. In his book, he notes that there are three main types of community in the meta-utopia:

Existentialist: These are communities that adopt a pluralistic view as to what makes for the best world, and have no desire to impose any particular conception of ‘bestness’ on others. They are willing to tolerate the multiplicity of worlds that the meta-utopia entails.
Missionary: These are communities that adopt a monistic view as to what makes for the best world and wish to convert everyone to their view of bestness, but they do so through rational debate and persuasion, not through manipulation and coercion.
Imperialist: These are communities that adopt a monistic view as to what makes for the best world and wish to convert everyone to their view of bestness. They are willing to do so through manipulation, coercion and force if needs be.

While a meta-utopian institutional framework could be created that accommodates existentialist and missionary communities, it could not accommodate an imperialist community. The existence of such a community would violate the modified stability condition. You might say this is okay because imperialist preferences shouldn’t be allowed. But if you do so, you start to undercut some of the original appeal of the meta-utopian argument. Remember, the big advantage of that argument was that it didn’t seem to take any particular stance on what made for the best possible world: it allowed people to determine this for themselves. But now it seems like we have to start putting our foot down on some particular conceptions of bestness. This makes the argument less philosophically pure, and more difficult to implement in practice given that there are many imperialist communities already in existence.


3. Why should we focus on internal standards?
One way in which you could resolve the imperialist problem would be to avoid the original sin of the stability condition, i.e. don’t give so much weight to internal standards; use objective standards instead. Of course, this is itself replete with practical problems. What are these objective standards? Who determines what they are? It is those very problems that make the appeal to internal standards quite attractive.

Still, many will feel jittery about the appeal to internal standards and this prompts the question: can we say anything to assuage these jitters? Bader argues that we can in his piece about Nozick. He makes three arguments. The arguments are interesting, but not necessarily mutually consistent.

The first is that there is a strong case for the use of internal standards. Bader thinks that the internalist approach makes for a substantive and theoretically interesting account of utopia, viz. a utopia is a world which is preferred to all other possible worlds by its members. This seems to be both a novel and interesting approach to utopian thinking. Appealing to external standards is less substantive and theoretically interesting. For the externalist, the account of utopia falls out of the particular theory of value to which they adhere. This means all the theoretical and argumentative heavy lifting is borne by that theory. But, of course, debating the merits of particular theories of value is what value theorists have been doing for centuries. So the externalist approach to utopia just replicates centuries of debate about value. The internalist approach, in addition to paying heed to the practical reality of diversity, holds out the promise of providing something different.

The second argument is that there may be plausible grounds for linking internal and external standards. The idea is that a plausible theory of external value should incorporate an endorsement condition, i.e. it should be something that can be endorsed and agreed upon by everyone who is subject to it. Bader explains it like this:

What is objectively best should ideally not be completely disconnected from what the subjects take to be best. The endorsement condition allows us to retain the previously established results in the context of external standards.
(Bader 2011, 21)

The third argument is a practical one. It suggests that epistemic humility is a must when it comes to utopianism. If we have learned nothing from history it is that utopian world-builders often get things wrong and cause great hardship and suffering in the process. We should guard against repeating such mistakes. This means that we probably shouldn’t be too bullish about any particular external theory of value. Even if we think it is on the right lines, we should factor in some element of risk and uncertainty. Bader thinks that if you incorporate this epistemic humility into your externalist theory, you’ll end up with something pretty similar to Nozick’s internalist theory. This is because an epistemically humble approach would require some accommodation to the views of others and some degree of experimentation with world-building.

I’ve gone through these three arguments in some detail because I think they are interesting not just in what they have to say about Nozick’s meta-utopian argument, but also in what they have to say about all arguments in which internalist and externalist approaches to value seem to clash. It may turn out that in many of these cases there is more common ground between the internalists and externalists than we first suppose.


4. Conclusion
I don’t have too much more to say. To briefly recap, Nozick’s theory of utopia is an oft-neglected part of his case for the minimal state. What’s more, the theory holds interest even if you reject his libertarian outlook. Nozick presents an interesting conceptual analysis of what it means to live in a utopian world. He claims that a utopian world is a world that meets the stability condition (i.e. is such that no member of that world can imagine or would want to move to a better world). He then argues that there is unlikely to be a single world that meets the stability condition: people’s judgments as to what is best vary considerably. This suggests that we need a meta-utopia: a world in which people are able to build and join worlds that meet their own stability conditions.

Interesting though this theory may be, it does suffer from some considerable problems, particularly when we try to imagine what it would take to implement a meta-utopia. It seems like stability by itself is not enough. We need to ensure that people aren’t coerced and manipulated into worlds that are not of their choosing. But this, in turn, means that we cannot accommodate people whose utopian preferences require the freedom to coerce and manipulate others.

Suffice it to say that, despite its practical problems, Nozick’s theory does have some interesting repercussions for contemporary discussions about online and virtual communities. You can probably guess what those repercussions might be, but I won’t say anything about them for now. I hope to consider them again in a future blogpost.