Wednesday, March 30, 2016

Informal Group Work in Class: Four Tips

Every year I start with the best of intentions. I promise myself that I will use more interactive methods of teaching in my classes; that I will incorporate group work into at least some of my lectures; that I will encourage students to collaborate and learn from one another; that I won’t simply lecture from the front of the class.

Every year I seem to fail. As the semesters drag on, I become increasingly reluctant to incorporate group work into my teaching. There are two main reasons for this. The first is that it is actually pretty hard to design effective group work exercises, particularly ones that work in the large classes I teach (sometimes with upwards of 150 students). As other pressures pile up during the course of a given year, I find I have less and less time to design such exercises and so I eventually stop using them. The other reason is that if the first attempts don’t go well, I tend to retreat to my comfort zone, which is to just lecture at groups of students. I’m particularly tempted by this retreat in the larger classes, where students are often more reluctant to cooperate and it can be difficult to manage and organise group work.

But I don’t intend for this post to be a confessional — as cathartic as that may be. I’m saying all this merely to underscore the fact that I don’t think of myself as being good at using group work in my own teaching. I struggle with it. But I want to learn how to get better. So I’m going to educate myself in public by reviewing some of the tips and tricks from James Lang’s book On Course: A Week-by-Week Guide to Your First Semester of College Teaching. Along with Ken Bain’s What the Best College Teachers Do, this is one of my favourite books on teaching: it is resolutely practical in its focus, but engages with just enough of the empirical and theoretical literature to give it a firm grounding.

In this post, I’ll focus on Lang’s tips for managing informal group work in the classroom.

1. Defining Informal Group Work and Overcoming Resistance to It
I’ll start with some definitions. Lang draws a distinction between two kinds of group work:

Informal Group Work: This is where you form ad-hoc groups during a class session (lecture, seminar) and get them to perform some task within that class session. The task does not count towards their final grade.

Formal Group Work: This is where you form groups for the duration of a module and get them to perform some task (e.g. group report/presentation) that will require them to meet and coordinate their actions outside of class. The task will count towards their final grade.

The distinction is somewhat procrustean: you could have in-class tasks that count towards the final grade and out-of-class tasks that do not. But I think it is useful nonetheless. In this post, the focus is purely on informal group work. There are two reasons for this. First, it is the type of group work I am more interested in because it is the type I find most difficult to manage and incorporate into my teaching. Second, formal group work has its own challenges — challenges that warrant independent consideration (Lang discusses those challenges in the book; I might do so on another occasion).

Before looking into the practicalities of informal group work, it is worth asking the question: why would you bother? This is something I often ask myself and my propensity to ask it probably drives some of my reluctance to use it in my classes. I am a very reclusive and independent-minded person. Although I do work with others, I generally prefer working by myself. When I was a student, I found that I learned far more by independent research and reading than I ever did in class or in conversation with others. Consequently, I used to dislike group work exercises, finding them to be a waste of time and effort. I still find this to be the case. I am currently taking a course on teaching and learning in higher education that features a good deal of informal group work. When engaging in this group work, I rarely find the exercises to be useful. At most, I think they break up long class sessions and restore concentration.

One of the nice things about Lang’s book is that he recognises and responds to this sort of resistance to group work. Indeed, he suggests that it is very common among academics. On average, the people who become academics are the people who most enjoy learning via independent research and writing (possibly more true in certain humanities subjects than in the sciences). So, somewhat ironically, I may be part of a self-selecting group that is less receptive to this style of teaching. I shouldn’t take my own experiences to be representative of my students. Some people really do enjoy the collaborative mode of learning.

Lang offers three further reasons for adding group work to your classes:

A. Students will end up working in careers that require collaborative work so you may as well prepare them for it.
B. Studies (cited in Lang’s book) suggest that students retain more from collaborative exercises than they do from lectures and, as a nice bonus, tend to give better course feedback when such exercises are included.
C. Knowledge is collaborative anyway: it emerges from a consensus of peers. Group work adapts students to this view of knowledge.

I have mixed feelings about these reasons. There is something to be said for each of them. The first is certainly true: students will have to work in teams in pretty much any career they hope to enter. But I suspect formal group work is better at preparing them for this than informal group work. The second chimes with my experience. Even though I don’t always enjoy the informal group work I do as part of my current course, I do find that I remember some of the conversations I have had with other members of the class during such group work far better than I remember what was said by the lecturers. Furthermore, the student feedback I have received for my own courses suggests that they really do enjoy these kinds of exercises and do give better feedback when they are included. The third reason is too philosophically loaded for me to accept in its current form. Suffice to say I think it gets at something true, but I doubt the value of informal group work in adapting students to this view.

Anyway, enough of the preliminaries. How do you successfully incorporate informal group work into your classes? Lang breaks it down into four main steps. They are illustrated in the diagram below. I will elaborate in the ensuing text.

2. The First Step: Develop the Task
The first step is to develop the task you are going to get the groups to perform. If they are going to be working together during class, then they better be working on something valuable and important. It is far too easy to fall into the trap of setting superficial tasks. I know I have fallen into this trap. You get students to discuss something among themselves for a few minutes, partly to break up the monotony of a lecture, and partly to delude yourself that you are including some meaningful group work into your teaching. This is not the right approach to take.

Effective in-class group work should be concrete and should require the students to produce some sort of definite output within the allotted time. Lang suggests that informal tasks should take a maximum of 20-30 minutes and should require students to produce some sort of written output. The written output need not be elaborate: a sentence or paragraph of text; a diagram; a list of keywords etc. The point of the output is simply to focus their minds on doing something particular; without that focus, students can feel lost and tempted into distraction. He also suggests that the task might work best if it is divided into solitary and group work phases. In other words, you get students to work on their own initially and then, after a definite period of time, get them to work with the members of their group. This advice is echoed by many others (e.g. Eric Mazur in his ‘peer instruction’ model).

I think organising the task around some concrete written output is a good idea. My own forays into group work have foundered when I simply ask students to ‘debate’ or ‘discuss’ a topic among themselves. This often leaves them uncertain as to what they should be doing. My better attempts have required them to do something specific. For instance, one of my more successful informal group tasks required students to read a short article in advance of class and then, in class, identify the major premises and conclusions of the argument presented in that article.

With any task, the devil is going to be in the detail. What you want students to do will vary depending on the discipline and subject you teach, and the time at which you introduce the task. Very generally, Lang suggests that any task you might assign students for homework (or in my case for tutorial work) can be adapted for informal group work. Some examples include:

Getting the students to draw a diagram representing the relationships between the characters in a novel.
Getting the students to identify the major issues and areas of law raised by a legal problem question (i.e. a story about someone’s legal troubles).
Getting the students to identify and (if time permits) query the experimental protocol in a scientific paper.
[As I said above] Getting the students to identify the premises and conclusions in an argument presented in a passage of prose.

The last of these is definitely my favourite type of task because I think it translates to many different disciplines. It also has natural ‘extensions’ built into it. I’ll come back to that later.

3. The Second Step: Form the Groups
Once you have developed the task (which should of course be done in advance of class) you then need to form the groups. There is a surprisingly large literature on the optimal way in which to form groups. Many authors recommend that you ensure diversity and balance within the groups. Lang suggests that this might be more appropriate when doing formal group work and I tend to agree. I think for informal group work you just want a method that won’t take up too much time. The two methods I use are:

Pairing: Get students to pair up with the person sitting next to them or, if you want them to form larger groups, with the three or four people closest to them. This is the simplest and quickest method. I use it whenever time is at a premium (e.g. if the task I want them to perform should take no more than 5-10 minutes). Using a more elaborate method for short tasks seems counterproductive to me because you end up spending as much time forming the groups as the students spend performing the task. That said, pairing obviously has its drawbacks as it can lead to self-selecting groups.

Number Lottery: Go through each seated row of students and assign each student a number up to a given limit (e.g. 1, 2, 3, 4, 5….1, 2, 3, 4, 5). Once you have numbered every student in the class, get them to form groups with those who were assigned the same number. This may be my favourite method of group formation as it has an air of randomness about it. I’ve often used it to establish formal groups too.
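Since the counting procedure is purely mechanical, it can be sketched in a few lines of code. This is just a hypothetical illustration (the function name and the class size are my own invention), but it shows why the method scatters neighbouring students into different groups:

```python
def number_lottery(students, num_groups):
    """Assign numbers 1..num_groups cyclically in seating order,
    then group together everyone who shares the same number."""
    groups = {n: [] for n in range(1, num_groups + 1)}
    for i, student in enumerate(students):
        groups[(i % num_groups) + 1].append(student)
    return groups

# A class of 150 seated in order, split into 30 groups of 5.
seated = [f"student_{i}" for i in range(150)]
groups = number_lottery(seated, 30)
print(len(groups), len(groups[1]))  # → 30 5
print(groups[1][:2])  # → ['student_0', 'student_30']
```

Notice that students 0 and 30 end up together while immediate neighbours do not — which is what gives the method its air of randomness relative to simple pairing.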

Lang suggests that groups shouldn’t be too large, four to five students max. I tend to agree that this is preferable. On a few occasions I have formed larger in-class groups (up to ten students) but it’s definitely messy and at that size it becomes too easy for individual students to hide (or free-ride) within the groups. I’m often tempted to form these larger groups because I teach large classes and if you limit group size to a maximum of 5 you can end up with a lot of groups (30 if your class is 150). This can be a bad thing if you want all the groups to feed back to the class as a whole, but there are ways to avoid this problem (see step four, below).

4. The Third Step: Manage the Groups
Once you have formed the groups and explained to students the purpose of the task, you need to let them at it. At this point, you have to manage the groups to make sure they get some value out of the exercise. Lang suggests that you give them some space initially. Don’t be too eager to jump in and direct their conversations. This seems like obvious and sound advice to me. I like to hang back for the first few minutes of the group task and then go around to each individual group (if feasible) and see how they are getting on. But I think I am often too interventionist in this regard and that I tend to do the work for the students once I get involved. I’m going to try to be less interventionist in the future.

There are four main problems you can encounter at this stage in the process:

Silent Groups: Some groups might fall silent and be unsure about the task. You can usually set them right by clarifying the output they need to produce and by asking them more specific questions.

Silent Members within Groups: This is a common problem. Some students will take a back seat within their groups, allowing others to do most of the work. I’m unsure how important it is to address their silence. Sometimes it is driven by laziness or resentment; sometimes it is strategic. Certain people like to wait before offering their opinions. If you feel someone really is disengaging from the task, you can try to involve them by directing specific questions towards them (e.g. “What did you think of what student X just said?”), or by assigning them the role of official group recorder. This will force them to pay attention.

Off-track Groups: Inevitably, some students will be distracted from the assigned task and start talking about something that is off-topic. You can usually bring them back on track simply by hovering next to them, or by intervening directly (which strategy is more appropriate depends on how exactly they have gone off track).

Fast Groups: Some groups will finish the task with alarming rapidity. You can deal with them by planning for obvious extensions to the initial task. For instance, if you start by getting students to identify the premises and conclusions of an argument in a passage of prose, you can extend the task by asking them to evaluate individual premises or assess the logical strength of the argument. This is a natural (and oftentimes rich) extension and it is one of the reasons why I like this kind of task.

One other point, which I think is important, is that you should time the tasks appropriately. You should allot students enough time to complete the task but not so much that they are tempted to go off track. But once you have set the time limit, you should stick to it. It can be quite annoying to be told by a teacher that you will have 10 minutes for an exercise only for them to call a halt after 7 minutes because they get the ‘sense’ that everyone is done.

5. The Fourth Step: Process and Feedback
Once the allotted time has ended, and students have produced their definite output, you’ll need to process that output in some way and give some kind of feedback. The demand for this in informal group work is less stringent than it would be in the case of formal group work, but it is still important. It will help to combat the sense of pointlessness and futility that some students might feel when they are asked to engage in these tasks (remember: I tend to feel this when I’m asked to engage in them).

Lang suggests three simple methods for processing and giving feedback:

Group Reports: Get each group to report back to the class on the results of their discussion, then offer some comments and feedback. This is the simplest method of processing the outputs, but it has two major drawbacks: it can be overly time-consuming, particularly if you have a large number of groups; and it can be repetitive and boring if groups are all saying the same things. It really only works well if you have a small number of groups or if you randomly select a small number of groups.

Pump-priming: Use the group work as a way of ‘priming the pump’ for larger class discussions. In other words, the specific output should provide the students with the material they need to contribute to a broader discussion about the task (e.g. a discussion about the structure of the argument they were supposed to identify). Once you start that discussion, you simply allow them to add their contributions spontaneously. I think this works nicely when the stakes aren’t too high (as is usually the case with informal group work).

Follow-up Task: Use the group work as a way of preparing for a follow-up task, one that you either get them to perform in-class or as homework/tutorial work.

In some disciplines, a very simple way to provide feedback is simply to provide students with the ‘answer’ to the question/task they were assigned. This allows them to check whether they were on the right track. Obviously, this only works where the discipline lends itself to such feedback. In mathematics, for instance, there will be definitive answers to a problem question. In law (which I teach) this isn’t really true, but oftentimes legal problem questions do lend themselves to general answer outlines that are more correct than other possibilities. Sketching those outlines for the students can help them to check their own progress with the material.

Anyway, those are all the tips on informal group work. Hopefully you find them to be of some use. Writing about them has been useful for me. It has enabled me to identify some of the flaws in my previous strategies and to reduce my own resistance to the practice.

Monday, March 28, 2016

The Evolution of Social Values: From Foragers to Farmers to Fossil Fuels

I was first introduced to the work of Ian Morris last summer. Somebody suggested that I read his book Why the West Rules for Now, which attempts to explain the differential rates of human social development between East and West over the past 12,000 years. I wasn’t expecting much: I generally prefer narrowly focused historical works, not ones that attempt to cover the whole of human history. But I was pleasantly surprised. Morris definitely has a knack for synthesising large swathes of historical data and presenting compelling explanatory narratives. I was particularly impressed by his social development index, a tool for measuring the historical level of social development across different human societies (something explained at great length in his book The Measure of Civilisation). I also enjoyed Morris’s futuristic leanings: he ended the book by speculating about future trends by drawing lessons from the historical ones.

Since my initial foray, I think I’ve read every one of Morris’s ‘popular’ books. His most recent one — Foragers, Farmers and Fossil Fuels — is probably my favourite. Although it may be the flakiest in terms of the empirical data used to back up its central thesis, it is nevertheless the one that comes closest to my own research interests. The book takes the standard Marxist view* — that social values are determined by material culture — and extends it in an effort to explain three different value systems that have dominated human history. The central thesis is that the values expressed and enforced by human societies are primarily a function of the techniques they use for energy capture. There have been three main techniques for energy capture over the course of human history — foraging, farming, and the use of fossil fuels — and hence three main value systems.

The thesis is simple in its general outline, but there is a great deal of complexity in its defence. Morris acknowledges that the three value systems he describes are ‘ideal types’. Actual historical human societies vary greatly in the particular values they express. Nevertheless, he maintains that these variations can be grouped into these general types — exceptions to the categories often tell us something important that reinforces the utility of the general category. And, as in his other books, the real strength of Morris’s work is his ability to assemble a wealth of data on the different types of society to back up his main claims. If you want a readable and well-researched overview of human social evolution, this is about as good a book as I have read on the topic. It also contains critical rejoinders to Morris’s claims, along with a further response by him, so it is not one-sided.

That’s all by way of introduction. In this post, I want to do something relatively modest. I want to describe the three main value systems that Morris identifies in the book. I cannot hope to do justice to the detail of Morris’s actual account — you will need to read the book for that — but I can hope to share what I think is an interesting way of categorising and understanding human society. This is a useful exercise for me because I am hoping to use some of Morris’s insights in my own work about future governance systems and their values (more on that another time). In what follows, I’ll go through each of the three types of society and describe their value systems.

1. Foraging Societies and their Values
Foragers capture energy by hunting and gathering. That is to say, they hunt and kill wild animals, and they gather wild plants. They then consume both to supply themselves with the calories they need to get through the day. They also use animal and plant products to build the shelters and clothes to enable them to survive in different climates. Foraging societies vary considerably (some ethnographers refer to the ‘foraging spectrum’) but most of the variation is explained by differences in geographical location. For example, in tropical climates, most energy is procured from plants; in colder, polar climates, animals are the main source of energy.

Foraging societies share a number of key features. They are generally small groups of people and they move about a lot. Modern foraging groups usually consist of tribes of up to around 500 people, but most individuals spend their days with two to eight closely related people (Morris 2015, 30-31). Foraging groups are close-knit, linked by kinship relations. Foraging communities have very low population densities, typically less than one person per square mile. Foraging societies that buck these trends are able to do so because they live in regions of relative abundance, i.e. the local animal or plant population is sufficiently abundant to support larger groups of people.

What kinds of values do foraging societies have? Let’s start with a definition. For present purposes, I’ll define ‘values’ as biases in behaviour and understanding. This is a descriptive definition, not a normative one. A group of people can be said to value X if their behaviour is biased in favour of X, they try to punish or discipline people who deviate from X, and if they express approval or fondness for X. This descriptive approach to values fits with the perspective of Morris’s book. How do we know what foragers value? Morris admits that the evidence isn’t great. There are three main sources: (i) archaeological evidence, which is usually silent about values; (ii) ancient historical accounts, which are usually biased; and (iii) modern ethnographic studies. The latter are the best source of data but they have to be treated with some scepticism. Modern foraging societies have been exposed to farming and fossil fuel societies. This is likely to ‘contaminate’ the set of values they espouse. They are not like their historical predecessors who never encountered farming or fossil fuels.

With those limitations in mind, Morris investigates the values of foraging societies in four main domains: (i) attitudes toward violence; (ii) political inequality; (iii) wealth inequality; and (iv) gender inequality. He uses the same four domains in his analysis of farming and fossil fuel societies. Here is a brief summary of his interpretation of the data:

Violence: Foraging societies usually have a ‘middling’ attitude toward violence. They view it as a necessary means toward solving certain types of social and inter-tribal conflict. In support of this, Morris cites evidence on rates of violent death in foraging societies. Most such societies are small, but the rate of violent death seems to be far higher (per capita) than it is in modern fossil fuel societies. To be clear, it is not that members of these societies favour or condone violence; it is simply that they acknowledge situations in which it is acceptable, e.g. violent raids on rival groups, cycles of tit-for-tat revenge killings and so on.

Political Inequality: Foraging societies are generally flat in terms of political inequality. They do sometimes adopt leaders, but leadership is often temporary and such societies favour consensus decision-making. Some studies — such as Richard Lee’s study of the !Kung San — show how foraging societies actively reinforce the lack of political hierarchy. If anyone tries to assert authority over the group, other members resort to mockery, ostracism, blunt criticism and, in extreme cases, exile in order to prevent them from being successful.

Wealth Inequality: Foraging societies are generally flat in terms of wealth inequality. The Gini coefficient — which is a way of measuring inequality of wealth distribution with 0 representing perfect equality and 1 representing perfect inequality — for foraging societies averages at 0.25, which is relatively low (by comparison with farming and fossil fuel societies). There are good reasons for this: resources are scarce and often shared among group members in a form of ongoing reciprocal altruism; and foragers move around a lot and are consequently unable to accumulate much material wealth. There are some exceptions to this (e.g. groups in Sungir in east Russia and North America’s Pacific Coast) but this is usually when foragers live in regions of abundance. ‘[N]o subgroup within a foraging society has ever set itself up as a rentier class that owns the means of production’ (Morris 2015, 38).

Gender Inequality: Foraging societies have some noticeable gender inequalities. There is usually a gendered division of labour — men hunt and do most handicrafts; women gather, prepare food and do some handicrafts. It is also usually taken for granted that men should be in charge in such societies. This is arguably because men are the source of meat and violent protection and women have to bargain for these things. That said, the gender hierarchies are not steep and, compared to farming societies, foragers tend to have more relaxed attitudes toward premarital virginity and marital fidelity.

I have summarised all this in the image below.
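Since the cross-society comparisons lean on the Gini coefficient, it may help to see how the figure is computed. Here is a minimal sketch using the mean-absolute-difference formulation (the wealth figures are invented purely for illustration):

```python
def gini(wealths):
    """Gini coefficient of a wealth distribution: 0 = perfect
    equality, 1 = perfect inequality (mean-absolute-difference form)."""
    n = len(wealths)
    mean = sum(wealths) / n
    # Sum of absolute differences over all ordered pairs of members.
    total_diff = sum(abs(x - y) for x in wealths for y in wealths)
    return total_diff / (2 * n * n * mean)

print(gini([1, 1, 1, 1]))   # → 0.0 (everyone holds the same share)
print(gini([1, 1, 1, 17]))  # → 0.6 (one member holds most of the wealth)
```

By this measure, a foraging band’s average of 0.25 signals a relatively flat distribution, while the farming-era average of roughly 0.48 reflects a much steeper one.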

2. Farming Societies and their Values
Farmers get their energy from domesticated plants and animals. In other words, instead of moving around to find a natural environment that enables them to survive, they try to manipulate and control their environment in order to supply them with the energy they need. Farming societies tend to be much larger than foraging societies, sometimes growing to encompass empires of millions of people. They also tend to be static, expanding out from some stable geographical core.

There is huge diversity in farming societies. Morris suggests that one useful way to think about it is to use a three-pointed star to visualise the different types of farming societies. At one point, there are the horticulturalists, who are effectively just slightly more sophisticated foragers using food cultivation techniques. They have limited supplies of domesticated plants and animals and continue to live much like foragers. At the second point, you have protoindustrial nations/empires, which are very large social organisations using complex methods of domestication and having elaborate legal and bureaucratic systems. They were still standing at the dawn of the fossil fuel age. Then, at the third point, you have commercial city states like ancient Athens or medieval Venice, which were urban centres of trade and commerce for farming communities. At the centre you have what Morris calls ‘peasant societies’, which are the ideal type of farming society. Peasant societies are noteworthy for one main reason: they consist of a large underclass of agricultural labourers who do the main business of energy capture, topped by a ruling elite. This already tells us something interesting about the values of such societies.

The history of the agricultural revolution is fascinating and is recounted in some detail in Morris’s book. He explains how the domestication of plants and animals first arose in certain geographical regions (the Lucky Latitudes) and how farming then spread from those regions. He also explains the differences in average hours worked when you compare farming societies to foraging societies. I won’t go into that historical detail here. What is noteworthy for present purposes is simply how farming enabled a massive ramping-up in our ability to capture energy from our environment. The most successful foraging societies typically captured about 5,000 kcal per person per day. The most successful farming societies peaked at around 30,000 kcal per person per day. This enabled much larger populations and much higher population densities. This forced innovations in social organisation, which in turn led to a shift in values:

Violence: Farmers have a somewhat ambivalent attitude toward violence. Farming societies require a great deal of cooperation and coordination among the peasant class in order to ensure adequate energy capture. They consequently tended to shun interpersonal violence as a means of resolving disputes. The state, whether a God-like king or a political class, was deemed to be the legitimate user of force. It could use force to pacify peasant labourers and conquer new lands. That said, there were violent uprisings against the state if it was felt that it was not exercising its power legitimately.

Political Inequality: Farming societies have steep political inequalities. People within such societies are often obsessed with rank and class. That said, there was much more innovation in political organisation across farming societies than there was across foraging societies. The classic political organisation involved a God-like ruler (often explicitly recognised as a god) sitting atop a ruling aristocracy, propped up by a large peasant class. Some societies adopted a more bureaucratic or democratic leadership, though there was often movement back-and-forth between modes of political organisation. Alleged exceptions to the steep hierarchy often prove the rule. Athens is usually the go-to example. It was a democracy — indeed, the birthplace of democracy — but only for a privileged group of wealthy male land- and slave-owners. Athens was also unique insofar as it was a commercial trading port situated within a broader farming society.

Wealth Inequality: Farming societies have steep wealth inequalities. Indeed, virtually all farming societies relied upon slavery. A large underclass of forced labourers was used to prop up an elite and often extremely wealthy upper class. The Gini coefficient in farming societies averaged at about 0.48, which is higher than what we currently have in the Western world. Morris gives some vivid examples of this. The most interesting is probably that of C. Caecilius Isidorus, a wealthy Roman, whose will survives to this day and contains a list of all the property he owned. It included enough cash to feed 500,000 people for a year. Property ownership became important in these societies because land was the primary agricultural resource. Laws were put in place to protect such ownership. People could now accumulate wealth, keep it in their families and use it to further distinguish themselves from others.

Gender Inequality: Farming societies have significant gender inequalities. Morris argues that this is down to the gendered division of labour that emerged early on in agrarian societies. Possibly because of men’s generally greater upper-body strength, outdoor activity (tending to animals and crops) became men’s work whereas indoor activities (food preparation, home care, childcare) became women’s work. Farming societies could support a lot more people and so women started having more children. This consequently led to women spending more of their adult lives involved in childcare related activities. They had little time or opportunity for anything else. This in turn created systems of norms that reinforced the gendered view of the world. Because of the importance of family and property, societies became obsessed with female sexual purity and fidelity.
A simple way to think of the values of farming societies is in terms of the ‘Old Deal’. This is something that is described at length in Morris’s book. In essence, it was a general theory of who belonged where in the world. The idea was that there was some ‘natural’ order in society. Some people belonged in certain roles (e.g. slaves were best-suited to be slaves; kings were best-suited to be kings). The caste system is the classic instantiation of this worldview. Attempts to deviate from this natural order were treated with suspicion and hostility.

3. Fossil Fuel Societies and their Values
Fossil fuel societies get their energy from…well…from fossil fuels. The vast majority of us (certainly anyone reading this blog) live in fossil fuel societies. These societies got started in the mid-18th century in northern Europe. The invention of the steam engine is usually pinpointed as the spark that ignited the fossil fuel revolution. It was the first in a series of leaps in fossil fuel energy capture: subsequent leaps came with the harnessing of electricity, the combustion engine and, eventually, non-fossil fuel energy sources like nuclear power.

Unlike the farming revolution, the fossil fuel revolution did not start in several different places at several different times. It started only once. The reason is that once the fossil fuel method of energy capture was mastered, the social systems that mastered it managed to project their power globally, eventually colonising much of the known world. This is explored in great detail in Morris’s earlier work Why the West Rules - For Now. There is diversity in the organisation of fossil fuel societies — we see that to some extent today — but as Morris points out there have really been two major forms of social organisation: liberal forms, which prioritise individual freedom and autonomy and facilitate democratic politics; and illiberal forms, which prioritise top-down control and often limit political participation. The 20th-century competition between liberalism, fascism and communism suggests to Morris that liberal forms are generally more sustainable.

The rise of fossil fuel societies brought with it another massive ramping-up in energy capture. Where farming societies peaked at around 30,000 kcal per person per day, industrial societies in the West were averaging over 230,000 kcal per person per day by the 1970s. That number is continuing to grow and energy capture is equalising between the East and West. This has in turn facilitated much larger populations and much higher population densities. The largest cities in farming societies tended to have around 1 million people living in them. Today, the largest city in the world (Tokyo) has over 38 million people living in it.

We have much more evidence for the values of people in fossil fuel societies. We live in such societies and so we have a sense of their values ourselves; we can infer the values from social and political organisations; and polling groups such as Gallup regularly conduct worldwide surveys of these values. Indeed, there is possibly too much evidence to categorise. There is also complexity in the picture because many societies inherit pre-existing value systems (particularly the values from farming societies) via their cultures, laws and institutions. Nevertheless, Morris argues that there are some clear trends emerging:

Violence: Fossil fuel societies are opposed to violence. There is very little tolerance for interpersonal violence (Morris cites poll data in which the majority of people claim to be total pacifists in their daily lives) and increasingly less tolerance for political or state violence. There is some recognition that state violence is sometimes necessary, but it is generally to be avoided at all costs. The antipathy toward violence is reflected in some studies which suggest a declining rate of violence across the developed world (Pinker’s The Better Angels of Our Nature being the most famous work on this topic).

Political Inequality: Most fossil fuel societies are politically flat, at least in theory. There is no God-given, normatively validated political ruling class. There may be de facto political elites, of course, but most people express opposition to this idea. Furthermore, most regimes express fealty to the idea that everyone is equal before the law and is entitled to the same rights and protections. Indeed, the transition to fossil fuel societies has often been marked by opposition to pre-existing Old Deal political hierarchies. The most notable form of opposition was probably the abolition of slavery and the rejection of the view that some people naturally deserve to be slaves.

Wealth Inequality: Fossil fuel societies have an ambivalent attitude toward wealth inequality. Some regimes have tried to stamp out such inequality through forceful redistribution (communist regimes being the classic example of this); others tolerate it in an effort to incentivise economic activity. Morris suggests that the compromise position that has emerged in liberal Western societies seems to be winning out across the world: equality of opportunity is encouraged, but not equality of outcome. State taxation and redistribution are then used to correct for the worst excesses of wealth inequality. He also cites data suggesting that the most sustainable level of wealth inequality for fossil fuel societies seems to be a Gini coefficient of between 0.25 and 0.35. Since 1970 the Gini scores within Western countries have been rising, but global wealth inequality has been falling.

Gender Inequality: Fossil fuel societies have become intolerant of gender inequality, though it has been something of a struggle. Morris argues that the technologies supported by the fossil fuel revolution eventually broke down the rationale for the pre-existing gendered division of labour. Muscle power was no longer so important; brain-power became key. Contraception allowed women (and men) to control the number of children they had. Various other technologies reduced the burden of housework (e.g. automated cleaning equipment). Of course, gendered stereotypes and attitudes remained long after the dawn of the fossil fuel age (and linger to this day) but there is widespread recognition that they are undesirable.

I should note here that the chapter on fossil fuel societies is one of the longest in Morris’s book and he explores the nuances and the evidential basis for his claims about values in a lot of detail. I’m skipping over virtually all of that in my summary.
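Since the Gini coefficient does a lot of work in Morris’s comparisons, it may help to see how it is actually computed. The following is a minimal sketch using the standard closed-form formula over sorted incomes; the function name and example figures are mine, purely for illustration:

```python
def gini(incomes):
    """Gini coefficient of non-negative incomes: 0 = perfect equality, ~1 = one person owns all."""
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    # Standard formula: G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n,
    # with incomes sorted ascending and i running from 1 to n.
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2 * weighted / (n * total) - (n + 1) / n
```

A perfectly equal society (everyone earns the same) gives 0; a society where one household holds everything approaches 1 as the population grows. Morris’s farming-era figure of roughly 0.48 sits between the two, but closer to the unequal end than today’s Western scores.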

4. Conclusion
As I said at the outset, I think this is an interesting way of describing and categorising the evolution of human values. As set out by Morris, it seems to fit the data, though I’m not well-versed enough in the empirical minutiae to dispute what he says.

By way of conclusion, I should say something about why Morris thinks that these changes have taken place. Obviously, he thinks that the changes in techniques of energy capture are the root cause of the changes in values, but he does have a more elaborate explanatory framework. I don’t have time to cover it in great detail here, but in broad outline it all hangs on the relationship between energy capture and population size and density. In essence, he thinks that changes in energy capture encouraged changes in population size and density, which in turn forced changes in social organisation, which encouraged experiments in different value systems. Social organisations that adopted particular sets of values tended to do better than those that adopted alternative values, which eventually led societies to settle down into the general patterns outlined above. This sounds vaguely plausible, but of course it is very difficult to test.

I want to close with one final image, taken from Morris’s book. This is his ‘reductionist, simplifying and doubtless distorting’ attempt to compare the value systems of all three societies. It focuses on the different attitudes toward violence and inequality in those societies (i.e. whether they view violence or inequality as good or bad things). There is something interesting about the patterns this diagram reveals. Take a look:

Did you notice the pattern? Farming societies stand apart: their values differ more from those of foraging and fossil fuel societies than the values of those two differ from each other. They are, in a sense, the societies with values most alien to our own. Morris suggests that some of the contemporary clash of civilisations can be understood in terms of cultures that continue to cling to agrarian value systems in the face of fossil fuel imperialism.

Morris has some interesting speculations about what all this means for the future as we transition to a post-fossil fuel society. I have some thoughts on this too. I hope to outline them another time.

*Morris doesn’t explicitly endorse the Marxist view in his book - he relies more on the work of evolutionary theorists like Boyd and Richerson - but there is nevertheless some affinity with the Marxist view.

Saturday, March 26, 2016

Are we heading towards a singularity of crime?

On 8 August 1963, a gang of fifteen men boarded the Royal Mail train heading from London to Glasgow. They were there to carry out a robbery. In the end, they made off with £2.6 million (approximately £46 million in today’s money). The robbery had been meticulously planned. Using information from a postal worker (known as “the Ulsterman”), the gang waylaid the train at a signal crossing in Ledburn, Buckinghamshire. They put a covering over the green light at the signal crossing and used a six-volt battery to switch on the red light. When one of the train’s crew went to investigate, they overpowered him and boarded the train. They used no firearms in the process, though they did brutally beat the train driver, Jack Mills. Most of the gang were arrested and sent to jail, but the majority of the money was never recovered. It became known as the ‘Great Train Robbery’.

In November and December 2013, the US retailer Target suffered a major data breach. Using malware made by a 17-year-old Russian hacker, a criminal gang managed to steal data (including credit card numbers) from over 110 million Target customers. The total cost of the breach is difficult to estimate. Figures suggest that the criminals made up to $54 million selling the credit card data on the black market; the breach is likely to have cost financial institutions around $200 million in cancelling and reissuing cards (Target itself has entered into settlements with credit card companies costing at least $67 million); it had a significant impact on Target’s year-end profits in 2013; and the company promised to spend over $100 million upgrading its security systems.

So in fifty years we went from a gang of 15 meticulously planning and executing a train robbery in order to steal £2.6 million, to a group of hackers using malware manufactured by a single Russian teen, stealing customer data without having to leave their own homes, with an estimated cost of over $350 million.

These two stories are taken from Marc Goodman’s eye-opening book Future Crimes. In the book, Goodman uses the dramatic leap in the scale of criminal activity — illustrated by these two stories — to make an interesting observation. He argues that the exponential growth in networking technology may be leading us toward a ‘crime singularity’. The phrase is something of a throwaway in the book, and Goodman never fully explains what he means. But it intrigued me when I read it. And so, in this post, I want to delve into the concept of a crime singularity in a little more depth. I’ll do so in three phases. First, I’ll look to other uses of the term ‘singularity’ in debates about technology and see if they provide any pointers for understanding what a crime singularity might be. Second, I’ll outline what I take to be Goodman’s case for the crime singularity. And third, I’ll offer some evaluations of that case.

1. What would a singularity of crime look like?
I’m going to start with the basics. The term ‘singularity’ is bandied about quite a bit in conversations about technology and the human future. It originates in mathematics and physics and is used in those disciplines to describe a point at which a mathematical object is not well-defined or well-behaved. The typical example from physics is the gravitational singularity. This is something that occurs in black holes and represents a point in spacetime at which gravitational forces approach infinity. The normal laws of spacetime break down at this point. Hence, objects that are represented in the central equations of physics are no longer well-behaved.

The mathematician, computer scientist and science fiction author Vernor Vinge co-opted the term in a 1993 essay to describe something he called the ‘technological singularity’. He explained this as a hypothetical point in the not-too-distant human future when we would be able to create superhuman artificial intelligence. In this he was hearkening back to I. J. Good’s famous argument about an intelligence explosion. The idea is that if we manage to create greater-than-human AI, then that AI will be able to create even greater AI, and pretty soon after you would get an ‘explosion’: ever more intelligent AI being created by previous generations of AI. Vinge suggested that at this point the ‘human era’ would be over: all the concepts, values and ideas we hold dear may cease to be important. Hence, the point in time at which we create the first superintelligence is a point at which everything becomes highly unpredictable. We cannot really ‘see’ beyond this point and guess what the world will be like. In this sense, Vinge’s singularity is akin to the gravitational singularity in a black hole: you cannot see beyond the event horizon of the black hole, and into the gravitational singularity, either.

Ray Kurzweil took Vinge’s idea and expanded upon it greatly in his 2005 book The Singularity Is Near. He linked it to exponential improvements in information technology (originally identified by Gordon Moore and immortalised in the eponymous Moore’s Law). Using graphs that depicted these exponential improvements, he tried to predict the point in history when we would reach the prediction horizon, settling on the year 2045. Kurzweil’s imagined singularity involved the fusion of man with machine as well as the creation of superhuman artificial intelligence. One of his infamous graphs is depicted below.

Drawing on the work of Vinge and Kurzweil, I think it is fair to say that the term ‘singularity’, when used in debates about technology, appeals to one or both of the following:

Exponential Growth: The improvements in some technology (or related phenomenon) are exponential, i.e. they appear slow and almost linear at first but then enter a phase of rapid takeoff (e.g. the doubling in thickness each time you fold a piece of paper in half). This may eventually be followed by a levelling-off or plateau, resulting in an ’S-curve’. The classic example in technology is Moore’s law, which describes the doubling in the number of transistors that can be put on an integrated circuit every two years.

Prediction/Control Horizon: Once the improvements enter their rapid takeoff phase, it becomes almost impossible to predict, understand or control some phenomenon of interest. In the intelligence explosion case, the oft-expressed fear is that it will become impossible to control the superintelligent AI and that this AI may act in a way that is contrary to what we value.
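The paper-folding example above is worth making concrete, because it shows how quickly doubling escapes intuition. A minimal sketch (the 0.1 mm sheet thickness is my assumed figure):

```python
def folded_thickness_km(folds, sheet_mm=0.1):
    """Thickness in km after folding a sheet in half `folds` times: it doubles each fold."""
    return sheet_mm * 2 ** folds / 1_000_000

# Growth looks negligible at first, then explodes:
# 10 folds is about 10 cm, but 42 folds already exceeds
# the Earth-Moon distance (~384,400 km).
```

The first ten doublings barely register; the next thirty take you past the Moon. This is the shape of the curve that both Kurzweil’s graphs and Moore’s law trade on.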

I take it that references to a ‘crime singularity’ must involve similar appeals. In particular, I take it that there must be some reference to the exponential growth in a technology or related phenomenon (though the ‘exponential’ nature of this growth may be more metaphorical than real), and that this must lead to some unpredictability or lack of control when it comes to criminal activity. Is this actually the case? Let’s look at Goodman’s claims.

2. The Case for a Crime Singularity
Crime is a tricky concept. There are many theories about what makes something criminal and debates about whether certain activities should be criminalised. I’ve explored some of them in my previous work. I want to keep things simple here and so I’ll stick to two main categories of crime: theft (i.e. the stealing of property and identity) and violent attack (including terrorist acts, murder and so forth). Goodman’s case for the crime singularity focuses on these types of activity so limiting ourselves to these two categories is not too debilitating.

What then is the crime singularity? The technological basis for it seems reasonably clear. It is the rapid growth in networked technology or, more simply, connectivity. Every computer in the world is now, ostensibly, capable of communicating with every other computer. We rely on computer-based systems to carry out many day-to-day transactions: financial and credit card records are stored on the systems held by major banks and retailers. We also rely on computerised control systems to manage much of the critical infrastructure in society, from electricity to water to public transport. If the internet of things (IoT) takes off, there will be an even more rapid increase in connectivity. Every ‘thing’ in the world will become connected to the internet. This growth in connectivity may or may not be exponential (I haven’t plotted it mathematically) but it certainly seems like it is. I discussed this in a previous post on the internet of things.

What are the actual implications of this exponential growth in connectivity for criminal activity? Two are highlighted in Goodman’s work. The first is the impact on the scale of criminal activity. With near-total connectivity it becomes possible for relatively small criminal gangs to target more and more people. This is highlighted by the opening stories contrasting the Great Train Robbery with the Target data breach. In fifty years the scale of theft increased dramatically. A single attack can affect hundreds of millions of people. The Target breach is far from being the largest in history. There have been larger breaches since 2013. The second implication has to do with vulnerability to crime. Goodman doesn’t define this concept precisely in the book, but I think it could be defined to mean ‘lifetime risk of being a victim of crime’. The claim in this respect is that once “everything is connected, everyone is vulnerable” (2015, 69). The lifetime probability of being a victim effectively approaches 1.
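Goodman’s claim that the lifetime probability of victimhood “effectively approaches 1” can be given a simple probabilistic reading (my gloss, not his): if each connected service you use carries some small, independent annual chance of being breached, the chance of escaping every breach over a lifetime shrinks toward zero. A sketch, with purely illustrative numbers:

```python
def lifetime_victim_probability(services, annual_breach_rate, years):
    """P(at least one breach touches you), assuming each service is an
    independent exposure with the same annual breach probability."""
    p_escape_one_year = (1 - annual_breach_rate) ** services
    return 1 - p_escape_one_year ** years

# With 50 connected services, a 1% annual breach rate each, over 40 years,
# the probability of escaping entirely is vanishingly small.
```

The independence assumption is crude, but the qualitative point survives: as connectivity (the `services` count) grows, the escape probability is driven toward zero exponentially fast.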

You can plot these two consequences of increased connectivity on graphs similar to those used by Kurzweil. I have done this below. These are very rough-and-ready. They are not intended to actually represent any real data or to predict when we will reach the crime singularity. Rather, they are intended to give a sense of the relationship between the variables that seems to be animating Goodman’s concerns (connectivity and scale; and connectivity and vulnerability).

Assuming the relationships depicted on these graphs are accurate, do they point to a crime singularity? Do we reach some sort of prediction/control horizon when connectivity crosses a certain threshold? Maybe. Once connectivity is absolute, it may become almost impossible to predict when and where criminal activity is taking place, and to stop it from happening. We might all be permanent and potential victims of crime. We might then live in a radically different world. The era of criminality we have grown to know and love will have come to an end.

3. Evaluating the Case for the Crime Singularity
I find the idea of a crime singularity fascinating. I think it is plausible to believe that connectivity leads to the kinds of scale and vulnerability problems Goodman mentions, and I think this does lead to a different reality. But I’m not sure how radically different it is and I’m not sure if the analogy with the technological singularity holds. I want to close with three critical comments.

First, let’s deal with an obvious point. Connectivity has already brought considerable benefits to our lives, mainly in terms of convenience, access to knowledge/goods/services/expertise, and efficiency. Many tout the benefits of increased connectivity through the internet of things. The question is going to be whether these benefits outweigh the putative costs of increased vulnerability and scale. I think we’re already voting with our feet (or our actions) in this regard. Although Target may have suffered some reputation damage in the aftermath of the 2013 data breach, I doubt that it has stopped people from shopping there or from continuing to share their personal information via computer networks. That may not mean too much — one of the central lessons of Goodman’s book is that we don’t really appreciate how vulnerable we are — but this lack of appreciation is itself significant. It suggests that people are continuing on as they were in the face of these threats. People may simply adapt to the constant threat. They may treat it like any other mundane risk.

Second, some people may wonder why we can’t simply build greater security into our connected technologies in order to combat the problems of vulnerability and scale. They might argue that these technologies are a double-edged sword. They make us more vulnerable but they also increase our ability to detect and respond to crime. Better surveillance and monitoring via sensory devices will allow us to identify and respond to likely threats with greater ease. Better firewalls will allow us to keep the hackers out. There may be some room for optimism on this front, but two important caveats should be issued. The first is simply that building better security systems is exceptionally difficult. As Goodman points out, there is a serious asymmetry of power when it comes to the relationship between the system designers who are building the defences and the hackers:

Asymmetry problem: ‘[T]he defender must build a perfect wall to keep out all intruders, while the offense need find only one chink in the armor through which to attack.’ (Goodman 2015, 44)

And, of course, it is essentially impossible to build a perfect wall. This is exacerbated by the increasing complexity of the underlying technology. Goodman illustrates this vividly by reference to the number of lines of code (LOC) needed to build modern software systems. The software used in the Apollo moon missions contained only 145,000 LOC; Microsoft Office 2013 contained 45 million LOC. Each new line of code represents a new potential site for a hack. In other words, the asymmetry problem grows with the complexity of the technology.
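One way to make the LOC point concrete (the defect rate here is a hypothetical figure for illustration, not Goodman’s): software engineering folklore puts shipped code at somewhere on the order of one to twenty-five defects per thousand lines, so even the most conservative rate multiplies out alarmingly as codebases grow.

```python
def expected_defects(lines_of_code, defects_per_kloc):
    """Back-of-envelope estimate: potential flaws scale linearly with code size."""
    return lines_of_code * defects_per_kloc / 1000

# At a charitable 1 defect per KLOC:
apollo = expected_defects(145_000, 1)       # Apollo-era guidance software
office = expected_defects(45_000_000, 1)    # a modern office suite
```

The defender must close every one of those potential chinks; the attacker needs only one. On this rough arithmetic, the attacker’s search space grew by a factor of about 300 between the two systems.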

The other problem for those who put their faith in better security is that it will come at a cost. We could indeed use the machinery of the IoT to surveil and respond to crime, but doing so carries a significant cost to our privacy and autonomy. Are we willing to incur that cost? That’s a question we are currently asking ourselves.

This brings me to the final critical comment. Some people might be pretty sceptical of everything that has been said in this article. While they may accept that we are heading towards greatly increased connectivity, they may doubt that this changes things all that much. Surely we are already always potential victims of crime? Every time I leave my house (or stay in it) I am a potential victim of crime. Someone could attack me, break into my property, steal my stuff and so on. Does anything really change with increased connectivity? Similarly, the scale issues that Goodman mentions are nothing new. Since the dawn of the atomic bomb we have had technology with the potential to dramatically alter the future of life for all of humanity. Is anything different now?

I think the answer is ‘yes, sort of’. I agree that we are already always vulnerable; and I agree that the scale of potential damage has been high for quite some time. What seems to be different with the advent of increased connectivity is that the means and opportunity for criminality have increased. If everything is connected to everything else; and hacking techniques and malware are readily shared online; then everyone (in theory) has the potential to commit a crime on everyone else.

Thursday, March 24, 2016

Blockchains and DAOs as the Modern Leviathan

In 1651, Thomas Hobbes published Leviathan. It is arguably the most influential work of political philosophy in the modern era. The distinguished political theorist Alan Ryan believes that Hobbes’s work marks the birth of liberalism. And since most of the Western world now lives under liberal democratic rule, there is a sense in which we are all living in the shadow of Leviathan.

The central idea in Hobbes’s book is nicely summed up in its opening passages:

Nature (the Art whereby God hath made and governes the World) is by the Art of man, as in many other things, so in this also imitated, that it can make an Artificial Animal… For by Art is created that great LEVIATHAN called a COMMON-WEALTH, or STATE (in latine CIVITAS) which is but an Artificiall Man; though of greater stature and strength than the Naturall, for whose protection it was intended.
(Leviathan, Introduction)

The gist of this passage is that the state (or the system of social governance) is an artificial being (an “artificiall man”), created through the will of natural human beings. This is represented metaphorically in the famous frontispiece to the book (pictured above). The giant man, overlooking the land, is the artificial man (the sovereign or the Leviathan). If you look closely you can see that this man is made up of lots of smaller people. So he is, in the picture, literally constructed out of the people themselves. In practice, this artificial being consists of all the laws and institutions of government. This being enforces the laws through its institutions thanks to its monopoly on violence. To Hobbes, giving the artificial being a monopoly on violence was essential because it enabled us to avoid the war of all against all that emerges in the ‘state of nature’.

Elements of Hobbes’s political philosophy are objectionable, but the central idea — that the state is an artificial being governing through laws and a threat of violence — rings true to me. And what is particularly interesting about this idea is how adaptable it is. Whereas Hobbes’s Leviathan ruled in an era of primitive weaponry and limited technological development, modern day Leviathans rule through a much more sophisticated technological arsenal.

What I want to consider in this post is how new technologies make possible new forms of the Leviathan. In particular, I want to consider the potential for Leviathans to emerge from novel uses of blockchain technology. I’ll explain the ideas as I go along. To start off, I want to go back in time to Hobbes’s Leviathan and consider the rationale and the moral underpinning to it. After that, I will consider how blockchain technology could develop a new form of Leviathan.

1. The Logical and Moral Structure of Hobbes’s Leviathan
Hobbes developed his conception of the Leviathan from a particular understanding of the problems of social morality. He noted that, in their essence, all men (and women) were free and equal. They each had the natural freedom to pursue what they wanted; and they each had roughly equal natural capacity to achieve their aims (intelligence compensating for a lack of strength). On top of this, they all competed for scarce resources of time, food, wealth and so on. This gave rise to a ‘Trust Problem’, which Hobbes felt all successful forms of government needed to solve.

The Trust Problem can be illustrated using three simple thought experiments. Only one of these actually comes from Hobbes, but they each illuminate the problem that motivated him:

Two Farmers: This is taken from David Hume. Two farmers live side by side. Each year they must harvest their crops. Their crops become ready to harvest on different days. They need to harvest them before they rot. They are not physically capable of harvesting them all by themselves. They could do it if they worked together. But they cannot, due to a lack of trust. If Farmer B agrees to help Farmer A out with his harvesting on Day 1, he knows that Farmer A is unlikely to return the favour on Day 2: there would be nothing in it for Farmer A — his crops are already harvested. They both work alone, to their mutual detriment.

Tragedy of the Commons: This is taken from Garrett Hardin. In a village there is a common area where locals can graze their cattle. If the common area is overgrazed in any particular year, then the land becomes useless the following year. It would be for the benefit of all if everyone limited the number of cattle they grazed on the land. They agree to this in principle, but in practice things turn out badly. Individual villagers reason that if they graze a couple of extra cattle, they will get the benefit of extra, well-fed cattle, without damaging the land too much. But once one villager starts doing this, all villagers do the same and the land is ruined. The agreement breaks down.

The War of All Against All: This is from Hobbes. Anne and Bob live next to each other. They each have some stuff that the other would like to have, but they are able to survive by themselves. If they could agree to leave each other in peace they could get by okay. But they have no way of guaranteeing that the other party will live up to their end of the agreement. Indeed, they know the other party could use violence to steal everything they own. It consequently becomes logical for them each to cultivate a reputation for violence and to preempt the other’s potential attack. The result is a perpetual state of war.

These are thought experiments, not historically accurate vignettes. Nevertheless, they each illustrate the problem that Hobbes thought was fundamental to political morality. If you break down the logical structure of the stories, you see that they are all like the classic Prisoner’s Dilemma. In each of the stories it would benefit everyone if they could cooperate. But due to their particular circumstances, they have no way of guaranteeing that the other parties will cooperate. Consequently, it becomes rational for them to act selfishly and protect their own interests. The result is a state of affairs that is worse for all concerned.
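The shared structure of the three stories can be laid out as a standard Prisoner’s Dilemma payoff matrix. The numbers below are illustrative orderings only; what matters is that mutual cooperation beats mutual defection, yet each party does better by defecting whatever the other does:

```python
# Payoffs as (row player, column player); higher is better.
# The defining ordering of a Prisoner's Dilemma: T(5) > R(3) > P(1) > S(0).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # R: reward for mutual cooperation
    ("cooperate", "defect"):    (0, 5),  # S, T: sucker's payoff vs temptation
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # P: punishment for mutual defection
}

def best_reply(opponent_move):
    """Defection strictly dominates: it is the best reply to either move."""
    return max(["cooperate", "defect"],
               key=lambda my_move: PAYOFFS[(my_move, opponent_move)][0])
```

Since `best_reply` is "defect" no matter what the other party does, both parties defect and end up with (1, 1) instead of the mutually available (3, 3). That is precisely the farmers' unharvested crops, the ruined commons and the war of all against all.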

Trust is the critical missing ingredient. If we cannot trust one another, cooperation breaks down. And one of Hobbes’s central claims was that we cannot — at least, not really, not for long. But although we cannot trust each other, Hobbes thought we might be able to trust someone else. The artificial man; the Leviathan. This is a being that we create out of our will via a social agreement. We give this being enormous power: a total monopoly on the use of force and violence. We give it the ability to create coercive laws and institutions. This enables us to enforce agreements and commitments. People can now be compelled to cooperate. Failure to do so will result in the full force of the state being brought down upon their heads. The Trust Problem is thereby solved (or dis-solved).

This is the essence of Hobbes’s political philosophy. It is broadly liberal insofar as it stems from the belief that we are all (naturally) free and equal; and that it is through exercising our freedom to contract that we get together and form the Leviathan. It is quite illiberal in other respects. Hobbes imagined a sovereign with extensive, borderline absolutist powers. Once created, there could be no going back. Later contributors to the liberal tradition softened some of Hobbes’s views. For example, Locke argued that the powers of the state were always contingent: the citizens had the right to remake their Leviathan if it stopped exercising its powers in legitimate ways. Nevertheless, there was still broad agreement that the state helped to ensure mutually beneficial cooperation through its coercive institutions.

2. Blockchains and Modern Leviathans
There is something very interesting about Hobbes’s conception of the state. Consider once more the trust problem and how the Leviathan supposedly resolved it. The trust problem stemmed from the inability of individuals to ensure that agreements were upheld. In order to solve this problem, Hobbes proposed that individuals form another agreement — the one that created the Leviathan. In other words, he held that one agreement could be used to solve the problem of agreements more generally.

This seems oddly paradoxical. How can one agreement solve the problem with other agreements? Surely we end up in a regress? But it is not paradoxical once you burrow down to its core. The agreement that creates the Leviathan is of a special kind. It is a once-in-a-lifetime agreement that involves the creation of an artificial being to whom we transfer our ‘natural’ rights and powers (to use Hobbesian language). This artificial being then becomes the ultimate enforcer of all remaining social agreements.

The resulting vision of the state is thus one of nested agreements. At the foundation there is a special agreement creating an artificial enforcer; at all remaining levels this artificial enforcer is used to oversee and implement more mundane agreements, e.g. agreements relating to the transfer and distribution of property. This is the conceptual essence of Hobbes’s Leviathan. To bring that conceptual essence to reality the Leviathan would be forged from the bodies and minds of individual human beings. These minds would create abstract institutions and laws. For most day-to-day business, agreement on the existence of these abstract institutions would be sufficient for enforcement. But if anything went wrong, the state could have recourse to the grim and bloody reality of violent force. The critical question for us then becomes: is there any other technical infrastructure that can instantiate the conceptual essence of Hobbes’s Leviathan?

Blockchain technology seems like an obvious candidate. I have explained how this technology works on previous occasions, so I’ll limit myself to a brief exposition here. Blockchain technology is what underlies cryptocurrencies like Bitcoin, but its potential goes far beyond the cryptocurrency use-case. The blockchain is a distributed ledger for recording and verifying transactional data. The ledger is maintained by a network of computers. In theory, this network could be distributed across the entire globe. Every computer on the network maintains a copy of the ledger. Whenever two (or more) people enter into a transaction using the blockchain network, each computer records and verifies the transaction. The verification process involves a number of cryptographic tools, such as public key encryption, hash functions, and proof of work. The transaction is only locked in when the network completes the verification process.
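To make the idea of a shared, verifiable ledger a little more concrete, here is a minimal sketch in Python. This is emphatically not the real Bitcoin protocol (there is no network, no proof of work, and no public-key signatures); it only illustrates the core data structure: a chain of blocks in which each block commits to the hash of its predecessor, so that tampering with any earlier entry is detectable by anyone holding a copy of the ledger.

```python
import hashlib
import json

def hash_block(block):
    # Deterministically serialise the block and hash it with SHA-256.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    # Each new block commits to the previous block's hash, so altering
    # any earlier block invalidates every block that comes after it.
    prev_hash = hash_block(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev_hash, "transactions": transactions})

def verify_chain(chain):
    # Any node holding a copy can independently re-check the whole ledger.
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != hash_block(chain[i - 1]):
            return False
    return True

ledger = []
add_block(ledger, [{"from": "alice", "to": "bob", "amount": 5}])
add_block(ledger, [{"from": "bob", "to": "carol", "amount": 2}])
assert verify_chain(ledger)

# Tampering with an earlier block breaks verification.
ledger[0]["transactions"][0]["amount"] = 500
assert not verify_chain(ledger)
```

The point of the sketch is simply that verification requires no trusted central record-keeper: the hashes themselves do the work.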

This might not sound very exciting, but once you see its potential you begin to see how it provides a possible infrastructure for Leviathan. For in essence the blockchain is an artificial entity that is capable of managing and enforcing agreements. How so? Well, let’s go back to basics. A contractual agreement is simply any agreement involving conditional commitments: I’ll do X for you if you do Y for me. Satisfaction of the conditions is essential to the completion of the contract. People won’t enter into contracts if they think there is no reasonable prospect of the other side satisfying their conditional commitments. Think back to the Two Farmers thought experiment. The problem facing Farmer B was that he couldn’t rely on Farmer A completing his side of the deal. He needed a Leviathan — a system of courts with enforcement powers — to guarantee that Farmer A would live up to the bargain.

The interesting thing about blockchain technology is that it provides a way to monitor and enforce conditional commitments of this sort. The blockchain can be used to record and verify whether certain conditions have been met. In the simple case of a bitcoin transaction, the condition being recorded and verified is whether (a) one person has the requisite bitcoin in their digital wallet to transfer to another person and (b) whether that person did, in fact, initiate that transfer. Only once those two conditions have been met are the bitcoin released. This process is automated via the network.
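The two conditions can be written out directly in code. The sketch below is illustrative rather than an account of Bitcoin’s actual transaction-script system: a transfer settles only if the sender actually holds the funds and actually initiated the transfer, and otherwise nothing is released at all.

```python
def settle_transfer(wallets, sender, receiver, amount, initiated):
    # Condition (a): the sender holds the requisite funds.
    # Condition (b): the sender did, in fact, initiate the transfer.
    if not initiated or wallets.get(sender, 0) < amount:
        return False  # conditions unmet: nothing is released
    wallets[sender] -= amount
    wallets[receiver] = wallets.get(receiver, 0) + amount
    return True

wallets = {"farmer_a": 10, "farmer_b": 0}
assert settle_transfer(wallets, "farmer_a", "farmer_b", 4, initiated=True)
assert wallets == {"farmer_a": 6, "farmer_b": 4}
# An unfunded (or uninitiated) transfer simply never settles.
assert not settle_transfer(wallets, "farmer_a", "farmer_b", 100, initiated=True)
```

The conditional commitment is enforced mechanically: there is no discretion, and no need for either party to trust the other.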

But that’s relatively boring: bitcoin are intrinsically digital in nature so it’s not surprising that their transfer can be managed using a digital platform. Where the blockchain becomes more interesting is when you realise that it can be used to verify and record any machine-to-machine communication. And the reality is that nowadays everything is becoming a machine. That’s the technological shift that’s at the heart of internet of things. Soon, it will be possible for virtually every ‘thing’ in the world to be become visible to machine-to-machine communication. Your dishwasher will be able to communicate with your thermostat. Your insulin pump will be able communicate with your local hospital. Your smart car will be able to communicate with your Fitbit, and so and so on.

The communicative interactions between all these things can be managed by the blockchain. Your smartcar could be released into your control only after it has been verified that you have paid your motor tax. The computer in the motor tax office and the onboard computer in your car can communicate with one another. The blockchain will record and verify the communication. Once it is satisfied that the relevant condition (“the payment of motor tax”) has been met, it will release the car. If everything becomes susceptible to this type of control, we have a technological platform for implementing Leviathan. We won’t need governments, laws and civil institutions anymore. Everything can be managed through the technological infrastructure.
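The motor-tax example can be sketched as a toy ‘smart contract’. All the names and conditions below are hypothetical, and in a real deployment the verified conditions would be read off the distributed ledger after network-wide verification rather than set by a local method call; the sketch only shows the shape of the conditional-release logic.

```python
class SmartCarLock:
    """Toy contract: the car is released only once the relevant
    condition ('the payment of motor tax') has been verified."""

    def __init__(self, required_condition):
        self.required_condition = required_condition
        self.verified = set()

    def record_verified(self, condition):
        # Stand-in for the blockchain network recording and verifying
        # a communication from the motor tax office's computer.
        self.verified.add(condition)

    def release_car(self):
        return self.required_condition in self.verified

lock = SmartCarLock("motor_tax_paid")
assert not lock.release_car()          # tax unpaid: the car stays locked
lock.record_verified("motor_tax_paid")
assert lock.release_car()              # condition met: the car is released
```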

Many people are aware of this potential. In their 2015 paper, Wright and De Filippi comment on the possibility of creating decentralised autonomous organisations (DAOs) via blockchain technology. They describe the concept of a DAO like this:

Over time, as Internet-enabled devices become more autonomous, these machines can use decentralized organizations and the blockchain to coordinate their interactions with the outside world. We could thus witness the emergence of decentralized autonomous organizations that enter into contractual relationships with individuals or other machines in order to create a complex ecosystem of autonomous agents interacting with one another according to a set of pre-determined, hard-wired, and self-enforcing rules.
(Wright and De Filippi, 17)

The situation is analogous to what Hobbes imagined in the creation of Leviathan. An initial agreement between coders and (potentially) citizens is needed to set up the DAO. This agreement will prescribe the powers and conditions that will be enforced by the DAO. This initial set-up could involve something akin to Hobbes’s once-in-a-lifetime transfer of rights and powers. Then, once the DAO is created, it becomes an independent and autonomous entity, enforcing agreements according to its code. What’s more, in exercising their enforcement powers, DAOs could be given control over the machinery of violence: much of that machinery is being imbued with the kinds of connectivity and automation that are fodder for DAOs.
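The ‘pre-determined, hard-wired, and self-enforcing rules’ of Wright and De Filippi’s description can be mimicked in a few lines. Everything here is hypothetical shorthand, but the sketch makes one feature vivid: the rules are fixed at creation and the object deliberately exposes no way to amend them afterwards.

```python
class ToyDAO:
    """Rules are supplied once, at creation, and enforced ever after.
    There is deliberately no method for amending them: once the DAO
    is set up, there is no going back."""

    def __init__(self, rules):
        self._rules = dict(rules)  # hard-wired at creation

    def approve(self, proposal):
        # A proposal is enforced only if every pre-determined rule passes.
        return all(rule(proposal) for rule in self._rules.values())

dao = ToyDAO({
    "positive_amount": lambda p: p.get("amount", 0) > 0,
    "signed_by_party": lambda p: p.get("signed", False),
})
assert dao.approve({"amount": 5, "signed": True})
assert not dao.approve({"amount": 5, "signed": False})
```

Whether real DAOs should be built with this kind of immutability, or with some Lockean escape hatch, is exactly the question taken up below.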

3. Should we welcome this modern Leviathan?
So we have a plausible infrastructure for Leviathan — one that does not rely so much on the minds and bodies of humans and the systems of laws and institutions to which they acquiesce, but instead upon a technological infrastructure, originally fashioned by humans, but capable of autonomously enforcing contractual agreements through a distributed network. Is this something we should welcome or fear? I won’t answer that question here, but I will highlight some relevant concerns.

In one sense it may be something to welcome. Cyber-libertarians see DAOs as a way to obviate the powers of the state. The Hobbesian Leviathan called for a massive centralisation of power. The state had to become an all-encompassing artificial being, watching over and protecting each and every one of us. DAOs allow for at least some decentralisation of authority. The network that maintains them will be decentralised and controlled by those who own the computer nodes on the network. And, at least in theory, there could be many different DAOs created by different communities to manage their own preferred set of agreements. This brings to reality the hopes and dreams of many early-adopters of the internet.

But the practical reality may be somewhat bleaker. First, there is concern about what happens once a DAO gets created. Once it has been set up and its rules of enforcement have been encoded, can there be any going back? I mentioned earlier how Locke modified some of the Hobbesian ideals by insisting on a right to overthrow or reform the state. That seems like a good thing to me. Can we build conditionality into DAOs? Can we avoid becoming locked-in to a DAO with outmoded rules of enforcement? That is something that needs to be developed. Second, I think there are legitimate concerns about inequality when it comes to a technological infrastructure of this sort. Bitcoin has turned out to be a relatively unequal cryptocurrency. It massively favours early adopters and those who can afford the advanced computing technology that maintains the blockchain. These people then get to control how the network operates. This may be the product of bitcoin’s idiosyncratic mining-competition and so could be avoided in other blockchain-based technologies. But it is something to consider nonetheless. Finally, those enamoured with the potential of DAOs must realise that the pre-existing Leviathans are unlikely to give up their powers so easily. There is already much discussion about how to make blockchain-based systems susceptible to government control and regulation. If DAOs take off, we can expect more confrontations with the state.

Monday, March 21, 2016

Exploitation, Commodification and Harm: The Ethics of Commercial Surrogacy (Video Talk)

I recently participated in a conference on the reform of surrogacy laws in Ireland. The conference was organised by my colleague Dr. Brian Tobin. I spoke about the ethics of commercial surrogacy vis-a-vis altruistic forms. You can watch the talk at the link above. There is some annoying 'fuzz' in the audio, but I think it is still possible to watch it and get something out of it. Check out the other videos from the conference too.

Brief Description: It seems to be the case that many countries legalise altruistic forms of surrogacy but refuse to legalise commercial forms. The Irish government have explicitly said they favour this approach. The goal of my talk was to query whether altruistic forms of surrogacy should always be presumed to be superior to commercial forms. To this end, I assessed the merits of commercial surrogacy in light of three key ethical concepts: (i) harm; (ii) exploitation; and (iii) commodification.

Thursday, March 17, 2016

Algocracy and Transhumanism - Project Website

I was recently awarded funding by the Irish Research Council for a project entitled 'The Threat of Algocracy and the Transhumanist Project'. The goal of the project is to ask (and start to answer) three key questions:

  • What new governance structures are made possible by technology? (Focusing in particular on what I call 'algocratic' forms of governance)
  • How is technology changing what it means to be human? (Focusing in particular on the role of the transhumanist movement in changing how we think about this)
  • What are the implications of all this for political values like freedom, autonomy, privacy, equality and so forth?  

These are big questions, of course. It is the intersection between the three that particularly interests me. I'll be doing a number of interesting things as part of this project. As a first step, I have launched a new website that will house all content and materials related to it. Please check it out. It is still very much a work-in-progress, but I have added lots of content already.

(Note for regular readers: I will continue to cross-post everything relevant to the project on this blog too, so you won't miss out on anything).

Tuesday, March 15, 2016

New Technologies as Social Experiments: An Ethical Framework

What was Apple thinking when it launched the iPhone? It was an impressive bit of technology, poised to revolutionise the smartphone industry, and set to become nearly ubiquitous within a decade. The social consequences have been dramatic. Many of those consequences have been positive: increased connectivity, increased knowledge and increased day-to-day convenience. A considerable number have been quite negative: the assault on privacy, increased distractibility, endless social noise. But were any of them weighing on the mind of Steve Jobs when he stepped onstage to deliver his keynote on January 9th 2007?

Some probably were, but more than likely they leaned toward the positive end of the spectrum. Jobs was famous for his ‘reality distortion field’; it’s unlikely he allowed the negative to hold him back for more than a few milliseconds. It was a cool product and it was bound to be a big seller. That’s all that mattered. But when you think about it, this attitude is pretty odd. The success of the iPhone and subsequent smartphones has given rise to one of the biggest social experiments in human history. The consequences of near-ubiquitous smartphone use were uncertain at the time. Why didn’t we insist on Jobs giving it a good deal more thought and scrutiny? Imagine if, instead of an iPhone, he had been launching a revolutionary new cancer drug. In that case we would have insisted upon a decade of trials and experiments, with animal and human subjects, before it could be brought to market. Why are we so blasé about information technology (and other technologies) vis-a-vis medication?

That’s the question that provokes Ibo van de Poel in his article ‘An Ethical Framework for Evaluating Experimental Technology’. Van de Poel is one of the chief advocates of the view that new technologies are social experiments and should be subject to similar sorts of ethical scrutiny as medical experiments. Currently this is not being done, but he tries to develop a framework that would make it possible. In this blogpost, I’m going to try to explain the main elements of that framework.

1. The Experimental Nature of New Technology
I want to start by considering the motivation for van de Poel’s article in more depth. While doing so, I’ll stick with the example of the iPhone launch and compare it to other technological developments. At the time of its launch, the iPhone had two key properties that are shared with many other types of technology:

1. Significant Impact Potential: It had the potential to cause significant social changes if it took off.

2. Uncertain and Unknown Impact: Many of the potential impacts could be speculated about but not actually predicted or quantified in any meaningful way; some of the potential impacts were completely unknown at the time.

These two properties make the launch of the iPhone rather different from many other technological developments. For example, the construction of a new bridge could be seen as a technological development, but its potential impacts are usually much more easily identified and quantified. The regulatory assessment and evaluation in that case is based on risk, not uncertainty. We have lots of experience building bridges and the scientific principles underlying their construction are well understood. The regulatory assessment of the iPhone is much trickier. This leads van de Poel to suggest that a special class of technology be singled out for ethical scrutiny:

Experimental Technology: New technology with which there is little operational experience and for which, consequently, the social benefits and risks are uncertain and/or unknown.

Experimental technology of this sort is commonly subject to the ‘Control Dilemma’ - a problem facing many new technologies that was first named and described by David Collingridge:

Control Dilemma: For new technologies, the following is generally true:
(A) In the early phases of development, the technology is malleable and controllable but its social effects are not well understood.
(B) In the later phases, the effects become better understood but the technology is so entrenched in society that it becomes difficult to control.

It’s called a dilemma because it confronts policy-makers and innovators with a tough choice. Either they choose to encourage the technological development and thereby run the risk of profound and uncontrollable social consequences; or they stifle the development in the effort to avoid unnecessary risks. This has led to a number of controversial and (arguably) unhelpful approaches to the assessment of new technologies. In the main, developers are encouraged to conduct cost-benefit analyses of any new technologies with a view to bringing some quantificational precision into the early phase. This is then usually overlaid with some biasing-principle such as the precautionary principle — which leans against permitting technologies with significant impact potential — or the proactionary principle — which does the opposite.

This isn’t a satisfactory state of affairs. All these solutions focus on the first horn of the control dilemma: they try to con us into thinking that the social effects are more knowable in the early phases than they actually are. Van de Poel suggests that we might be better off focusing on the second horn. In other words, we should try to make new technologies more controllable in their later phases by taking a deliberately experimental and incremental approach to their development.

2. An Ethical Framework for Technological Experiments
Approaching new technologies as social experiments requires both a perspectival and practical shift. We need to think about the technology in a new way and put in place practical mechanisms for ensuring effective social experimentation. The practical mechanisms will have epistemic and ethical dimensions. On the epistemic side of things, we need to ensure that we can gather useful information about the impact of technology and feed this into ongoing and future experimentation. On the ethical side of things, we need to ensure that our experiments respect certain ethical principles. It’s the ethical side of things that concerns us here.

The major strength of Van de Poel’s article is his attempt to develop a detailed set of principles for ethical technological experimentation. He does this by explicitly appealing to the medical analogy. Medical experimentation has been subject to increasing levels of ethical scrutiny. Detailed theoretical frameworks and practical guidelines have been developed to enable biomedical researchers to comply with appropriate ethical standards. The leading theoretical framework is probably Beauchamp and Childress’s Principlism. This framework is based on four key ethical principles. Any medical experimentation or intervention should abide by these principles:

Non-maleficence: Human subjects should not be harmed.
Beneficence: Human subjects should be benefited.
Autonomy: Human autonomy and agency should be respected.
Justice: The benefits and risks ought to be fairly distributed.

These four principles are general and vague. The idea is that they represent widely-shared ethical commitments and can be developed into more detailed practical guidelines for researchers. Again, one of the major strengths of Van de Poel’s article is his review of existing medical ethics guidelines (such as the Helsinki Declaration and the Common Rule) and his attempt to code each of those guidelines in terms of Beauchamp and Childress’s four ethical principles. He shows how it is possible to fit the vast majority of the specific guidelines into those four main categories. The only real exception is that some of the guidelines focus on who has responsibility for ensuring that the ethical principles are upheld. Another slight exception is that some of the guidelines are explanatory in nature and do not state clear ethical requirements.

For the details of this coding exercise, I recommend reading van de Poel’s article. I don’t want to dwell on it here because, as he himself notes, these guidelines were developed with the specific vagaries of medical experimentation in mind. He’s interested in developing a framework for other technologies such as the iPhone, the Oculus Rift VR, the Microsoft HoloLens AR, self-driving cars, new energy tech and so forth. This requires some adaptation and creativity. He comes up with a list of 16 conditions for ethical technological experimentation. They are illustrated in the diagram below, which also shows exactly how they map onto Beauchamp and Childress’s principles.

Although most of this is self-explanatory, I will briefly run through the main categories and describe some of the conditions. As you can see, the first seven are all concerned with the principle of non-maleficence. The first condition is that other means of acquiring knowledge about a technology are exhausted before it is introduced into society. The second and third conditions demand ongoing monitoring of the social effects of technology and efforts to halt the experiment if serious risks become apparent. The fourth condition focuses on containment of harm. It accepts that it is impossible to live in a risk-free world and to eliminate all the risks associated with technology. Nevertheless, harm should be contained as best it can be. The fifth, sixth and seventh conditions all encourage an attitude of incrementalism toward social experimentation. Instead of trying to anticipate all the possible risks and benefits of technology, we should try to learn from experience and build up resilience in society so that any unanticipated risks of technology are not too devastating.

The next two conditions focus on beneficence and responsibility. Condition eight stipulates that whenever a new technology is introduced there must be some reasonable prospect of benefit. This is quite a shift from current attitudes. At the moment, the decision to release a technology is largely governed by economic principles: what matters is whether it will be profitable, not whether it will benefit people. Problems can be dealt with afterwards through legal mechanisms such as tortious liability. Condition nine is about who has responsibility for ensuring compliance with ethical standards. It doesn’t say who should have that responsibility; it just says it should be clear.

Conditions ten to thirteen are all about autonomy and consent. Condition ten requires a properly informed citizenry. Condition eleven says that majority approval is needed for launching a social experiment. Van de Poel notes that this could lead to the tyranny of the majority. Conditions twelve and thirteen try to mitigate that potential tyranny by insisting on meaningful participation for those who are affected by the technology, including a right to withdraw from the experiment.

The final set of conditions all relate to justice. They too should help to mitigate the potential for a tyranny of the majority. They insist that the benefits and burdens of any technological experiment be appropriately distributed, and that special measures be taken to protect vulnerable populations. Condition sixteen also insists on reversibility or compensation for any harm done.

3. Conclusion
I find this proposed framework interesting, and the idea of an incremental and experimental approach to technological development is intuitively appealing to me. I should perhaps make two observations by way of conclusion. First, as Van de Poel himself argues, there is a danger in developing frameworks of this sort. In the medical context, they are sometimes treated as little more than checklists: ticking off all the requirements allows the researchers to feel good about what they are doing. But this is dangerous because there is no simple or straightforward algorithm for ensuring that an experiment is ethically sound. For this reason, Van de Poel argues that the framework should be seen as a basis for ethical deliberation and conversation, not as a simple checklist. That sounds fine in theory, but then it leaves you wondering how things will work out in practice. Are certain conditions essential for any legitimate experiment? Can some be discarded in the name of social progress? These questions will remain exceptionally difficult. The real advantage of the framework is just that it puts some shape on our deliberations.

This leads me to the second observation. I wonder how practically feasible a framework of this sort can be. Obviously, we have adopted analogous protocols in medical research. But for many other kinds of technology — particularly digital technology — we have effectively allowed the market to dictate what is legitimate and what is not. Shifting to an incremental and experimental approach for those technologies will require a major cultural and political shift. I guess the one area where this is clearly happening at the moment is in relation to self-driving cars. But that’s arguably because the risks of that technology are more obvious and salient to the developers. Are we really going to do the same for the latest social networking app or virtual reality headset? I’m not so sure.