Morality can often seem pretty diverse. There are moral rules governing our physical and sexual interactions with other human beings; there are moral rules relating to how we treat and respect property; there are moral rules concerning the behaviour of officials in government office; and, according to some religions, there are even moral rules for how we prepare and eat food. Is there anything that unites all these moral rules? Is there a single explanatory root for morality as a whole?
According to the theory of Morality As Cooperation (MAC for short), there is. Originally developed by Oliver Scott Curry, the MAC claims that all human moral rules have their origin in attempts to solve problems of cooperation. Since there are many such problems, and many potential solutions to those problems, there are consequently many diverse forms of morality. Nevertheless, despite this diversity, if you unpick the basic logic of all moral rules, you can link them back to an attempt to solve a problem of cooperation.
This is obviously a bold theory. It is highly reductive in the sense that it holds that all of human morality can be reduced to a single underlying phenomenon: cooperation. People will rightly ask if the diverse forms of human morality really are reducible in this way. Does MAC effectively capture the lived reality of human moral systems? Does it simply explain away the diversity and plurality?
These are legitimate questions. Nevertheless, if true, MAC has some exciting implications. It tells us something about the basic structure of all moral systems. It also tells us something about the possible future forms of morality. If a purported moral rule does not ultimately link back to an attempt to resolve a cooperative problem, MAC predicts that it will not be accepted or respected as a moral rule. If a social or technical development threatens or undermines an existing solution to a cooperative problem, it is likely to force us to generate new forms of morality. I find this latter implication particularly exciting since it links the MAC to my own current interests in understanding the moral revolutions of the future.
In the remainder of this article, I want to explain how the MAC works and then consider how the MAC might shed light on the future of morality. I will do this in three stages. First, I will give a basic explanation of the MAC. Second, I will consider a recent amendment to the MAC, proposed by Scott Curry and his colleagues, suggesting that morality can be understood as a combinatorial system with a finite (but vast) number of possible forms. Finally, I will consider the implications of all this for the future of morality, focusing on some specific technological threats to our cooperative systems and how we might generate new moral systems to resolve those threats. Unfortunately, I won’t be overly precise in this last section of the article. I will be painting with a broad and speculative brush.
1. Morality as Cooperation: The Basic Theory
MAC takes as its starting point the view that human morality is about cooperation. In itself, this is not a particularly ground-breaking insight. Most moral philosophers have thought that morality has something to do with how we interact with other people — with “what we owe each other” in one popular formulation. Scott Curry, in his original paper on the MAC, does a good job reviewing some of the major works in moral philosophy and moral psychology, showing how each of them tends to link morality to cooperation.
Some people might query this and say that certain aspects of human morality don’t seem to be immediately or obviously about cooperation, but one of the claims of MAC is that these seemingly distinctive areas of morality can ultimately be linked back to cooperation. For what it is worth, I am willing to buy the idea that morality is about cooperation as a starting hypothesis. I have some concerns, which I will air below, but even if these concerns are correct I think it is fair to say that morality is, in large part, about cooperation.
As I say, this is not particularly ground-breaking. Where the MAC becomes more interesting is in claiming that there is a finite set of basic cooperative problems faced by human societies, that these problems have been mapped out by evolutionary game theory, and that each of these problems generates a set of solutions. This set of solutions defines the space of possible human moral systems. In other words, at its most abstract level, the MAC can be characterised like this:
Morality as Cooperation: All of human morality — i.e. any rule, virtue, norm (etc) that humans call “moral” — is an attempt to solve a cooperative problem.
A cooperative problem is any non-zero-sum interaction between humans. Non-zero-sum interactions are situations in which groups of humans can work together to generate a “win-win” outcome — an outcome in which all people (or certainly a majority of people) can benefit or gain — but in which there is usually some impediment or barrier that must be overcome in order to ensure cooperation.
The MAC can be made more precise by identifying the basic cooperative problems faced by humans and their potential solutions. In his work, Scott Curry claims that there are seven basic cooperative problems, each of which is recurrent in evolutionary history (not just human history) and each of which is linked to a specific manifestation of human morality. I will describe the seven problems in what follows. The first three problems are distinct. The fourth problem breaks down into four distinct sub-types of problem, giving us seven problems in total. They are:
1. Kinship Interaction: This is perhaps the most fundamental evolutionary problem of cooperation. Genes have an interest (in a behaviouristic sense) in ensuring that their replicas survive into the future. This means that, within a sexually reproducing species, parents have an interest in ensuring the survival of their children and siblings have an interest in ensuring the survival of their other siblings, and so on, with the interest being proportional to the degree of relatedness. This is a cooperative problem because, in order to ensure the survival of replica genes, people need to be able to identify their kin and must act in a way that helps the survival of their kin. From the perspective of the MAC, this means we would expect moral norms to develop around the protection of kin. Sure enough, in every society, this is what we find. There are strong moral duties associated with parenthood and loyalty to one’s kin.
2. Mutualisms: A mutualism is any scenario in which a group of people, acting together, can achieve some immediate mutual gain. The classic example in human evolutionary history is big game hunting. Individual humans can hunt on their own but they can only hunt for relatively small animals. Working together, they can chase and kill larger animals. This is a mutual benefit because the food value of big game is greater than the food value of smaller animals. The cooperative problem arises because you need to ensure that people are aware of the mutual benefit and are able and willing to coordinate their efforts to achieve it. There are a variety of tools and tricks that enable them to do so, e.g. focal points, signalling and communication systems, institutional punishment and so on. From the perspective of the MAC, this means we would expect moral norms to develop around the tools and tricks that enable groups to coordinate on mutualisms. And indeed we do. There are strong moral norms in virtually all human societies that encourage group loyalty, adopting local conventions, forming friendships and alliances, and so on. In fact, solving coordination problems might be the most widely discussed evolutionary origin for morality.
3. Exchange Interaction: An exchange interaction is like a mutualism but with one significant difference. The mutual benefit that is derived from the cooperative action is delayed and hence uncertain. You have to wait for someone else to do their part or to return the favour. Most commercial interactions are of this form. One person supplies a good or benefit first and then waits for the other side to do their bit. Informal exchanges are also common. Neighbours sometimes help one another out in times of need, expecting that the favour will be returned in the future. This is a cooperative problem because it’s not easy to guarantee that the other side will do their bit. They might free ride on or exploit the good will of others. There are a variety of tools for ensuring that they will do their bit including, most notably, various forms of group punishment (including gossiping, ostracising, shunning, shaming and physical assault). From the perspective of the MAC, this means we would once again expect reciprocity and promise-keeping to be duties or virtues in most societies. And they clearly are. Indeed, many cultures share a norm of reciprocity that is sometimes called “The Golden Rule” of morality: do unto others as you would have done unto you.
4. Conflict Resolution: On the face of it, conflict scenarios don’t seem like cooperative problems; they seem like the exact opposite. Conflicts usually arise when people are competing for some scarce or contested resource (food, power, sex, territory). These competitions usually look like win-lose scenarios: one side’s gain is the other side’s loss. But Scott Curry argues that most conflict scenarios include within them a non-zero sum element. Violent resolution of conflicts is costly to all sides. There is usually a way of resolving the conflict without resorting to violence that is less costly and can seem like a win-win (both sides get something of what they want or, at least, don’t end up dead). Can the parties to the conflict cooperate on the less costly resolution? Scott Curry identifies four ways of doing this:
4a. Domination: One way to resolve conflicts is for some individuals to be recognised as dominant over other individuals. These people are seen, within their societies, to be powerful, brave, and physically (and, perhaps more recently, mentally) superior to others. They are often entitled to take slightly more of shared resources and they expect deference and loyalty from others. This might not seem, to modern minds, like an ‘ethical’ way of resolving conflicts but it is certainly a practical solution to the problem of conflict. Any society in which people know their place is one in which conflict can be minimised. From the perspective of the MAC, this means we would expect norms and virtues of dominance to be common. For example, we would expect people who are brave, courageous, physically dominant (etc) to be frequently celebrated as morally virtuous. We do see this across most societies. Ancient Greek societies, for example, placed a lot of emphasis on the virtue of physical prowess and bravery. Likewise, there are many societies in which there are norms of honour and social status that support dominance hierarchies.
4b. Submission: This is just the flipside of domination. Domination cannot work as a conflict resolution strategy if everyone tries to be dominant. Some people have to submit and defer to those who are dominant. Recognising this, the MAC would predict that there will be norms and virtues of deference and submission across many societies. This is indeed true: knowing your place and deferring to your social superiors are seen as moral duties (and virtues) in many societies. Again, this may not seem like an ‘ethical’ solution to a cooperative problem to modern minds. This is because many of us live in liberal societies which are usually premised on an assumption of moral equality. How did we end up with this assumption? It’s hard to say exactly, but Scott Curry suggests that moral systems built around domination-submission are only sustainable when there are clear power/ability asymmetries in societies. If technology, education and other social reforms remove those asymmetries, then the morality of domination-submission may fade away.
4c. Division: Whenever there is a conflict over a resource that can be divided up into different portions, an obvious conflict resolution strategy is to divide the portions among the competitors. This saves them having to compete for the full resource. From the perspective of the MAC, this means that we would expect norms to develop around the fair division of divisible resources. This could include norms around the division of food and land, for example. Again, it is obvious enough that we do see such norms across most societies. There is, however, a problem here. In game theory, these scenarios are modelled as bargaining problems and there are, in principle, a large number of potential solutions to them. Suppose two people are competing over $100. In principle, any division of that sum of money that exhausts the full $100 (e.g. 20-80; 30-70; 40-60 and so on) is a Nash equilibrium solution. So we might expect to see high variability in norms of fair division across societies. We do see this to some extent; nevertheless, it is remarkable how many societies tend to gravitate towards roughly equal shares (in the absence of some other norms concerning, say, domination-submission or possession).
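This equilibrium claim is easy to verify computationally. Here is a minimal sketch (my own illustrative model, not drawn from Scott Curry’s work) of the classic Nash demand game: each player demands a share of the $100; if the demands are compatible, each player gets what they demanded, otherwise both get nothing. Every split that exactly exhausts the money turns out to be a Nash equilibrium.

```python
# Minimal sketch of the Nash demand game over $100 (illustrative only).
# Payoff rule: if demands are compatible (sum <= 100), each player
# receives their demand; otherwise both receive 0.

def payoff(demand_a, demand_b):
    if demand_a + demand_b <= 100:
        return demand_a, demand_b
    return 0, 0

def is_nash_equilibrium(demand_a, demand_b):
    """Neither player can gain by unilaterally changing their (integer) demand."""
    pay_a, pay_b = payoff(demand_a, demand_b)
    best_a = max(payoff(d, demand_b)[0] for d in range(101))
    best_b = max(payoff(demand_a, d)[1] for d in range(101))
    return pay_a >= best_a and pay_b >= best_b

# Every division that exactly exhausts the $100 is an equilibrium:
# 20-80, 30-70, 40-60, 50-50 and so on.
equilibria = [(a, 100 - a) for a in range(1, 100) if is_nash_equilibrium(a, 100 - a)]
print(len(equilibria))  # 99: all interior splits qualify
```

The check illustrates why bargaining theory alone does not single out a unique “fair” division, and hence why additional norms (equality, possession, dominance) are needed to select one.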
4d. Possession: Finally, another way of resolving conflicts about disputed resources is simply to defer to prior possession or ownership. This may not be fair or egalitarian, but it is often a quick and easy way to avoid protracted conflict. From the perspective of the MAC, this means we would expect norms of property and prior possession to emerge across societies. Again, we do see evidence for this, with many societies adopting something like a “finders keepers” rule of thumb when it comes to certain resources.
In summary, the idea behind the MAC is that human moral systems derive from attempts to resolve cooperative problems. There are seven basic cooperative problems and hence seven basic forms of human morality. These are often blended and combined in actual human societies (more on this in a moment); nevertheless, you can still see the pure forms of these moral systems in many different societies. The diagram below summarises the model and gives some examples of the ethical norms that derive from the different cooperative problems.
Before I move on, let me say two further things about this basic model of the MAC.
First, let me say something about the evidence in its favour. It may sound plausible enough in theory but is there any good evidence for thinking that all of human morality is, in fact, reducible to an attempt to solve a cooperative problem? This is something that Scott Curry and his colleagues have explored in recent papers. In one particularly interesting study, they conducted a linguistic analysis of the ethnographic record of 60 societies. They selected these societies randomly from an established database of ethnographic records. Using specified keywords and phrases, they then searched for any mention of the seven moral systems outlined above and tried to see whether the behaviours associated with them (e.g. being loyal to your kin; keeping your promises; deferring to social authorities and so on) were positively valenced in those societies. In other words, did people think those behaviours were morally good? The MAC predicted that they would be and, with one exception, this was what they found. In fact, out of 962 recorded observations concerning the moral value of different behaviours, 961 were found to support the MAC. The one exception was among the Chuuk people of Micronesia, where stealing was morally valued, if it was part of a display of dominance. This is a case where one type of cooperative solution (dominance) trumps another (prior possession). So it may not be a true exception. I recommend reading the full study to get a sense of the evidence in support of the MAC.
Second, let me mention some concerns one might have about the MAC. Although I am attracted to its reductive and unifying nature, I am also wary of the attempt to link all moral rules and behaviours back to cooperation. After all, some moral rules that are common in religious traditions and are often understood to be moral in nature (e.g. purity rules associated with dress, food consumption, and personal hygiene) don’t seem to be obviously linked to cooperation. To be fair, you could argue that they are linked in some distant way. Perhaps adherence to these quirky purity rules is, ultimately, about forging and maintaining a coherent group identity. If you refuse to eat pork, for example, you might be signalling membership of a Jewish or Muslim community and hence solidifying the bonds of that community. But there is a danger that this just distorts reality to fit the theory. I mention this example, incidentally, because purity rules are part of a famous rival to the MAC, Haidt’s “Moral Foundations Theory”. Scott Curry is quite critical of this theory, arguing that the MAC is superior to it in various ways. He might be right about this, but it is beyond the scope of this article to resolve the dispute between these theories.
2. Morality as a Combinatorial System
Despite the problems mentioned above, the MAC is an elegant theory. Among its neat features are its simplicity (all moral rules are explained by a single underlying phenomenon) and its subtle complexity (there are multiple possible solutions to cooperative problems and hence multiple possible moralities). This subtle complexity has been developed in another article by Scott Curry and his colleagues. In this article they argue that morality is a combinatorial system and that the seven basic moralities can combine together in different forms to create a vast number of new moral systems.
What does it mean to say that morality is a combinatorial system? An analogy might be helpful. Think about atomic chemistry. It starts with atoms, which are made up of three basic sub-atomic particles: electrons, protons and neutrons (yes, I know there are other sub-atomic particles!). Different combinations of these three sub-atomic particles give us different chemical elements. Hydrogen is the simplest, consisting of one proton and one electron. Other chemical elements add in more of these sub-atomic particles. These elements can themselves combine together to form more complex molecules. For example, two hydrogen atoms combine together with one oxygen atom to form the molecule we call water (H2O). This is a relatively simple molecule. Much more complex molecules exist as well. The crucial point, however, is that from a small set of simple components (three sub-atomic particles), combined together in different ways, we can create all the complexity we see in the world around us.
The claim is that the MAC has similar combinatorial complexity. You have seven basic moral systems and these can be combined together to form more complex moral molecules. For example, a kinship-based morality can combine together with a mutualistic morality to create a group-based moral system that is premised on fictive kinship, e.g. the belief that all members of a tribe are brothers and sisters. This fictive kinship-based morality can be sustained through symbols and rituals, even if the actual degree of biological relatedness between the group members is quite limited. Given that all human societies face multiple cooperative problems, and given that human moral systems can be quite complex, it seems plausible to suppose that most of the actual moral systems we see in the world are these more complex moral molecules.
Is there any evidence to support this idea? That’s what Scott Curry and his colleagues set out to determine in their article on moral molecules. They did this by combining pairs of moral systems drawn from the MAC, hypothesising as to what the likely combined moral system would entail, and searching to see whether such combined moral systems are found in human societies. Focusing on twenty-one moral molecules initially, they found some evidence to suggest that all twenty-one existed in actual human societies. I won’t go through every example. One of them was the fictive kinship example mentioned above, which certainly can be found in human societies. Another was an honour-based morality, which Scott Curry and his colleagues claim emerges from the combination of a dominance-based morality and an exchange-based morality (you display your dominance through retaliation against others). The full list of moral molecules can be found in the original article.
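Incidentally, twenty-one is exactly the number of distinct pairs you can draw from the seven basic systems, i.e. 7 choose 2. A quick sketch (the element names are my own shorthand labels for the seven systems):

```python
from itertools import combinations

# Shorthand labels for the seven basic moral systems of the MAC.
elements = ["kinship", "mutualism", "exchange", "dominance",
            "submission", "division", "possession"]

# Pairwise combinations: 7 choose 2 = 21, matching the twenty-one
# two-element "moral molecules" examined in the study.
pairs = list(combinations(elements, 2))
print(len(pairs))  # 21
print(pairs[0])    # ('kinship', 'mutualism'), i.e. the fictive kinship molecule
```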
How many moral molecules might there be? One of the advantages of the MAC is that it seems possible to apply the mathematics of combinatorics to answer this question. If there are, indeed, seven basic moral systems, then all we need to know is how many combinations of those seven basic moralities are possible. It’s like asking how many combinations of students can be formed from a group of seven. You might be familiar with this calculation. There are 7 groups of one student; 21 groups of two students; 35 groups of three students; 35 groups of four students… and so on up to 1 group of seven students. The mathematical operation here is: (7 choose 1) + (7 choose 2)… + (7 choose 7). The total number of combinations is 127. So by applying the mathematics of combinatorics to the MAC we reach the conclusion that there are 127 possible moral systems.
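The arithmetic here is easy to check with a few lines of code. A minimal sketch:

```python
from math import comb

# Sum of (7 choose k) for k = 1..7: every non-empty subset of the
# seven basic moral systems counts as one possible combination.
total = sum(comb(7, k) for k in range(1, 8))
print(total)  # 127, equivalently 2**7 - 1
```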
Or are there? As Scott Curry and his colleagues argue, the reality is likely to be more complex than this. For starters, there are positive and negative variations of the seven basic moral systems, i.e. it is logically possible for cultures to disvalue norms like ‘be loyal to your family’ or ‘turn the other cheek’. It may not happen very often in reality but it is still a logical possibility. Furthermore, once you start combining basic moral systems together it is more plausible to imagine societies in which one moral system is rejected or deprioritised relative to another. In effect, each of the seven basic elements can be positively valued, negatively valued, or absent in a given culture, and counting all the non-empty combinations of these possibilities gives at least 2,186 possible moral systems.
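One way to reconstruct the figure of 2,186 (my own reconstruction of the counting, not spelled out in this article): allow each of the seven basic systems to be positively valued, negatively valued, or absent in a given culture, and count every non-empty assignment.

```python
from itertools import product

# Each of the seven basic moral systems can be valued ("+"), disvalued ("-"),
# or absent ("0") in a culture; exclude the empty system where all are absent.
assignments = list(product(["+", "-", "0"], repeat=7))
non_empty = [a for a in assignments if any(v != "0" for v in a)]
print(len(non_empty))  # 2186, i.e. 3**7 - 1
```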
In fact, it’s probably even more complex than this. To this point, the calculations have assumed that there is only one type of moral norm or value associated with the seven basic moral systems. In reality, there may be many values and norms associated with each one. These norms can be combined with norms from other moral systems to create additional moral molecules. When you start adding in all those possible molecules you end up with a truly vast space of possible moral systems. Some of these might be very weird or alien to us, but they are at least logically possible.
This might seem like a pessimistic conclusion. The MAC begins in the hope of reducing morality to some simple underlying components. Although it may succeed in that aim, when we start to think about how those simple components combine into more complex moral systems, we end up with a mind-bogglingly vast space of possibility. Still, I think there are some reasons to be optimistic. Unlike other approaches to morality, the MAC places basic constraints on the space of possible moral systems. This is encouraging when we try to think about the future of morality. How might human moral systems change? What moral system will our grandchildren embrace? If the MAC is right, it will have something to do with solving cooperative problems.
3. The MAC and the Future
Let me close with some speculations about our possible moral futures. In particular, let me consider how technology might change our moral systems. According to the MAC, the way to think about this is to think about the impact of technology on the cooperative problems we face. Do they make these problems easier to solve or harder to solve? Do they create new cooperative problems? How might this, in turn, affect our attachment to certain moral norms?
It seems to me that there are at least three major things that technology can do to cooperation:
(a) It can enable humans to form larger cooperative networks: for example, transport technology and communications technology allow us to interact and coordinate our efforts with more, geographically dispersed, people, thereby securing newer and larger forms of mutual benefit. These larger networks can place greater strain on our traditional cooperative moral norms and values. For example, the usual tricks for maintaining cooperation, such as relying on fictive kinship, gossip, or social ostracism, might not work in a more globalised and anonymous world.
(b) It can help to implement new solutions to cooperative problems: some technologies enable faster and cheaper ways of maintaining cooperation, group loyalty, dominance and so on. Weapons technology, surveillance technology, and behaviour manipulation technology, for example, can help to maintain cohesion and coordination in a similar way to tribal punishment, gossip and ostracism (indeed, social media technology enables a globalised form of the latter). There is something of an arms race here, though: we are tempted to use these technologies to solve the cooperative problems that arise from the strain placed on our traditional tools by the larger cooperative networks we have formed.
(c) It can help to create new types of cooperative partner: this one is a bit more outlandish and controversial. The MAC assumes that cooperation involves humans cooperating with one another. But increasingly our technology has its own agency and autonomy (contested though this may be). This means that, at least in some cases, technology becomes a new cooperative partner, one that may not share many human traits or values or emotions. This might make it more or less reliable than a human moral cooperator. If machine cooperators are more reliable and easily controllable than human moral partners, this might make it easier to solve cooperative problems without recourse to moral tools and tricks (i.e. it could enable what Roger Brownsword calls the ‘technological management’ of our normative concerns). If they are less reliable and less easily controlled, then this might create a great deal of moral stress and strain. A real challenge concerns how technological agents are to be integrated into our cooperative moral systems. Are they treated as equal moral partners? Dominants? Submissives? These are questions that are actively debated and need resolution.
In conclusion, technology changes how we interact with each other and with the world around us, and thereby puts stress on our traditional cooperative morality. Some moral norms are no longer fit for purpose. Some need to be expanded to address the new technological reality. We might start to value technological solutions to cooperative problems over traditional human-centric ones. Instead of valuing the loyalty and trustworthiness of humans, we might start to value the efficiency and reliability of machines. These are themes and ideas already present in the philosophy of technology, but not ones that are explicitly linked back to the cooperative roots of morality.
There is a lot more to be said. But this is at least a start.