Friday, July 24, 2020

The Mechanics of Moral Change



I’ve recently become fascinated by moral revolutions. As I have explained before, by “moral revolution” I mean a change in social beliefs and practices about rights, wrongs, goods and bads. I don’t mean a change in the overarching moral truth (if such a thing exists). Moral revolutions strike me as an important topic of study because history tells us that our moral beliefs and practices change, at least to some extent, and it is possible that they will do so again in the future. Can we plan for and anticipate future moral revolutions? That's what I am really interested in.

To get a handle on this question, we need to think about the dynamics of moral change. What is changing and how does it change? Recently, I’ve been reading up on the history and psychology of morality and this article is an attempt to distill, from that reading, some models for understanding the dynamics of moral change. Everything I say here is preliminary and tentative but it might be of interest to some readers.


1. The Mechanics of Morality: A Basic Picture

Let’s start at the most abstract level. What is morality? Philosophers will typically tell you that morality consists of two things: (i) a set of claims about what is and is not valuable (i.e. what is good/bad/neutral) and (ii) a set of claims about what is and is not permissible (i.e. what is right, wrong, forbidden, allowed, etc.).

Values are things we ought to promote and honour through our behaviour. They include things like pleasure, happiness, love, equality, freedom, well-being and so on. The list of things that are deemed valuable can vary from society to society and across different historical eras. For example, Ancient Greek societies, particularly in the Homeric era, placed significant emphasis on the value of battlefield bravery. Modern liberal societies tend to value the individual pursuit of happiness more than bravery on the battlefield. That said, don’t misinterpret this example. There are many shared values across time and space. Oftentimes the changes between societies are subtle, involving different priority rankings over shared values rather than truly different sets of values.

Rights and wrongs are the specific behavioural rules that we ought to follow. They are usually connected to values. Indeed, in some sense, values are the more fundamental moral variable. A society needs to figure out what it values first before it comes up with specific behavioural rules (though it may be possible that following specific rules causes you to change your values). These behavioural rules can also vary from society to society and across different historical eras. To give a controversial example, it seems that sexual relationships between older men and (teenage) boys were permissible, and even celebrated, in Ancient Greece. In modern liberal societies they are deemed impermissible.

So beliefs about what is good/bad and right/wrong are the fundamental moral variables. It follows that moral revolutions must consist, at a minimum, in changes in what people think is good/bad (additions, subtractions and reprioritisations of values) and right/wrong (new permissions, obligations, prohibitions and so on).


2. Our Moral Machinery

How could these things change? To start to answer this question, I suggest we develop a simple model of the human moral machine. By using the term “human moral machine” I mean to refer to the machine that generates our current moral beliefs and practices. How does that machine currently work? It’s only when we can answer this question that we will get a better sense of how things might change in the future. To be clear, I don’t think of this as a machine in the colloquial sense. It’s not like an iPhone or a laptop computer. It is, rather, a complex social-technical-biological mechanism, made up of objects, processes and functions. I hope no one will mind this terminological preference.

At its most fundamental level, the human moral machine is the human brain. The brain, after all, is the thing that generates our moral beliefs and practices. How does this happen? All brains are, in a sense, evaluative systems. They record sensory inputs and then determine the evaluative content of those inputs. Think about the brain of a creature like a slug. It probes the creature’s local environment identifying potential food sources (good), mates (good), toxic substances (bad) and predators (bad). The slug itself may not understand any of this — and it may not share the conceptual labels that we apply to its sensory inputs — but its brain is, nevertheless, constantly evaluating its surroundings. It then uses these evaluations to generate actions and behaviours. It often does this in a predictable way. In short, the brain of the slug generates rules for behaviour in response to evaluative content.

Human brains are no different. They are also constantly evaluating their surroundings, categorising sensory inputs according to their evaluative content, and generating rules for action in response. Where humans differ from slugs is in the complexity of our evaluations and the diversity of the behavioural rules we follow. Some of our evaluations and rules are programmed into us as basic evolutionary responses; some we learn from our cultures and peers; some we learn through our own life experiences. It is through this process of evaluation and rule generation that we create moral beliefs and practices. This isn’t to say that moral beliefs and practices are simply reducible to brain-generated evaluations and rules. For one thing, not all such evaluations and rules attract the label “moral”. Moral values and rules are rather a subset of these things that take on a particular importance in human social life. They are evaluations and rules that are shared across a society and used as standards against which to criticise and punish conduct.

To say that the basic moral machine is the human brain is not to say that much. What we really want to know is whether the human brain tends to engage in certain kinds of predictable moral evaluation and rule generation. If it does, then there is some hope for developing a general model of moral change. If it doesn’t -- if evaluation and rule generation are entirely random or too complex to reverse engineer -- then the prospects are pretty dim.

Should we be optimistic or pessimistic on this front? Although there are people who think there is a good deal of randomness and complexity to how our brains learn and adapt to the world, there are plenty of others who disagree and think there are predictable patterns to be discerned. This seems to be true even in the moral realm. Although the diversity of human moral systems is impressive, there is also some remarkable affinity across different cultures. Humans tend to share some very similar values across cultures and this can lead to very similar cross-cultural moral rules.

So I shall be optimistic for the time being and suggest that there are some simple, predictable forces at work in the human moral machine. In particular, I am going to suggest that evolutionary forces have given humans a basic moral conscience — i.e. a basic capacity for generating and adhering to moral norms — and that this moral conscience was an adaptive response to particular challenges faced by human societies in the past. In addition to this, I am going to suggest that this basic moral conscience is, in turn, honed and cultivated in each of our own, individual lives, in response to the cultures we grow up in and the particular experiences we have. The combination of these two things — evolved moral conscience plus individual moral development — is what gives us our current set of moral beliefs and practices and places constraints on our openness to moral change.

In the future, changes to our technologies, cultures and environments are likely to agitate this moral machinery and force it to generate new moral evaluations and rules. This model for understanding human moral change is illustrated below.


For the remainder of this post I will not say much about the future of morality. Instead, I will focus on how our moral consciences might have evolved and how they develop over the course of our own lives.


3. Our Evolved Conscience

I suspect there is no fully satisfactory definition of the term “moral conscience” but the one I prefer defines the conscience as an internalised rule or set of rules that humans believe they ought to follow. In other words, it is our internal sense of right and wrong.

In his book Moral Origins — which I will be referring to several times in what follows — Christopher Boehm argues that our conscience is an “internalised imperative” telling us that we ought to follow a particular rule or else. His claim is that this internalised imperative originally took the form of a conditional rule based on a desire to avoid social punishment:


Original Conscience: I ought to do X [because X is a socially cooperative behaviour and if I fail to do X I will be punished]
 

What happened over time was that the bit in the square brackets got dropped from how we mentally represent the imperative.


Modern Conscience: I ought to do X because it is the right thing to do.
 

This modern formulation gives moral rules a special mental flavour. To use the Kantian terminology, moral rules seem to take the form of categorical imperatives — rules that we have to follow — not simply rules that we should follow in order to achieve desirable results. Nevertheless, according to Boehm, the bit in the square brackets of the original formulation is crucial to understanding the evolutionary origins of moral conscience.

Most studies of the evolutionary origins of morality take the human instinct for prosociality and altruism as their starting point. They note that humans are much more altruistic than their closest relatives and try to figure out why. This makes sense. Although there is more to morality than altruism, it is fair to say that valuing the lives and well-being of other humans, and following altruistic norms, is one of the hallmarks of human morality. Boehm’s analysis of the origins of human moral conscience tries to capture this. The bit in the square brackets links moral conscience to our desire to fit in with our societies and cooperate with others.

So what gave rise to this cooperative, altruistic tendency? Presumably, the full answer to this is very complex; the simple answer focuses on two things in particular.

The first is that humans, due to their large brains, faced an evolutionary pressure to form close social bonds. How so? In her book Conscience, the philosopher Patricia Churchland explains it in the following way. She argues that it emerged from an evolutionary tradeoff between endothermy (internal generation of heat), flexible learning and infant dependency. Roughly:


  • Humans evolved to fill the cognitive niche, i.e. our evolutionary success was determined by our ability to use our brains, individually and collectively, to solve complex survival problems in changing environments. This meant that we evolved brains that do not follow lots of pre-programmed behavioural rules (like, for example, turtles) but, rather, brains that learn new behavioural rules in response to experiences.
  • In order to have this capacity for flexible learning, we needed to have big, relatively contentless brains. This meant that we had to be born relatively helpless. We couldn’t have all the know-how we needed to survive programmed into us from birth. We had to use experience to figure things out (obviously this isn’t the full picture, but it seems more true of humans than of other animals).
  • In addition to being relatively helpless at birth, our big brains were also costly in terms of energy expenditure. We needed a lot of fuel to keep them growing and developing.
  • All of this made humans very dependent on others from birth. In the first instance, this dependency manifested itself in mother-infant relationships, but then social and cultural forces selected for greater community care and investment in infants. Families and tribes all helped out to produce the food, shelter and clothing (and education and technology) needed to ensure the success of our offspring.
  • The net result was a positive evolutionary feedback loop. We were born highly dependent on others, which encouraged us to form close social bonds, and which encouraged others to invest a lot in our success and well-being. A complex set of moral norms concerning cooperation and group sharing emerged as a result.


This was the evolutionary seed for a moral conscience centering on altruism and prosociality.

I like Churchland’s theory because it highlights evolutionary pressures that are often neglected in the story of human morality. In particular, I like how she places biochemical constraints arising from the energy expenditure of the brain at the centre of her story about the origins of our moral conscience. This makes her story somewhat similar to that of Ian Morris, who makes different technologies of energy capture central to his story about the changes in human morality over the past 40,000 years. 

That said, Churchland’s story cannot be the full picture. As anyone will tell you, cooperation can yield great benefits, but it also has its costs. A group of humans working together, with the aid of simple technologies like spears or axes, can hunt for energy-rich food. They can get more of this food working together than they can individually. But cooperative efforts like this can be exploited by free-riders, who take more than they give to the group effort.

Two types of free riders played an important role in human history:


Deceptive Free Riders: People who pretended to cooperate but actually didn’t and yet still received a benefit from the group.
 
Bullying Free Riders: People who intimidated or violently suppressed others in order to take more than their fair share of the group spoils (e.g. the dominant alpha male in a group).
 

A lot of attention has been paid to the problem of deceptive free riders over the years, but Christopher Boehm suggests that the bullying free rider was probably a bigger problem in human evolutionary history. 

He derives evidence for this claim from two main sources. First, studies of modern hunter gatherer tribes suggest that members of these groups all seem to have a strong awareness of and sensitivity to bullying behaviour within their groups. They gossip about it and try to stamp it out as soon as they can. Second, a comparison with our ape brethren highlights that they are beset by problems with bullying alpha males who take more than their fair share. This is particularly true of chimpanzee groups. (It is less true, obviously, of bonobo groups where female alliances work to stamp out bullying behaviour. Richard Wrangham explains the differences between bonobos and chimps as being the result of different food and environmental scarcities in their evolutionary environments.)

As Boehm sees it, then, the only way that humans could develop a strong altruistic moral conscience was if they could solve the bully problem. How did they do this? The answer, according to Boehm, is through institutionalised group punishment, specifically group capital punishment of bullies. By themselves, bullies could dominate others. They were usually stronger and more aggressive and could use their physical capacity to get their way. But bullies could not dominate coalitions of others working together, particularly once those coalitions had access to the same basic technologies that enabled big-game hunting. Suddenly the playing field was levelled. If a coalition could credibly threaten to kill a bully, and if they occasionally carried out that threat, the bullies could be stamped out.

Boehm’s thesis, then, is that the capacity for institutionalised capital punishment established a strong social selective pressure in primitive human societies. Bullies could no longer get their way. They had to develop a capacity for self-control, i.e. to avoid expressing their bullying instincts in order to avoid the wrath of the group. They had to start caring about their moral reputations within a group. If they acquired a reputation for cheating or not following the group rules, they risked being ridiculed, ostracised and, ultimately, killed.

It is this capacity for self-control that developed into the moral conscience — the inner imperative telling us not to step out of line. As Boehm puts it:


We moved from being a “dominance obsessed” species that paid a lot of attention to the power of high-ranking others, to one that talked incessantly about the moral reputations of other group members, began to consciously define its more obvious social problems in terms of right and wrong, and as a routine matter began to deal collectively with the deviants in its bands. 
(Boehm, Moral Origins, p 177)
 

What’s the evidence for thinking that institutionalised punishment was key to developing our moral conscience? Boehm cites several strands of evidence, but the most original comes from a cross-cultural comparison of human hunter gatherer groups. He created a database of all studied human hunter gatherer groups and noted the incidence and importance of capital punishment in those societies. In short, although modern hunter gatherer groups don’t execute people very often, they do care a lot about moral reputations within groups and most have practiced or continue to practice capital punishment in some form or other.

Richard Wrangham, who is also a supporter of the institutionalised punishment thesis, cites other kinds of evidence for this view. In his book The Goodness Paradox he argues that human morality emerged from a process of self-domestication (akin to the process we see in domesticated animals) and that we see evidence for this not just in the behaviour of humans but also in their physiology compared to their chimpanzee cousins (less sexual dimorphism, blunter teeth, less physical strength, etc.). It’s an interesting argument and he develops it in a very engaging way.

The bottom line for now, however, is that our moral conscience seems to have at least two evolutionary origin points. The first is our big brains and need for flexible learning: this made us dependent on others for long periods of our lives. The second is institutionalised punishment: this created a strong social selective pressure to care about reputation within a group and to favour conformity with group rules.

Understanding these origin points is important because it tells us something about the forces that are likely to alter our moral beliefs and practices in the future. Most humans have a tendency for groupishness: we care about our reputations within our groups and we often try to conform to group expectations. That said, we are not sheep. Our brains often look for loopholes in group rules, trying to exploit things to our advantage. So we are sensitive to the opinions of others and wary of the threat of punishment, but we are willing to break the rules if the cost-benefit ratio is in our favour. This tells us that if we want to change moral beliefs and practices, an obvious way to do this is by manipulating group reputational norms and punishment practices.


4. Our Developed Conscience

So much for the general evolutionary forces shaping our moral conscience. There are obviously some individual differences too. We learn different behavioural rules in different social groups and through different life experiences. We are also, each of us, somewhat different with respect to our personalities and hence our inclinations to follow moral rules.

It would be impossible to review all the forces responsible for these individual differences in this article, but I will mention two important ones in what follows: (i) our basic norm-learning algorithm and (ii) personality types. I base my description of them largely on Patricia Churchland’s discussion in Conscience.

First, let’s talk about how we learn moral rules. Pioneering studies by the neuroscientists Read Montague and Terry Sejnowski suggest that the human brain follows a basic learning algorithm known as the “reward-prediction-error” algorithm (now popularised as “reinforcement learning” in artificial intelligence research). It works like this (roughly):


  • The brain is constantly active and neurons in the brain have a base rate firing pattern. This base rate firing pattern is essentially telling the brain that nothing unexpected is happening in the world around it.
  • When there is a spike in the firing pattern, it is because something unexpectedly good has happened (i.e. the brain experiences a “reward”).
  • When there is a drop in the firing pattern, it is because something unexpectedly bad has happened (i.e. the brain experiences a “punishment”).

This natural variation in firing is exploited by different learning processes. Consider classical conditioning. This is where the brain learns to associate another signal with the presentation of a reward. In the standard example, a dog learns to associate the ringing of a bell with the presentation of food. In classical conditioning, the brain shifts the spike in neural firing from the presentation of the reward to the stimulus that predicts the reward (the ringing of the bell). In other words, the brain links the stimulus with the reward in such a way that it spikes its firing rate in anticipation of the reward. If it makes a mistake, i.e. the spike in firing does not predict the reward, then it learns to dissociate the stimulus from the reward. In short, whenever there is a violation of what the brain expects (whenever there is an “error”), there is a change in the brain’s firing rate, and this is used to learn new associations.
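
To make the mechanism concrete, here is a minimal sketch in Python of an error-driven update of this kind. The function, the bell-and-food example and the numbers are purely illustrative: they convey the general shape of a reward-prediction-error update, not the actual model developed by Montague and Sejnowski.

```python
# Minimal sketch of a reward-prediction-error update (illustrative only; not
# the actual model used by Montague and Sejnowski).

def update_value(expected, actual_reward, learning_rate=0.1):
    """Nudge the expected value of a stimulus toward the reward that followed it."""
    prediction_error = actual_reward - expected   # positive = "spike", negative = "dip"
    return expected + learning_rate * prediction_error

# A bell (stimulus) initially predicts nothing; food (reward) repeatedly follows it.
bell_value = 0.0
for _ in range(20):
    bell_value = update_value(bell_value, actual_reward=1.0)
print(round(bell_value, 2))  # climbs toward 1.0: the bell now predicts the reward

# If the reward stops arriving, negative prediction errors unwind the association.
for _ in range(20):
    bell_value = update_value(bell_value, actual_reward=0.0)
print(round(bell_value, 2))  # decays back toward 0.0
```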

It turns out that this basic learning algorithm can also help to explain how humans learn moral rules. Our understanding of shared social norms guides our expectations of the social world. We expect people to follow the social norms and when they do not this is surprising. It seems plausible to suppose that we learn new social norms by keeping track of how people’s actual behaviour deviates from what we expected them to do.

This has been studied experimentally. Xiang, Lohrenz and Montague performed a lab study to see if groups of people playing the Ultimatum Game learned new norms of gameplay by following the reward-prediction-error process. It turns out they did.

The Ultimatum Game is a simple game in which one player (A) is given a sum of money to divide between himself and another player (B). The rule of the game is that player A can propose whatever division of the money he prefers and player B can either accept this division or reject it (in which case both players get nothing). Typically, humans tend to favour a roughly egalitarian split of the money. Indeed, if the first player proposes an unequal split of the money, the second player tends to punish this by rejecting the offer. That said, there is some cross-cultural variation and, under the right conditions, humans can learn to favour a less egalitarian split.
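
To make the structure of the game concrete, here is a minimal sketch of a single round in Python. The acceptance threshold is purely illustrative; real responders vary across individuals and cultures.

```python
# One round of the Ultimatum Game. The acceptance threshold is illustrative;
# real responders vary across individuals and cultures.

def ultimatum_round(pot, offer_to_b, acceptance_threshold):
    """Player A offers `offer_to_b` out of `pot`; Player B accepts or rejects."""
    if offer_to_b >= acceptance_threshold:
        return pot - offer_to_b, offer_to_b   # (A's payoff, B's payoff)
    return 0, 0                               # rejection: both players get nothing

print(ultimatum_round(pot=20, offer_to_b=10, acceptance_threshold=7))  # (10, 10)
print(ultimatum_round(pot=20, offer_to_b=3, acceptance_threshold=7))   # (0, 0)
```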

Xiang, Lohrenz and Montague ran the experiment like this:


  • They had two different types of experimental subjects: donors, who would propose different divisions of $20, and responders, who would accept or reject these divisions.
  • They then ran multiple rounds of the Ultimatum Game (60 in total), splitting responders into two different groups in the process. Group one would run through a sequence of games that started with donors offering very low (inegalitarian) sums and ended with high (egalitarian) ones. Group two would run through the opposite sequence, starting with high offers and ending with low ones.
  • In other words, responders in group one were trained to expect unequal divisions initially and then for this to change, while those in group two were trained to expect equal divisions and then for this to change.

The researchers found that, under these circumstances, the responders’ brains seemed to follow a learning process similar to that of reward-prediction-error, something they called “norm prediction error”. In this learning process, the violation of a norm is perceived, by the brain, as an error. This can be manipulated in order to train people to adapt to new norms.

One of the particularly interesting features of this experiment was how the different groups of responders perceived the morality of the different divisions. At round 31 of the game, both sets of responders received the exact same offer: nine dollars. Those in group one (the low-to-high offer group) thought that this was great because it was more generous than they were initially trained to expect (bearing in mind their background cultural norms, which were to expect a fair division). Those in group two thought it was not so great since it was less generous than they had been trained to expect.
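
A rough way to see why the same nine-dollar offer feels generous to one group and stingy to the other is to model each responder as tracking a running expectation of the offers they receive, with the gap between offer and expectation acting as a norm prediction error. The sketch below does just that; the offer sequences, starting expectation and learning rate are all invented for illustration and are not taken from Xiang, Lohrenz and Montague’s study.

```python
# Toy model of "norm prediction error": each responder tracks a running
# expectation of offers, so the same offer can feel generous or stingy
# depending on the norms they have been trained into. All parameters and
# offer sequences are invented for illustration.

def run_responder(offer_sequence, initial_expectation=10.0, learning_rate=0.2):
    expectation = initial_expectation      # background norm: roughly a fair split of $20
    errors = []
    for offer in offer_sequence:
        norm_prediction_error = offer - expectation   # positive = better than expected
        errors.append(norm_prediction_error)
        expectation += learning_rate * norm_prediction_error
    return errors

low_to_high = [3] * 30 + [9] + [15] * 29   # group one: stingy offers first
high_to_low = [15] * 30 + [9] + [3] * 29   # group two: generous offers first

print(round(run_responder(low_to_high)[30], 2))   # round 31: positive error, $9 feels generous
print(round(run_responder(high_to_low)[30], 2))   # round 31: negative error, $9 feels stingy
```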

The important point about this experiment is that it tells us something about how norms shape our expectations and hence affect the changeability of our moral beliefs and practices. We all become habituated to a certain normative baseline in the course of our own lives. Nevertheless, with the right sequence of environmental stimuli it’s possible that, within certain limits, our norms can shift quite rapidly (Churchland argues that fashion norms are a good example of this).

The other point worth mentioning is how individual personality type can affect our moral conscience. Churchland uses the Big Five personality model (openness, conscientiousness, extroversion, agreeableness and neuroticism), which is commonly used in psychology, to explain this. She notes that where we fall on the spectrum with respect to these five traits affects how we interact with and respond to moral norms. For example, those who are more extroverted, agreeable and open can be easier to shift from their moral baseline. Those who are more conscientious and neurotic can be harder to shift.

She also offers an interesting hypothesis. She argues that there are two extreme moral personality types:


Psychopaths: These are people that appear to lack a moral conscience. They often know what social morality demands of them but they lack any emotional attachment to the social moral rules. They do not experience breaches of those rules as painful violations of the moral order. These people have an essentially amoral experience of the world (though they can act in what we would call “immoral” ways).
 
Scrupulants: These are people that have a rigid and inflexible approach to moral rules (possibly rooted in a desire to minimise chaos and uncertainty). They often follow moral rules to their extremes, sometimes neglecting family, friends and themselves in the process. They are almost too moral in their experience of the world. They are overly attached to moral rules.
 

Identifying these extremes is useful, not only because we sometimes have to deal with psychopaths and scrupulants, but also because we all tend to fall somewhere between these two extremes. Some of us are more attached to existing moral norms than others. Knowing where we all lie on the spectrum is crucial if we are going to understand the dynamics of moral change. (It may also be the case that it is those who lie at the extremes that lead moral revolutions. This is something I suggested in an earlier essay on why we should both hate and love moralists).


5. Conclusion

In summary, moral change is defined by changes in what we value and what we perceive to be right and wrong. The mechanism responsible for this change is, ultimately, the human brain since it is the organ that creates and sustains moral beliefs. But the moral beliefs created and sustained by the human brain are a product of evolution and personal experience.

Evolutionary forces appear to have selected for prosocial, groupish tendencies among humans: most of us want to follow social moral norms and, perhaps more crucially, be perceived to be good moral citizens. That said, most of us are also moral opportunists, open to bending and breaking the rules under the right conditions.

Personal experience shapes the exact moral norms we follow. We learn normative baselines from our communities, and we find deviations from these baselines surprising. We can learn new moral norms, but only under the right circumstances. Furthermore, our susceptibility to moral change is determined, in part, by our personalities. Some people are more rigid and emotionally attached to moral rules; some people are more flexible and open to change.

These are all things to keep in mind when we consider the dynamics of moral revolutions.
