The roundabout playpump - A flawed intervention?
(Part One; Part Two)
After a long hiatus, I am finally going to complete my series of posts about Iason Gabriel’s article ‘Effective Altruism and its Critics’ (changed from the original title ‘What’s wrong with effective altruism?’). I’m pleased to say that once I finish the series I am also going to post a response by Iason himself which follows up on some of the arguments in his paper. Let me start today, however, by recapping some of the material from previous entries and setting the stage for this one.
Gabriel’s article takes a critical look at the leading objections to effective altruism (EA). EA, for present purposes, is defined as the practice of trying to do the most good you can through charitable donations. In typical EA arguments, this practice brings with it a number of key commitments, three of which figure prominently: (i) welfarism, i.e. EAs think you should try to improve individual well-being; (ii) consequentialism, i.e. EAs tend to favour consequentialist approaches to ethics; and (iii) evidentialism, i.e. EAs look to policy interventions with a robust evidential base.
Gabriel considers three main objections to this form of EA. The first is that it is unjust; the second that it is methodologically biased; and the third that it is not as effective as its proponents claim. I’ve looked at the first of these objections already. Today, I look at the second. That objection breaks down into three main sub-types, which I’ll discuss in turn.
[Reader's note: I am basing this series on the original pre-published version of Gabriel's article because that's what I used when I originally structured this series and presented the taxonomy of objections. There have been some changes to the wording and framing of the critiques discussed below but, as best I can tell, it covers the same ground.]
1. Is EA too measurement focused and reductionist?
The first methodological critique highlights the evidential bias of the EA philosophy. The critique manifests itself in a couple of different ways. One of them is a variant on the classic ‘what gets measured gets managed’ concern. EAs place a premium on improving outcomes that are susceptible to quantification and measurement. This causes them to downplay other, less measurable and quantifiable outcomes, that might be equally morally worthy. To put the objection more formally:
- (1) EAs emphasise moral goals that are readily measurable and quantifiable.
- (2) There are many important moral goals that are not so readily measurable and quantifiable.
- (3) Therefore, EAs tend to ignore important moral goals.
Unlike the previous round of objections, the concern here is not that EAs fail to recognise other important moral goods. Rather, the concern is that their evidentialist methodology biases them away from these other moral goods. To give an example, there might be some value that is intrinsic to political processes that respect and honour human rights. At the same time, it might be very difficult to measure and quantify those outcomes. Contrariwise, there might be some value to individual health and well-being that is relatively easier to measure and quantify. When it comes to deciding between policies, this will cause EAs to prefer those that emphasise the latter moral goal over the former, even though they acknowledge the value of the former.
This can have two particularly negative consequences. The first is simply that proponents of EA become absorbed in assessing the relative merit of interventions that target measurable and quantifiable outcomes and forget to consider the less measurable and quantifiable ones. The other is that EAs become accustomed to standards of proof that are unreasonable in many domains. For instance, EAs love randomised controlled trials (RCTs), but RCTs are often only appropriate for small-scale changes where it is possible to have control groups and to precisely measure outcomes. They are often not appropriate for larger country-wide or international reforms. Does this mean we should abandon these initiatives? Or does it mean that EAs need to moderate their standards of proof? That’s an issue that needs to be resolved.
Another, more specific version of the measurement objection worries that EAs tend to be reductionists when it comes to assessing the value of different interventions. One example of this is the tendency for EAs to rely on the DALY (disability-adjusted life year) measure when it comes to assessing interventions. The DALY measure allows us to make indirect inferences about a person’s subjective well-being and to compare different people according to this metric. This makes it a very attractive measurement system for EAs. The fear is that overreliance on it reduces everything to a comparison of subjective well-being.
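For readers unfamiliar with the metric, the standard DALY arithmetic (DALYs = years of life lost + years lived with disability) can be sketched in a few lines. All of the intervention figures below are hypothetical illustrations invented for this sketch, not real estimates:

```python
# Minimal sketch of the standard DALY arithmetic: DALY = YLL + YLD.
# All intervention figures below are hypothetical illustrations.

def dalys(deaths, years_lost_per_death, cases, disability_weight, duration_years):
    yll = deaths * years_lost_per_death               # years of life lost
    yld = cases * disability_weight * duration_years  # years lived with disability
    return yll + yld

# Burden averted by two hypothetical interventions:
intervention_a = dalys(deaths=100, years_lost_per_death=30,
                       cases=5000, disability_weight=0.2, duration_years=0.1)
intervention_b = dalys(deaths=0, years_lost_per_death=0,
                       cases=800, disability_weight=0.3, duration_years=5)

# With hypothetical budgets, everything collapses into one per-dollar ratio,
# which is precisely the reduction the critics worry about:
ratio_a = intervention_a / 100_000   # DALYs averted per dollar
ratio_b = intervention_b / 60_000
```

The point of the sketch is the last two lines: once every intervention is expressed as DALYs averted per dollar, any good that resists translation into that single number simply drops out of the comparison.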
How can EAs respond to these objections? Gabriel identifies a number of possibilities, some of which are already being put into practice. One example is that GiveWell — possibly the leading charity evaluator — has moved away from overreliance on the DALY measure and instead favours interventions that are supported by multiple lines of independent analysis. Gabriel thinks that EAs should also be more upfront about the bounded nature of the information they provide. They could do this by concluding that some intervention is ‘unprovable’ rather than ‘unproven’. He also thinks that they should engage more with other potential metrics, such as the Multidimensional Poverty Index, which evaluates outcomes in non-welfarist terms.
2. Is EA too individualistic?
The second version of the methodological critique argues that EA is overly individualistic in its focus. That is to say, it prioritises interventions that improve individual well-being and either ignores or downplays those that improve collective or community-based goods. Enhancing and empowering local communities is often a goal for NGOs, and it is also something favoured by certain schools of political morality, but because EAs are so resolutely welfarist in their outlook, they tend to value communities in instrumental ways, i.e. as vehicles for improving individual outcomes. This is similar to the reductionist critique given above (and, indeed, in the final version of the article Gabriel merges them together).
To put the objection in quasi-formal terms:
- (4) EAs emphasise moral goods that accrue to the individual (i.e. that enhance individual well-being etc).
- (5) There are important moral goods that accrue to the community.
- (6) Therefore, EAs ignore an important set of moral goods.
The objection is defended and elaborated along similar lines to the previous one. Gabriel uses a thought experiment to highlight its practical consequences:
Medicine: Suppose it is known that condom distribution is more effective in minimizing the harm caused by HIV/AIDS than the provision of antiretroviral drugs (ARVs). This is because ARVs only help those who already have the disease, while condoms can prevent people from contracting it. You are faced with the choice of funding two different programs. The first allocates all the money to condom distribution. The second allocates 90% to condom distribution and 10% to ARVs. Which do you choose?
Gabriel argues that if the evidence does indeed support the view that condom distribution is more effective than the provision of ARVs, then EAs will tend to favour the first program. It is, after all, the one that does the most good for the money provided. The problem is that this does not sit easily with most people. The idea of leaving those with the disease untreated seems wrong. Gabriel suggests that this might have something to do with the value of hope to communities. People want to live in a society that will care for them if they are sick, even if this is not the most cost-effective approach. They want to have the hope that they will be looked after. Furthermore, hope may be an important resource for communities undergoing hardship, one that enables them to take collective action that addresses problems that cannot be addressed at the individual level. You get more buy-in at the community level if people have some sense of hope.
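The cost-effectiveness logic Gabriel attributes to EAs can be made concrete with a toy expected-value calculation. All the per-dollar figures here are hypothetical, and for simplicity I assume condom and ARV benefits can be expressed in a single common unit of “harm averted”:

```python
# Toy expected-value comparison for the Medicine thought experiment.
# All per-dollar effectiveness figures are hypothetical illustrations,
# and both benefits are assumed to share one unit of "harm averted".

BUDGET = 100_000            # dollars, hypothetical
condoms_per_dollar = 0.05   # harm averted per dollar on condoms (assumed)
arvs_per_dollar = 0.01      # harm averted per dollar on ARVs (assumed)

def harm_averted(condom_share):
    """Total harm averted when condom_share of the budget goes to condoms."""
    return (BUDGET * condom_share * condoms_per_dollar
            + BUDGET * (1 - condom_share) * arvs_per_dollar)

program_a = harm_averted(1.0)   # 100% condoms
program_b = harm_averted(0.9)   # 90% condoms, 10% ARVs

# So long as condoms avert more harm per dollar, the pure
# cost-effectiveness comparison always favours Program A:
assert program_a > program_b
```

Notice that nothing in this calculation can register the value of hope, or of living in a community that treats its sick; whatever cannot be folded into the per-dollar figures is invisible to the comparison.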
The upshot of this, for Gabriel, is that EAs shouldn’t move so quickly from claims about cost-effectiveness of policies at the individual level to claims about the overall value or desirability of a policy.
3. Is EA too instrumentalistic?
The final methodological critique holds that EAs are overly instrumental in their evaluation of policies. That is to say, they compare interventions based on the outcomes they achieve and not on the procedures they use to achieve those outcomes. This creates a problematic bias in their recommendations. Procedures that are inclusive and democratic in nature are often slower and messier than non-inclusive, technocratic ones. Consequently, EAs tend to favour technocratic interventions. This causes them to downplay or ignore important procedural values.
- (7) EAs assess interventions instrumentally, i.e. by how efficiently they achieve the desired outcome, and often ignore or downplay the values attached to the procedures that lead to those outcomes.
- (8) There are intrinsically valuable procedures (i.e. democratic and inclusive procedures) that may be less efficient than other technocratic and non-inclusive procedures.
- (9) Therefore, EAs tend to favour technocratic and non-inclusive procedures for achieving their desired outcomes.
Gabriel again uses a thought experiment to support the argument:
Participation: Some villages need help developing a water and sanitation system to combat the spread of waterborne parasites. You can fund one of two projects that help them in this regard. The first will hire a group of contractors to build the system - something they have done successfully in the past. The second will work with members of the community and help them build and develop the system themselves. This has also worked in the past, but because villagers are not experts in this area of construction, the systems tend to be less functional.
The complaint is that EAs would naturally choose the first project because it is the more effective of the two. But the second project might have numerous advantages that go unappreciated by the standard EA methodology. It values the agency and autonomy of the villagers; it allows them to build capacity and understanding; and it can assist with the acceptability and perceived legitimacy of the intervention.
This objection applies at the national scale too. There are concerns that large-scale philanthropic projects subvert democratic processes in favour of technocratic solutions, and thereby worsen the governance problems in certain developing nations.
Gabriel thinks that EAs need to be more sensitive to this problem. They need to appreciate the importance of popular control over social outcomes and the value of strong, democratic decision-making procedures. It strikes me, however, that many EAs are already sensitive to this problem. Indeed, Will MacAskill’s book Doing Good Better opens with a lengthy critique of the ‘Playpump’. This was a device that allowed villagers to pump water by means of a children’s roundabout, the idea being that water could be pumped and children could play at the same time. The pump was a failure for several reasons, one of which (highlighted by MacAskill) is that nobody really consulted the villagers who were being given these things. Now perhaps MacAskill thinks that non-consultation was a problem purely because it led the inventors and promoters of the playpump to favour an ineffective intervention, but there is still some sensitivity to the value of more inclusive procedures on display.
As you can see, each of these criticisms is a variation on the same basic theme: EAs prioritise certain ways of assessing the value of charitable interventions, and this causes them to ignore or downplay something of importance. The response to each criticism is the same: either EAs maintain that it is right to downplay or ignore those things, or they must try to expand their metrics and methodologies to include them.