(Part one; part two; part three)
This is going to be my final post on the topic of effective altruism (for the time being anyway). I’m working my way through the arguments in Iason Gabriel’s article ‘Effective Altruism and its Critics’. Once I finish, Iason has kindly agreed to post a follow-up piece which develops some of his views.
As noted in previous entries, effective altruism (EA) is understood for the purposes of this series as the belief that one should do the most good one can do through charitable donations. EA comes in ‘thick’ and ‘thin’ varieties. The thick version, which is what this series focuses on, has three key commitments: (i) welfarism, i.e. it believes that you should aim to improve individual well-being; (ii) consequentialism, i.e. it adopts a consequentialist approach to ethical decision-making; and (iii) evidentialism, i.e. it insists upon favouring interventions with robust evidential bases.
Criticisms of this ‘thick’ version of EA can be organised into three main branches. The first argues that EA ignores important justice-related considerations. The second argues that EA is methodologically biased. And the third argues that EA is not that effective. I’ve looked at the first and second branches in previous entries. Today, I take on the third.
In many ways, this might be the most interesting — and for proponents of EA the most painful — criticism. After all, the chief virtue of EA is that it is supposed to be effective: it allows its adherents to claim that they are genuinely doing the most good they could do through their actions. But what if this is false? What if following the EA philosophy turns out to be relatively ineffective? There are three ways to flesh out this concern. I’ll go through them in turn.
1. Does EA neglect important counterfactuals?
Counterfactual analysis is at the heart of the EA worldview. Suppose you meet a homeless person on the street and they are begging for money. You have £1 in your pocket. You could give it to them, but the EA proponent will urge you to think it through. What could you have done with that £1 if you donated it elsewhere? What was the opportunity cost of giving it to the homeless person?
When you engage in this kind of counterfactual analysis, the EA proponent is confident that you’ll see that there were better ways to allocate that money. In particular, the EA proponent will suggest that there are charities in the developing world where the same £1 could have done orders of magnitude more good than it could do for that homeless person in your home country (assuming you live in a developed nation). You shouldn’t succumb to the proximate emotional appeal of your interaction with that person; you should be more rational in your decision-making. That’s the way you will do the most good. The choice, in other words, is between giving your money to the homeless person and giving it to GiveWell’s top-ranked charity. When you think of it in these terms, the rational (and most morally effective) choice becomes obvious.
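The opportunity-cost comparison above can be sketched as a toy calculation. The "welfare units per pound" figures below are purely hypothetical placeholders chosen to illustrate the "orders of magnitude" claim; they are not GiveWell estimates or real data.

```python
# Toy opportunity-cost comparison. The welfare-per-pound figures are
# hypothetical placeholders for illustration only, not real estimates.
options = {
    "homeless person (direct gift)": 1.0,   # hypothetical baseline
    "GiveWell top-ranked charity": 100.0,   # hypothetically ~100x more effective
}

donation = 1.0  # the £1 in your pocket

for name, welfare_per_pound in options.items():
    print(f"{name}: {donation * welfare_per_pound:.0f} welfare units")

# The counterfactually best option is simply the one with the
# highest welfare return per pound donated.
best = max(options, key=options.get)
print("Counterfactually best use of the £1:", best)
```

The point of the sketch is only that, once you frame the decision as a comparison of counterfactual outcomes, the choice reduces to picking the option with the highest expected return.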
This kind of counterfactual analysis doesn’t just apply to decisions about how to allocate your spare cash. It also applies to decisions about how to spend your time. Many altruistically-inclined people might like to spend their time and energy working for a worthy charitable cause. But the EA proponent will argue that this isn’t necessarily the best use of their talents. Again, you should ask: what will happen if I don’t work for that organisation? Will someone else fill in for me? Are there better things I could be doing with my time and energy? One of the leading EA organisations is 80,000 Hours, a charity that helps young people make career decisions. One of their more well-known bits of advice (which, to be fair, can be overemphasised by critics) is that many people should not work directly for charitable organisations. Instead, they should earn to give, i.e. take up a lucrative gig with an investment bank (or some such) and donate the cash they earn to worthy causes.
There is a seductive appeal to this kind of counterfactual analysis, particularly for those who like to be thoughtful and rational in their decision-making. But it cuts both ways. It can be used to call into question the advice that EAs give. After all, there are many possible counterfactual worlds to consider when making a decision. EAs emphasise a couple; they don’t emphasise all. In particular, they don’t often pause to ask: what would happen if I didn’t donate to GiveWell’s top-ranked charity? What would that counterfactual world look like?
This is where Gabriel makes an interesting argument. He doesn’t explain it in these terms, but it’s like a charitable variant on the efficient market hypothesis. In very crude terms, the efficient market hypothesis tells us that we can’t beat the market when making investment decisions. The market factors all relevant information into market prices already; we can’t assume that we have some informational advantage. This idea is often illustrated with the story of a man who sees a five-dollar note on the street. He’s tempted to pick it up, but his friend is an economist and a fan of the efficient market hypothesis. The friend tells him that the fiver must be illusory: if it really existed, it would already have been picked up.
When you think about it, the market for charitable donations could be susceptible to a similar phenomenon. If you see some apparently worthy charitable cause — such as GiveWell’s top-ranked charity — going under-funded, you should ask yourself: why isn’t it being funded already? After all, there are a number of very wealthy and well-staffed philanthropic organisations out there. If they are rational, they should be donating all their money to GiveWell’s top-ranked organisations. Your donations shouldn’t really be able to make any significant difference:
The Efficient Charities Problem: If large philanthropic organisations are trying to be effective, and if they accept that evaluators like GiveWell are correct in the advice they give, then they should be giving all their money to the top-ranked charities. This would mean that your individual decision to donate to those charities won’t make any (counterfactual) difference.
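The counterfactual logic behind the Efficient Charities Problem can be made concrete with a toy model. All the figures below (the funding gap, the large funder's budget, the individual donation) are hypothetical values invented for illustration, and the top-up behaviour is an assumed simplification of how a rational large funder might act.

```python
# Toy model of the Efficient Charities Problem. All numbers are hypothetical.
# Assumption: a large, rational philanthropic funder tops up the charity's
# remaining funding gap after individual donors have given.

def total_funding(gap, large_funder_budget, individual_donation):
    """Total funding received: individual gifts plus the large funder's top-up."""
    top_up = min(gap - individual_donation, large_funder_budget)
    return individual_donation + max(top_up, 0)

gap = 1_000_000          # hypothetical funding gap (£)
big_budget = 5_000_000   # hypothetical large-funder budget (£)

with_me = total_funding(gap, big_budget, 100)
without_me = total_funding(gap, big_budget, 0)

# If the large funder would have filled the gap anyway, my donation
# changes nothing about how much the charity ultimately receives.
print("Counterfactual difference of my £100:", with_me - without_me)  # 0
```

Note that the conclusion flips when the large funder's budget is smaller than the gap: then the individual donation does make a full counterfactual difference, which is exactly why the conditions built into the problem's formulation matter.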
How seriously should we take this problem? As you can see, there are a number of conditions built into its formulation. Each one of those conditions could be false. It could be that large philanthropic organisations are not trying to be effective (or are so irrational as to be incapable of appreciating effectiveness). This is not implausible. The efficient market hypothesis is also clearly vulnerable to this problem: people are not rational utility maximisers; investors constantly believe that they can beat the market. The same could be true for philanthropic organisations. That said, and as Gabriel points out, it is unlikely to be true across the board. There are at least some well-known philanthropic organisations (the Gates Foundation springs to mind) that do try to be rational and effective in their decision-making.
This leads us to the second possibility: they do not accept the evaluations given by organisations such as GiveWell. This is also not implausible. As noted in previous entries, there are reasons to question the value assumptions and methodologies used by such organisations (though GiveWell is responsive to criticisms). But this should not be reassuring to proponents of EA. The Gates Foundation is well-staffed and has a huge research budget. If they aren’t reaching the same conclusions as GiveWell and other EA organisations, then the counterfactual analysis of how to do the most good may not yield the clear and simple answers that EAs seem to promote.
There is, however, a third possibility. It could be that large philanthropic organisations do not donate to the top-ranked charities because they want to assist the EA movement. In other words, they want to grow EA as a social movement and they know that if they donate all their resources to those charities they would risk crowding out the people who want to do good and make it more difficult for them to identify other effective interventions.
Interestingly, Gabriel argues that there is some evidence for this third possibility. GiveWell has received considerable funding recently — enough to allow it to fully fund its top charities for a couple of years. But it has chosen not to do so, thereby retaining the need for ordinary and enthusiastic EAs to donate to those organisations. This may be a clever long-term strategy. It may allow the EA movement to grow to a scale where it can do even more good in the future. But if this is what is happening, it has two slightly disturbing features: (i) it undermines the EA claim that individual decisions really can make a big difference; and (ii) it rests its hopes on a speculative future in which EA achieves mass appeal, not on the goodness of individual charitable donations.
2. Does EA neglect motivation and overemphasise rationality?
This criticism was cut out of the final version of Gabriel’s paper (in response to reviewer comments) but he thinks (personal correspondence) that it is still worthy of consideration. The gist of it is that the EA movement underplays the psychological barriers to achieving the kind of social change they want to achieve.
As we just saw, it’s possible (maybe even probable) that the current goal of the EA movement is to change the societal approach to charitable giving. This wide-scale change will require a lot of people to change the way they think and act. In encouraging this change, the EA movement prioritises rational, evidential, counterfactual analysis. It highlights neglected causes, and urges its followers to do things that may seem initially counter-intuitive (e.g. earning to give). How successful is it likely to be in achieving this wide-scale social change?
Two problems arise. First, the EA movement may be underestimating the difficulty of sustaining a counter-intuitive lifestyle choice like earning to give. Some people may not be able to stay in a highly lucrative career, and others may find their attitudes altered by working in some ruthless and highly competitive industry like investment banking. They may consequently lose the desire to earn to give. Gabriel notes that EA proponents respond to this criticism by highlighting apparently successful individual cases. But he urges proponents to be cautious in using these examples. The members of the EA movement at present are largely self-selecting. They tend to be wealthy, largely male, highly-educated, non-religious and so forth. If EA is to succeed it will have to attract more adherents, and they are less likely to be drawn from this demographic. It may be more difficult for such people to sustain the compartmentalisation inherent in the earning-to-give mentality.
To be fair to EA proponents, I think the importance of the earning-to-give approach tends to be over-emphasised by their critics. The 80,000 Hours site does try to give career advice that is based on the individual’s characteristics. The earning-to-give approach is only really recommended for those with the right mix of attributes. Still, it has to be said that there is something in the logic of the EA position which seems to favour the suppression of one’s personal desires for the greater good. But this is a general feature of utilitarian ethical theories.
The other problem is that by emphasising cold rational analysis, EAs may be underplaying the importance of moral sentiment, particularly when it comes to creating a movement with mass social appeal. Again, the relatively self-selecting group that currently embraces EA may love to engage in detached, evidential assessment of charitable causes; they may love to second-guess their emotional reactions to charitable nudging; but others may need more emotional engagement before they come on board. This is something that EAs may need to ramp up if they want to achieve the kind of social change they desire.
3. Do EAs neglect the importance of systemic change?
We come, at last, to the most popular critique of the EA movement. This one has featured in virtually every critical piece I have read on the topic. Others have assessed it at length. I will only provide a rough overview here.
The gist of the objection is that EAs are too conservative and individualistic in terms of the interventions they promote. They focus on how individuals can make a difference through the careful analysis of their charitable decision-making. But in doing this they take as a given the systems within which these individuals operate. They consequently neglect the importance of systemic change in achieving truly significant improvements in the well-being of the global citizenry. In its most common form, this criticism laments the fact that it is the institutions of global capitalism that are responsible for the welfare inequalities that EAs are trying to mitigate through their actions. If we really want to improve things we must alter those institutions themselves, not simply try to make incremental and piecemeal improvements within those systems.
Critics go on to argue that EAs, through their methods of evaluation and their general philosophy, don’t simply ignore or downplay the importance of systemic change, they actually hinder it. One reason for this is that EAs are too responsive to changes in evidence when it comes to issuing recommendations. This means that they shift their interests and priorities over time. Achieving systemic change requires constancy of purpose. It means ignoring the naysayers and critics; ignoring (at least some) of the facts on the ground until you achieve the desired change.
Now, as I say, I think others have dealt with this criticism at admirable length. I’ll mention two possible responses. First, I think it is possible for the EA to acknowledge the critic’s point and argue that nothing in the EA philosophy necessitates ignoring the importance of systemic change. It is perfectly coherent for the EA to (a) think carefully about how their charitable donations could achieve the most good and pick the ones that do; and (b) think carefully about how to initiate the kinds of systemic change that might improve things even further. In other words, I don’t think that EA has the corrosive effect that the critics seem to believe it does. This is one of those cases where it seems possible to sustain both goals at the same time.
Second, I think proponents of systemic change often underestimate (or simply ignore) the moral risks involved in their projects. Not only is it likely to be difficult to achieve systemic change, it is also uncertain whether the outcome will be genuinely morally better. Communist reformers in the 20th century presumably thought that their reforms would create a better world. It seems pretty clear that they were wrong. Are proponents of systemic change in the institutions of global capitalism sure that their preferred reforms will make the world a better place? I think there is reason to be cautious.
Again, this isn’t to dismiss the importance of systemic change, nor to reject the value of utopian thinking; it is simply to suggest that we should inject some realism into our aspirational thinking. This applies equally to those who think EA will achieve dramatic changes in global well-being.
Okay, that brings me to the end of my contribution to this series. I’ll let Iason himself have the last word in the next entry.