Here are some common criticisms of effective altruism from within the movement that I disagree with. I won’t go into great detail about why I disagree with them here, but might do so later regarding individual points, depending on what feedback I get. The arguments I give should be seen more as brief explanations of why I have the intuitions I have than as attempts at watertight justifications.
It’s a judgement call how common these criticisms are. Maybe some of them are less common than it seems to me. Exactly how common they are might not matter massively for my purposes.
There are also other criticisms of effective altruism that I don’t address here. I haven’t taken a systematic approach to choosing which criticisms to consider, but have picked a few I’ve seen online or heard in private conversations.
1. EA has stagnated
Some claim that EA is stagnating, but I don’t think that’s true. On the contrary, EA has, in my view, become more intellectually impressive in recent years, thanks to new organisations like the Global Priorities Institute. There have also been many other positive developments in EA, as I outlined in a previous post.
2. EAs should defer less to the EA consensus
You often hear that EAs should defer less, and think more for themselves. It’s a complex question with many nuances - obviously we sometimes defer too much, and sometimes too little. However, it is useful to have bottom-line conclusions, and I don’t think “EAs should defer less” is the right one.
One reason for this is that I have a positive view of the EA consensus (see below): it’s usually very difficult for individuals to beat it. Another is that people have a natural tendency towards overconfidence and towards defending their pet projects in the face of evidence to the contrary - suggesting that they should defer more rather than less. Personally, I much more frequently meet EAs who defer too little than EAs who defer too much (e.g. regarding cause prioritisation, or their own projects). However, this is inevitably a subjective judgement.
That said, it’s good if people independently try to come up with their own considerations - but when they decide how to act, they shouldn’t put special weight on those considerations just because they came up with them themselves.
3. EAs listen insufficiently to non-EA experts
It’s sometimes claimed that EAs listen insufficiently to non-EA experts. However, I think that when EAs have challenged the non-EA expert consensus, they’ve often (though not always) been proven right. That was arguably true of Covid-19, for example, where EAs raised the alarm early on. (It also seems that some EAs correctly predicted the February-March stock market crash, and benefitted from that.) Likewise, EA investments in the crypto markets, which conventional investors were largely suspicious of, have been successful.
Also, in many of the domains that EA is most focused on (such as existential risk), many of the leading researchers (as measured by conventional metrics) are arguably EAs. And leading EAs have many other conventional signs of competence, meaning that we should have a high prior that they compare well with other thinkers.
Thus, while I think that individual EAs normally should defer to the EA consensus, the EA community shouldn’t be overly deferential towards the non-EA consensus.
4. EAs shouldn’t try to work for EA organisations to the extent that they do
Another common claim is that EAs are too keen on working for (established) EA organisations, as opposed to working independently. But I think that for many EAs, this choice makes a lot of sense. The established EA organisations have a major impact, and if you contribute to that, you can have a large impact as well. My sense is that it’s typically hard for people to match that impact if they work independently (even though there are important exceptions).
One piece of supporting evidence is that most people who can get jobs at established EA organisations choose to do so over working independently (or so it seems to me). Granted, there may be other explanations for that (status, job security), but my sense is that it’s largely because they accurately believe that they maximise their impact through working for an established EA organisation.
Relatedly, I think that when EAs run projects of their own, they are too keen to use their own ideas, as opposed to ideas that others have come up with. I think we should make greater use of repositories of project ideas (ranked in order of priority) that anyone can run with. The person who generates an idea and the person who executes on it need not be the same person.
5. EA is too hierarchical
A related criticism is that EA is too hierarchical, and that influence should be more evenly distributed. But as I recently argued, I think that competence (of the relevant sort) isn’t equally distributed, and that there is therefore a case for organising EA epistocratically (in fact, I think that EA is so organised, by and large). Also, some degree of centralisation (e.g. regarding EA public goods and infrastructure) makes sense for pure coordination reasons (this is separate from the epistocracy point).
***
It seems to me that there are some noteworthy patterns to these criticisms. The claims that EA has stagnated intellectually, and that EAs should listen more to outsiders, are effectively humble claims: they say that EA isn’t as good as it seems, especially relative to non-EAs.
The claims that EAs should think more for themselves, that they shouldn’t necessarily work for EA organisations, and that EA is too hierarchical, rather concern the relationships between EAs. They are effectively egalitarian claims. Relatedly, these claims also celebrate independence and freedom.
Since humility, egalitarianism, independence, and freedom all are concepts we have positive associations with, I think there is some reason to believe that people will be biased in favour of the above five claims. In fact, it seems to me that people often have a quite emotional and moralising tone when they argue for claims like “EAs defer too much” or “EA is too hierarchical”.
It also seems to me that even though EAs often say that EAs should defer less, that EA should be less hierarchical, and so on, they by and large don’t act on these claims. That may be further evidence that people have an emotional bias in favour of these claims. The claims sound good, so people make them, but when they have to act, they’re more focused on what’s actually most effective. And then they’re more inclined to defer (and to think that others should defer), more inclined to defend hierarchies, and so on. This may be related to our finding (in a recent paper) that people are more inclined to maximise utility, even when that entails painful prioritisations, in one-off emergencies than when setting general rules about such emergencies. The closer you are to action, the more you focus on what’s of ultimate importance, and the more inclined you are to kill your darlings, even if it’s painful.
So rather than being emotional about these five claims, I think we should discuss them in a detached way, to the extent that we can. We should neither embrace nor reject claims because they feel good, but consider them as dispassionately as we can. Questions like how deferential EAs should be and how hierarchical EA should be ought to be decided by careful scrutiny of the evidence.
Slightly edited 5 January 2021 (added the third paragraph) and 20 March 2022.
Thanks to Pablo Stafforini, John Halstead, Ryan Carey, Bastian Stern, and Kirsten Horton for comments.