Date: February 14, 2024
The Centre for Effective Altruism defines EA as
both a research field, which aims to identify the world’s most pressing problems and the best solutions to them, and a practical community that aims to use those findings to do good.
Freddie deBoer says this definition is almost tautological (h/t ACX):
Who could argue with that! But this summary also invites perhaps the most powerful critique: who could argue with that? That is to say, this sounds like so obvious and general a project that it can hardly denote a specific philosophy or project at all. The immediate response to such a definition, if you’re not particularly impressionable or invested in your status within certain obscure internet communities, should be to point out that this is an utterly banal set of goals that are shared by literally everyone who sincerely tries to act charitably . . . Every do-gooder I have ever known has thought of themselves as shining a light on problems that are neglected. So what?
Couched in these terms, effective altruism does sound universally appealing, and yet not everyone who's heard of EA identifies with it. So is the definition really too obvious to mean anything?
I don’t think so. Lots of people say they want to do the most good. But if you dig a bit deeper into how this definition is operationalized, you find that EA has a uniquely powerful framework for evaluating charity. Let’s run down a few of the principles that make up that framework:
Most people donate to support charities that tug on their heartstrings, organizations that help people in their community, or whoever knocks on their door asking for money. This is all well and good, so long as you’re not under the illusion that you are thereby “doing the most good”.
Donations to almost any charity would help human flourishing, but given limited resources, it’s crucial to direct your dollars to the charities that are most effective. The one that does the most good probably isn’t one you already have an emotional attachment to.
It’s common for people to ask whether an organization they’re considering supporting is doing good work, and sometimes how much of its money goes to overhead. It’s much rarer to demand statistics on impact per dollar, let alone to comparison-shop charities on those metrics. Humans are notoriously bad at perceiving the scope of a problem: when making donations, we picture the abstract worthiness of the cause instead of actually running the numbers.
EA orgs (well, neartermist ones) do a better job of quantifying their impact than traditional charities. The Against Malaria Foundation tracks every net distribution it makes, so donors know exactly what their donation is accomplishing. Meta-charities like GiveWell scrutinize the work of many charities to calculate how much funding each should get.
“All human lives have equal value.” This is a belief that many people profess but don’t act consistently with. If the life of a child dying of malaria in Uganda can be saved for less than 1% of what it costs to save the life of a child dying of cancer in the United States, why do people donate to St. Jude rather than the Against Malaria Foundation? Humans inherently care more about their local communities, and I don’t think that’s always a bad thing, but few would explicitly endorse a 100x difference in the implied value of a life.
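To make that gap concrete, here’s a minimal back-of-the-envelope sketch. The dollar figures are hypothetical placeholders chosen to illustrate a roughly 100x spread, not actual estimates for any particular charity; the point is just the arithmetic of comparing cost per life saved.

```python
# Hypothetical, illustrative figures only -- not real cost-effectiveness estimates.
cost_per_life_saved = {
    "malaria bed nets": 5_000,              # placeholder order-of-magnitude figure
    "pediatric cancer treatment": 500_000,  # placeholder order-of-magnitude figure
}

cheapest = min(cost_per_life_saved.values())
for intervention, cost in sorted(cost_per_life_saved.items(), key=lambda kv: kv[1]):
    print(f"{intervention}: ${cost:,} per life saved "
          f"({cost / cheapest:.0f}x the cheapest option)")
```

With these placeholder numbers, a dollar given to the cheaper intervention goes about 100 times further toward saving a life, which is exactly the kind of comparison most donors never run.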
Besides global health, the main EA cause areas are the welfare of future generations and of animals. These might seem speculative or niche, but effective altruists have reasoned that the enormous scale of what’s at stake makes them worthy of our consideration.
Something that’s not explicit in these principles but still a crucial part of the EA way of thinking is a willingness to entertain unusual ideas and arguments. The core ideas of EA are simple, but their implications are anything but.
When most people are led through a series of arguments that produces an unintuitive conclusion, they tend to reject the conclusion in favor of a common-sense baseline (or “prior”). EAs, by and large, are the people who don’t. They will scrutinize each step in the chain of reasoning and, failing to find an inconsistency, might reorient their whole life or worldview around the conclusion. This is what is meant by “taking ideas seriously”.
On the margin, most people could use a lot more of this skill. Some EAs I know go too far in this direction for my taste, but how can I say for sure that they are wrong? Sometimes crazy ideas turn out to be true.
Sometimes they’re just crazy. But the only way to have new ideas is to allow yourself to think far outside the mainstream. If the ideas were obviously true, they wouldn’t be neglected. And similarly, if EA, fully spelled out, were obvious, everyone would already be donating to EA charities. It’s like Moneyball: every team wants to win, but only some take a serious, quantitative, sabermetric approach.