A friend sent me an article criticizing effective altruism, one that seemed to level very poor critiques. I thought I’d respond to it.
The first veiled critique comes here:
Any movement contains a huge diversity of views; Effective Altruism’s views on what its own priorities should be are no exception. For example, GiveWell’s Maximum Impact Fund splits donations between the causes of malaria prevention, vitamin A deficiency, childhood vaccination, deworming, and direct cash transfers. Giving What We Can gives priority to issues like factory farming, climate change, and promoting beneficial AI development. 80,000 Hours lists, as its top two priorities, more research into what Effective Altruism’s priorities should be, and promoting Effective Altruism itself — which may or may not reassure anyone wondering whether Effective Altruism’s organizational structure is pyramid-shaped.
This claim is just bizarre. Effective altruism devotes a very small share of its resources to EA promotion. If we want to do the most good, one way to do so is to create more EAs. Thus, it’s good to focus on EA promotion, the same way it’s good for a chess club to promote itself sometimes, rather than stick to playing chess.
As 80,000 Hours points out:
“For every $1 Giving What We Can has spent on creating and growing its community, as of 2014 its members had already given more than $6 to effective charities.”
“The combined budgets of organizations that work on building effective altruism is around $25 million.”
This is a very small part of EA’s overall spending.
In critiquing Effective Altruism, I do not intend to argue here that Effective Altruism’s preferred causes are bad ones, nor that any particular action which Effective Altruism recommends is harmful. I am also not saying that people should not try to do a lot of good, nor that they should not donate money to charitable causes, nor that they should not make career choices in virtue of concern for the common good. Indeed, I understand the motivation behind Effective Altruism all too well. In her book, “Trick Mirror,” essayist Jia Tolentino captures perfectly a feeling that I’ve had for as long as I can remember: “[T]he choice of this era is to be destroyed or to morally compromise ourselves in order to be functional — to be wrecked, or to be functional for reasons that contribute to the wreck.” What Effective Altruism promises is a third option: a lifestyle of levelheaded thinking and ethical action which shields both the doer and recipient of altruism from the vicissitudes of the modern world.
It is in this promise where Effective Altruism falls short. Effective Altruists can certainly do, and have done, a lot of good. But I argue that Effective Altruism is not true altruism, and that even if we recognize the positive impacts of Effective Altruism, we must also acknowledge that the grounds on which it defines its own effectiveness are incredibly shallow.
Consider two states of affairs.
World 1: Hundreds of thousands more children die of disease, but people aren’t deluded about the trueness of their altruism.
World 2: Hundreds of thousands fewer children die of disease, but the actions taken aren’t true altruism as narrowly defined.
World 2 seems clearly better if we care about bettering the world, rather than maintaining strange moral purity. Real altruism or not, I’ll take anything that prevents lots of children from dying and makes the world much better.
A litany of other criticisms of Effective Altruism are located only a short Google search away from the interested reader. The criticisms I’ll highlight here are the following and are to some extent, interconnected. First, philosopher Amia Srinivasan points out that Effective Altruism is a hyper-individualistic movement which places too much emphasis on private charity, and does so to the exclusion of addressing structural causes of suffering at the level of communities, classes, and states.
This objection is bad for reasons I’ve discussed here.
Brian Berkey points out that the institutional critique doesn’t apply: the effective altruism movement does do research into the cost-effectiveness of institutional reform; it just often finds it’s less effective. However, in cases where it’s more effective, EA does systemic reform.
EA already does work on systemic change. Nick Cooney of the Humane League is an effective altruist who got McDonald’s, Dunkin’ Donuts, General Mills, Costco, Sodexo and many more to adopt cage-free egg policies. Lincoln Quirk dramatically reduced the costs of remittances, which is valuable given that remittances send far more money overseas than foreign aid flows. Scott Weathers lobbied for the Reach Every Mother and Child Act, which would allow USAID to look for evidence before spending money overseas. Effective altruists have also pushed for criminal justice reform. The distinction is that, unlike other movements, EA looks for systemic reforms that work rather than ones that make us feel cool and radical.
As Fodor points out, it “is quite plausible, indeed I think history indicates overwhelmingly probable, that even if all EAs on the planet, and ten times more that number, denounced the evils of capitalism in as loud and shrill voices as they could muster, that nothing whatever of any substance would change to the benefit of the world’s poor. As such, if our main objective is to actually help people, rather than to indulge in our own intellectual prejudices by attributing all evil in the world to the bogeyman of ‘capital’, then it is perfectly reasonable to ‘implore individuals to use their money to procure necessities for those who desperately need them’, rather than ‘saying something’ (what exactly? to whom? to what end?) about ‘the system that determines how those necessities are produced’.”
As Alexander argues, if everyone donated 10% of their income to effective charities, it would end world poverty, cure major diseases, and start a major cultural and scientific renaissance; but if everyone became devoted to systemic change, we would probably have a civil war.
There are already organizations that do a lot of research into politics, such as the Brookings Institution. If EA were to become more political, it would likely become a Brookings Institution or Cato Institute-style group.
Political campaigns generally don’t result in particularly radical change; it’s unlikely that capitalism will be eliminated in the near future.
EAs may work in opposite directions if they’re split on issues of systemic change, and their efforts would cancel out. Thus, rather than having a movement built around influencing institutions in opposite ways, we should just do good things.
EAs do systemic reform. 80,000 Hours is the top EA career-advising organization, and one of its top recommendations is going into government. Other highly ranked paths include journalism, being a public intellectual, earning to give, and research. EA definitely does systemic reform.
The fact that there are already so many other people working on systemic reform means that EAs working on systemic reform wouldn’t accomplish very much and would be canceled out.
EA is largely about communal efforts to better the world that require lots of actors.
Second, Effective Altruists are biased towards the sorts of suffering which lend themselves to easy quantification, maligning ineffable impacts like culture loss and self-determination.
This is once again false. For one, culture loss and self-determination can be cashed out in terms of harm. While it’s difficult to compare deaths to debilitating disease, EAs do that frequently, given that it’s the only way to make decisions about what does the most good. There’s no reason the same couldn’t be true of culture loss.
Secondly, EA focuses a lot on things that are hard to measure, including existential risks, animal suffering, AI, bioengineering, nuclear war, and many others. EA does not focus only on things that are easy to measure. If you go to the 80,000 Hours list of job recommendations, many of them don’t have clear, easily quantifiable impacts.
The main reason EA doesn’t focus on these things is that they’re generally not the best way to do good. The cost to save a life by giving to the Malaria Consortium is about $4,500. There’s no culture-loss-preventing charity that does as much good.
Third, like other humanitarian movements, Effective Altruism’s viewpoint towards the developing world is that of western saviorship, exacerbated by the fact that a significant majority of Effective Altruists are English-speaking white men. Although I do not focus on them in this article, I view these as valid criticisms which pose another sort of pressing problem for Effective Altruism. Naturally, there are reciprocal responses and dialogue surrounding these issues from within the movement.
This objection isn’t very good.
The motivations of people bettering the world don’t matter. If a person saves another’s life, even if they have saviorist motivations, it’s still good.
As Karnofsky points out, criticisms of foreign aid never apply to the types of foreign aid done by effective altruists. The critics focus on government-to-government aid programs rather than organizations that provide direct aid to people; they generally fail to account for the number of people affected and the time elapsed (only about $40 of aid is spent per person per year in Africa); and they don’t look at the most effective programs, such as the ones that eradicated smallpox, which would have a cost-effectiveness of roughly one life saved per $40,000. More evidence comes from David Beckmann, president of Bread for the World, who reports: “There are fewer hungry people in the world today than 25 years ago. The proportion of undernourished people in developing countries has dropped from one-third to one-fourth. Since 1960, adult literacy in sub-Saharan Africa has increased by over 280 percent; infant mortality has declined in East Asia by more than 70 percent; the under-five mortality rate has declined by over 75 percent in Latin America and the Caribbean; and life expectancy has risen by 46 percent in South Asia. Development assistance has contributed to these advances.”
Ted Rappleye of the Borgen Project additionally explains, citing the Bill & Melinda Gates Foundation: “Again, the idea that foreign aid has no effect is simply not true. In the 2014 Annual Letter, the Bill & Melinda Gates Foundation explain how much international development programs help the poor. Since 1960, over 1 billion people have escaped extreme poverty. Global health programs have done incredible work to stop disease; over 2.5 billion children have been immunized against polio since 1988, and in 2013 fewer than 400 cases were reported worldwide. Given current trends, extreme poverty (living on $1 per day) will end by 2035, and child mortality will drop to U.S. levels by that time as well.”
Even critics like Easterly, Deaton, and Moyo agree about the efficacy of these aid programs.
Ozymandias points out specific features of EA that make it not colonialist, writing: “First, effective altruists tend to emphasize evidence. It’s all too easy to make assumptions that people in developing countries really want what you think they ought to want, regardless of their stated preferences. It’s all too easy to blunder into some complex system you don’t understand and mess everything up, because you have a PhD and none of these people have a fourth-grade education and how could they possibly know more than you do? It’s all too easy to tell yourself a beautiful story about the grateful natives and ignore the facts on the ground. It’s all too easy to decide that clearly what would really benefit people in the developing world is whatever benefits you.”
Second, “…effective altruists care a lot about autonomy. Give Directly does what it says on the tin: it gives poor people in Africa unconditional cash transfers. Much to the surprise of burden-carrying white men everywhere, it turns out that if you give people money they make basically reasonable decisions about what to spend it on: they buy livestock, furniture, and iron roofs. It turns out that people generally know their own needs better than you do, and you should generally trust them instead of assuming that you know better. Consider the effective altruist proverb: if your intervention cannot outperform giving poor people cash, you should just give them the cash. This sort of essential respect for the preferences of people in the developing world speaks well for our ability to actually improve things, instead of just making ourselves feel better.”
These points are spelled out in greater detail in the article. So I think this movement, which has dramatically improved lives, is not undermined by the relatively lackluster objections rattled off above.
Charity can cut both ways, so let’s take a charitable view of Effective Altruism. That is, let’s try for now to look past incredulous-stare-inducing arguments from the movement, such as Chapter Eight of the book, “Doing Good Better,” by William MacAskill (self-proclaimed co-founder of Effective Altruism), which quite literally defends sweatshop labor;
This is just well poisoning. If it’s being looked past, why is it included in the article? The claim that MacAskill defends sweatshop labor is misleading: he thinks it’s good to buy from sweatshops because when people don’t buy from them they close, and the options that workers turn to are worse. He has a significant amount of empirical data for this, and the view is shared by the left-leaning Nobel Prize-winning economist Paul Krugman. The basic case is that if sweatshops go out of business, people don’t just magically get good jobs; they turn to worse jobs. Absent an argument against buying from sweatshops, this is just an appeal to personal incredulity in the face of robust data.
But also, even if MacAskill is wrong, the fact that a prominent effective altruist holds a false view is not an argument against EA.
let’s try to look past the movement’s apparent acquiescence to structures of domination
What? What structures of domination does it acquiesce to?
and the overwhelming number-crunching involved in the perpetual triage of quality-adjusted-life-year (QALY) maximization.
Rational choice theory has shown that any coherent agent can be represented as having a utility function; absent such coherence, disastrous things happen (see Dutch book arguments). So this is a very bad objection.
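For readers who haven’t seen a Dutch book before, here is a minimal numeric sketch of the kind of exploitation these coherence arguments describe. The rain example and all the numbers are my own illustration, not something from the article:

```python
# Toy Dutch book illustration (hypothetical numbers).
# An agent prices a bet paying $1 on an event at their credence in that event.
# If their credences in "rain" and "no rain" sum to more than 1, a bookie can
# sell them both bets and the agent loses money no matter what happens.

credence_rain = 0.6
credence_no_rain = 0.6   # incoherent: 0.6 + 0.6 > 1

price_paid = credence_rain + credence_no_rain  # agent pays $1.20 total for both bets

for it_rains in (True, False):
    payout = 1.0  # exactly one of the two bets pays $1, whichever way the weather goes
    net = payout - price_paid
    print(f"rains={it_rains}: net = {net:+.2f}")  # -0.20 either way: a guaranteed loss
```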
But even ignoring that, there’s a clear reason for people trying to better the world to quantify how much better particular actions make the world. While no such metric is super precise, rough metrics are better than no metrics at all, because they’re the only way to compare different actions. We may not know exactly how many people cured of intestinal worms is as good as preventing one malaria death, but we know the number isn’t 0.001 and we know it isn’t a billion. To make decisions at the margin, we need a metric. This case was spelled out in greater detail here.
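As a toy sketch of why even a rough common metric helps, consider the following. Every number below is made up for illustration; none of it is real charity data or GiveWell’s figures:

```python
# Toy ranking of interventions on a rough common scale.
# All numbers are made-up placeholders, not real charity data.

# Assumed exchange rate: how many deworming treatments we judge to be roughly
# as valuable as one averted death. We don't know it precisely, but we know it
# isn't 0.001 and it isn't a billion.
TREATMENTS_PER_DEATH = 100

def death_equivalents(deaths_averted=0.0, treatments=0.0):
    """Convert mixed outcomes into one rough common unit."""
    return deaths_averted + treatments / TREATMENTS_PER_DEATH

budget = 10_000  # hypothetical donation in dollars
options = {
    "bednets":   death_equivalents(deaths_averted=budget / 5_000),  # assume ~$5,000 per death averted
    "deworming": death_equivalents(treatments=budget / 1.0),        # assume ~$1 per treatment
}

for name, value in sorted(options.items(), key=lambda kv: -kv[1]):
    print(f"{name}: ~{value:.0f} death-equivalents per ${budget:,}")
```

With these toy numbers the ranking doesn’t change even if the assumed exchange rate is off by a factor of ten, which is the sense in which a rough metric beats no metric at all.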
Perhaps it’s possible that Effective Altruists need not accept the conclusions at the logical extremities of the movement.
Perhaps it’s possible?? Of course it’s possible!! One can disagree with most EAs about most things and still be part of the movement as long as they want to better the world as much as possible. Agreeing with Will MacAskill about sweatshops is not a rite of passage for being an effective altruist.
Perhaps, some will argue, Effective Altruism is merely a useful practical framework for the modern person, driven by altruism and a desire to do good in the face of the world’s complexities.
The problem with this rhetorical move is that Effective Altruism’s deeply conditional way of viewing moral goodness is at odds with our general concept of altruism. Altruism is often defined as selfless action for the wellbeing of others. We should be careful to note that not all selfless actions are altruistic, and altruism need not be contrary to a person’s best interests (indeed, some philosophers argue that altruism is in everyone’s best interest, including that of the doer!). The reason why selflessness is still a part of the definition of altruism is because for the altruistic person, their concern for others’ well-being is great enough to overtake their self-interest, if the two should come into conflict. Well-being resists narrow definition: it could refer to a wide range of things from improved life expectancy to flourishing and self-actualization. Take as an example of altruism a person who, under conditions of food rationing, eats less so that others can have more. This person is acting altruistically by our simple definition because she knows that nutrition is crucial to any kind of well-being for people, and will sacrifice some of her own food towards this end. If you agree with me that this suffices as an example of altruism, then note what is not a part of the example: the person in the example does not impose on herself the condition of only giving food to someone for whom the food would cause the largest relative increase in well-being, yet we regardless still view her as altruistic.
This reveals that Effective Altruism’s condition of optimizing well-being is not part of our organic concept of altruism. In particular, Effective Altruism is not actually altruistic because the purportedly altruistic choices the movement prescribes — donations, career choices, etc. — are all done conditionally. For example, an Effective Altruist making a donation to charity does so only on the conditions that the donation has some expected payoff in terms of QALYs, a comparative advantage over donations to different causes, and ultimately on the condition of the Effective Altruist maintaining both their moral high ground and immunity from risk. MacAskill’s own argument in “Doing Good Better” illustrates this perfectly: “Imagine saving a single person’s life: you pass a burning building, kick the door down, rush through the smoke and flames, and drag a young child to safety … You’d be a hero. But we [Effective Altruists] can do far more than that. According to the most rigorous estimates, the cost to save a life in the developing world is about $3,400 (or $100 for one QALY). This is a small enough amount that most of us in affluent countries could donate that amount every year while maintaining about the same quality of life.”
Even if we grant that EA isn’t real altruism, who cares? If it saves the lives of hundreds of thousands or millions of people, improves the lives of billions of factory-farmed animals, and reduces the risk of the end of the world, who the hell cares whether it’s genuine altruism? Altruism isn’t a magic ingredient required for actions to be good; altruism is good because it betters the world.
However, EA seems to be the purest type of altruism. EA doesn’t focus on helping others just so we can feel good. It looks hard at how we can do the most good, because it cares equally about other people. If you could either save the life of 1 person or 10 people, you shouldn’t just save the one person to be a true altruist. You should do what would make the world better. Caring about everyone equally and trying to help them as much as possible is the purest type of altruism.
This example is designed to attract people to the cause of Effective Altruism by emphasizing its convenience, but therein lies the problem. We would call the person who runs into a burning building to save a child “heroic” and “altruistic” in virtue of precisely a set of conditions which would cause a doctrinaire Effective Altruist to advise people not to save children in burning buildings. Someone who rushes into a burning building does not stop to calculate the expected QALYs saved or consider the counterfactual; although an Effective Altruist would think this irrational and arbitrary, it does not matter to the person running into the building how old the child is or whether the child has a disability (QALYs decrease as people age and are lower in the case of disability — I address this later). Second, more importantly, the person who runs into the building does not even know if they will save the child at all, but are nonetheless willing to put themselves at risk. Third, the person knows that there are things which cause suffering on a scale far more vast than a house fire, as we all do, and does not let this distract them from acting altruistically on behalf of the child. MacAskill makes an improper comparison in saying that Effective Altruists are far more heroically saving children: maybe Effective Altruists affect more children than the person who runs into one building, but these two actions cannot be compared just with respect to scope because they are different kinds of actions altogether. Even worse for Effective Altruists, adding the condition of optimization onto altruistic action may undercut that which makes actions altruistic in the first place. A less-charitable reading of MacAskill’s argument, quoted earlier, reads MacAskill as bragging when he writes, “we can do far more than that.” If Effective Altruism’s focus on optimization plays into competitiveness, flattering oneself, or derision of others who ‘do less good,’ then Effective Altruism is not selfless but self-aggrandizing, and therefore goes against altruism.
I, much like the children who will not die of malaria as a result of EA action, could not care less about the motivations of EAs. EA would still be good if the motivation of every member of the movement was to follow some bizarre satanic ritual that would bring Satan into the world. If it saves lots of lives and makes the world much better, it’s good. Who cares if EAs feel boastful?
The claim that EAs would advocate not running into a burning building is false. The way to make the world best is not to bust out a calculator and do the utility math every time you face a decision. The world goes best if you’re the type of person who would run into a burning building.
Other than claiming that EAs might feel like they’re even better than firefighters who save kids, there’s literally no argument here. Psychoanalysis about the motives of EAs does not undercut the goodness of the movement.
I also think it’s obvious to anyone who has interacted with EAs that they’re some of the nicest, most humble people around. This psychological claim is just nonsense.
Now, if the point here is to object to the claim that it’s better to save a hundred people from malaria than one person from a burning building, then the author is mistaken. If you could either save 1 person or 100 people, you should save the 100, because each person has equal value and 100 is more than 1. (There are more arguments that can be given supporting this, but it’s pretty obvious.)
Some will still argue that the movement’s focus on effectiveness is worthwhile. After all, we are limited beings in a world with finite time and resources. To this I say: Effective Altruism defines the effectiveness of its charitable effort with respect to a certain extremely narrow standard. If we call that standard into question — which I will argue we have good reason to do — then what we are really questioning is the effectiveness of Effective Altruism as a whole.
As I mentioned earlier, Effective Altruism uses quality-adjusted-life-years, or QALYs, as a metric for quantifying good which could be done by various charitable interventions. Conversion between dollars spent and QALYs pervades Effective Altruist reasoning; the pressing question when it comes to optimizing where to spend one’s money and time is which cause or charity saves the most QALYs per dollar. In calculating QALYs, the quality of life under a health state is determined by surveying populations on how desirable they would rate living in that health state for a year — so in short, QALYs are a descriptive tool which quantify the perceived desirability of a year of life as a measure of its quality. Effective Altruists, on the other hand, use the difference between amounts of QALYs as a normative or prescriptive metric to determine which causes to contribute to over others. For example, MacAskill makes a brief argument in “Doing Good Better” using QALYs to argue for contributing to someone’s blindness-preventing surgery over someone’s AIDS-treating antiretroviral therapy — surgery wins out, with 30 QALYs as opposed to therapy’s 6.5.
But, crucially, it is necessary that if a difference in the amount of QALYs provides moral reason for an Effective Altruist to value one cause over another, then they must agree that a year lived by a person at a lower number of QALYs is less morally valuable than a year lived by a person at a higher number of QALYs. This latter claim is different from and does not follow from the fact that people would prefer to live healthy lives — of course we all would like to be in perfect health. Effective Altruism conflates the desirability of a particular QALY with its worth and how morally important it is to give a person a chance to live out that year. It is a mistake for Effective Altruism to equate desirability with moral value, and this mistake is particularly evident in the case of chronic disability. To take a personal example, I am profoundly deaf in both ears, so the rest of my life by necessity contains strictly fewer QALYs than someone else who is like me in all other respects but not deaf. Given a scenario where only me or my hearing twin can survive, the Effective Altruist will say that it is a clear choice which one of us should live. I think this is not only wrongheaded but repugnant. Whether a condition is desirable is different from whether the lives of people living with that condition have equal moral value. And if the movement’s answer to the latter question is not a resounding yes, then Effective Altruism has gone terribly wrong.
The rationale for quality-adjusted life year conversion is fairly obvious: we need some way to compare improvements in quality of life against actually saving lives.
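For concreteness, here is a minimal sketch of the standard QALY arithmetic. The quality weights, durations, and costs below are hypothetical placeholders, not MacAskill’s or GiveWell’s figures:

```python
# Toy QALY arithmetic. Quality weights run from 0 (dead) to 1 (full health).
# All specific numbers are hypothetical placeholders for illustration.

def qalys_gained(weight_without, weight_with, years):
    """QALYs gained = improvement in quality weight times the years it lasts."""
    return (weight_with - weight_without) * years

surgery = qalys_gained(0.4, 0.9, 30)      # hypothetical surgery: 15.0 QALYs
treatment = qalys_gained(0.5, 0.9, 10)    # hypothetical treatment: 4.0 QALYs

# To choose between causes, the comparison is QALYs per dollar (costs also hypothetical).
print(surgery / 2_000)    # QALYs per dollar if the surgery costs $2,000
print(treatment / 4_000)  # QALYs per dollar if the treatment costs $4,000
```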
As for the claim about equal value, EAs don’t need to take a stance on it; it generally doesn’t come up much in EA reasoning. However, there is a simple knockdown argument for why it’s better to save better lives (even putting aside that it follows straightforwardly from utilitarianism, which has been defended elsewhere).
Suppose we could either save the life of a person who is 79 and will live one more year or one who is 40 and will live 40 more years. We should obviously save the life of the person who is 40. Saving my life when I’m 40 would be more important than saving it when I’m 79, because it enables more years of happy life.
So this leads us to:
1) It’s more important to save the lives of people who have more time left in their lives.
Now, consider a situation in which we can save a person’s life under two conditions:
A) Their per-day happiness is halved, but they’ll live two more years.
B) Their per-day happiness is unchanged, but they’ll live one more year.
It’s obvious that the importance of saving their life in these two cases is equal. After all, my life would be just as good if it were twice as good per day but half as long. If one denies this, then they’d have to accept that drawing out good experiences across more days makes one’s life better.
So from this we get:
2) A life with utility x per day is just as worth protecting as a life with utility 2x per day that lasts half as long.
So if we accept those two premises, we get the conclusion that it’s more important to save the lives of people with better lives. This is pretty intuitive: if you could either save a person who will be miserable, with a life barely worth living, or one who will have an excellent life, you should save the person with the excellent life.
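For those who like it compressed, here is the same argument in one line of algebra, under the simplifying assumption (my framing, not the article’s) that the value of saving a life can be written as a function V(q, T) of per-day quality q and remaining time T:

```latex
% Premise 1:  V(q, 2T) > V(q, T)     (more remaining time makes the save more important)
% Premise 2:  V(2q, T) = V(q, 2T)    (doubling daily quality and halving length is a wash)
% Chaining the two premises:
V(2q, T) \;\stackrel{(2)}{=}\; V(q, 2T) \;\stackrel{(1)}{>}\; V(q, T)
% so, holding remaining time fixed, the better life is more important to save.
```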
But again, this isn’t needed for EA analysis.
Perhaps Effective Altruists have ways besides QALYs of calculating the value of different causes. For example, in the charity GiveWell’s spreadsheet analyzing the cost-effectiveness of its Maximum Impact fund, dollars spent by particular charities are evaluated in terms of the moral value produced. The moral value of various options come from a spreadsheet cited throughout by GiveWell containing “moral weights” of people in different age brackets. The spreadsheet is devoid of units for the values listed and contains no rationale for the numbers (which aren’t in a range where it would make sense for them to be QALYs), so evidently Effective Altruists have figured out some alternative. Setting aside the details of comparing different causes in a fine-grained way, even if Effective Altruism finds a satisfactory alternative to QALYs, I still think that the movement’s focus on maximization of good is, at best, a tough pill to swallow, and at worst, groundless.
Lots of charities do very little good. For example, the Make-A-Wish Foundation improves one person’s life a little bit at a high cost. Absent comparing organizations, how do we make sure to give to ones that are thousands of times better, rather than to orgs like Homeopaths Without Borders (a real organization)?
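To make the comparison concrete, here is a minimal sketch of how a moral-weights-style cost-effectiveness comparison can work in principle. The weights, outcome counts, and costs are placeholders I made up; GiveWell’s actual spreadsheet uses its own values and many more adjustments:

```python
# Toy moral-weights cost-effectiveness comparison.
# All weights, outcome counts, and costs are made-up placeholders.

MORAL_WEIGHTS = {
    "death_averted_under_5": 100,      # arbitrary units of moral value
    "death_averted_adult": 80,
    "year_of_doubled_consumption": 1,
}

def value_per_dollar(outcomes, cost):
    """Sum moral-weighted outcomes bought for `cost` dollars, per dollar spent."""
    return sum(MORAL_WEIGHTS[k] * n for k, n in outcomes.items()) / cost

charities = {
    "A (bednet-style)": ({"death_averted_under_5": 20}, 100_000),
    "B (cash-style)":   ({"year_of_doubled_consumption": 300}, 100_000),
    "C (ineffective)":  ({"death_averted_adult": 0.02}, 100_000),
}

for name, (outcomes, cost) in charities.items():
    print(f"charity {name}: {value_per_dollar(outcomes, cost):.6f} value units per dollar")
```

The point is only that differences of this size are what make comparison worthwhile, not that these particular numbers describe any real charity.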
I found the following thought experiment in a Forbes article by Davide Banis. Banis (I believe, since I did not see it in my reading of the book) wrongly attributes the thought experiment to MacAskill’s “Doing Good Better.” Still, it logically follows from Effective Altruism’s principle of optimizing the good one can do. Return to the previous example of the burning building, and now suppose that you know with certainty that there is a child in one room and a Picasso in another room. Effective Altruism argues that donating the countless millions you would gain from the sale of the Picasso would save countless more children than the one child in the other room, and that this choice would lead you to donate dozens of times more than you would otherwise be able to give in your lifetime. So run with the Picasso and make haste lest the fire damage it.
Most people, including me, will find this both counterintuitive and morally troubling. On one hand, an Effective Altruist might argue that you should still save the child. Perhaps the child could do good which equals or surpasses the good that the sale of the Picasso would do, factoring in various guesses as to the expected price of the painting, the probability and magnitude of the good the child might do, and so on. Of course, this argument would be highly speculative, which reveals a deeper problem: Effective Altruism’s claim to its own effectiveness depends on one’s trust in the assumption that reliable prediction of highly specific future circumstances lies within humans’ powers of reasoning.
On the other hand, an Effective Altruist might bite the bullet, arguing that their version of doing good simply entails that some sacrifices must be made along the way: to do the most good requires us to dispense of options promising less good. Though this seems to cater to what the average person wants — don’t we all just want to do good? — it is deeply misleading because Effective Altruists have equated the question of whether an action is a good one with the question of whether an action maximizes some component of what is good. Twentieth-century philosophers of virtue ethics such as Anscombe and Foot discuss the notion of “thick” and “thin” concepts: thin concepts, such as “right/wrong,” “good,” and “obligatory,” are merely evaluative, whereas thick concepts like “rude,” “pious,” and “open-minded,” do descriptive work on top of making an evaluation. When the average person is asked what they think moral goodness is, they will most likely gesture towards a diverse set of thick concepts without saying that goodness is reducible to any one of the concepts. An Effective Altruist, in contrast, is forced by their assumption that goodness is scarce and that tradeoffs must be made to zero in on only a few thick concepts (such as hedonic pleasure) and say that goodness is reducible to the maximization of that quality. Under the guise of satisfying a general and robust picture of goodness, Effective Altruism smuggles in a niche definition of goodness as maximization of a particular small set of properties. This takes advantage of the natural plurality, and therefore ambiguity, of the average person’s view of moral goodness.
EA doesn’t require taking any stance on this. However, I’ve already argued in a blog post here that one should do it.
As I mentioned earlier, I’m not arguing that people shouldn’t donate money to charity, nor that the actions which Effective Altruism recommends are bad ones in themselves, but rather I am arguing that the mindset or framework which Effective Altruists use to conceive of moral concepts like altruism and goodness is a flawed one. And if I’m correct, this means that people who just want to be altruistic and to do good should find cold comfort in Effective Altruism’s message. Effective Altruism is like a broken moral machine in that it runs over the wrong set of data and reads the data in a lopsided way. Unfortunately, even the best intentions of its user will not change its brokenness. So, as Swarthmore students, of course we should make conscientious choices and try to do well by others, both in our career trajectories and everyday lives, while still acknowledging that we are a part of a vast and complicated world. We don’t have to settle for a pared-down, frugal sense of ethics, because moral goodness is not a finite resource. I don’t mean to discourage doing good, so let me be clear that being an Effective Altruist is better than being amoral or evil, and the actions of Effective Altruists have done tremendous good. But if we want to do good in many ways — and I think we should want this — we should notice that goodness in all its forms becomes accessible to us only when we dispense with the moral lens proposed by Effective Altruism.
What’s the alternative here? EA has saved hundreds of thousands or millions of lives and made the world much better. Who cares what other moral guidance EA gives? One should still give to EA organizations.
The author also curiously ignores the parts of EA devoted to reducing factory farming and existential threats, which undercuts the thesis: those causes are harder to quantify, yet they’re still championed by EA.
All things considered, this article offers very lackluster critiques of EA. The author ignores most of the movement, misunderstands its methods, resorts to psychoanalyzing effective altruists, and spectacularly fails to bear out the thesis that EA is either ineffective or not altruistic.
It shotguns through ill-thought-out critiques while ignoring everything EAs have said about them, and it frequently poisons the well. The complaints generally relate to EA picking up on sad facts about the world, rather than being targeted critiques of EA. Most of the article involves criticizing ancillary beliefs of some EAs rather than criticizing the movement itself, and it seems to grant that the movement does lots of good.
Actions supported by EAs have saved millions of lives and prevented hundreds of millions of cases of malaria. EAs have improved conditions for hundreds of millions, if not billions, of animals on factory farms and are largely responsible for reducing existential risks; they’re the primary group focusing on such risks. We should thus be hesitant to tar EA with vague guilt by association, because EA is an immensely positive force for good.
If one is going to write an article criticizing a movement, they should think twice before doing it on the backs of malaria’s victims, whose parents will no doubt get scarce consolation from the knowledge that, had their child not died a horrible death from malaria, some British schmuck with controversial views on sweatshops, who dedicates basically his entire life to bettering the world, might have felt proud about making the world better.
> This claim is just bizarre. Effective altruism devotes a very small share of its resources to EA promotion.
This is incorrect. As the article specifically states, certain EA organizations that you endorse have listed self-referential work related to EA as their topmost priorities.
> If we want to do the most good, one way to do so is to create more EAs.
Granting the assumption that EA is good, that is certainly true, but there is an arrogance in a movement that believes the literal best possible use of its resources is promoting itself.
> it’s good to focus on EA promotion, the same way it’s good for a chess club to promote itself sometimes, rather than stick to playing chess.
This is incorrect, even if promoting EA is in fact "good". It is certainly not good for the same reason that a chess club promoting itself is good. A chess club's primary goal is interpersonal competition and community relationships, which by their own terms require people to join as people naturally leave. EA's purported goal, on the other hand, is to take the best actions.
> Consider two states of affairs.
No.
> World 2 seems clearly better if we care about bettering the world, rather than maintaining strange moral purity.
Although ethics may be a foreign concept to some ************, most people do in fact believe in them. Regardless, this isn't an actual argument; the point being made in the post isn't that moral purity is "more" important than anything EA does. Indeed, the part you quote says ***exactly*** that. The point is rather that the goals and marketing of EA as altruism are simply false.
> Brian Berkey points out that the institutional critique doesn’t apply [...] it just often finds it’s less effective. However, in cases where it’s more effective, EA does systemic reform.
Have you considered that the author of the article is saying that EA's research is simply wrong as a matter of fact? I have noticed that many EAs simply assert that their decision-making models "don't do bad things", which is about as effective as asserting that the federal government can do no wrong because its research budget is larger.
> McDonalds, Dunkin’ Donuts, General Mills, Costco, Sodexo and many more to adopt cage-free egg policies.
By adopting these policies they sapped momentum away from systemic movements that would have been able to remove factory farming entirely.
The other two reforms you mentioned are great though.
> denounced the evils of capitalism in as loud and shrill voices as they could muster
If all the EA's on the planet denounced children dying of Africa in their loudest voices, nothing much would change either. You can't say that EA is great because of monetary donations, and then mega-strawman other possible uses of that money as mere "shrill voices". If every EA gave 10% of their income to Socialist movements, things would probably improve massively.
> As Alexander argues, if everyone donated 10% of their income to effective charities, it would end world poverty, cure major diseases, and start a major cultural and scientific renaissance; but if everyone became devoted to systemic change, we would probably have a civil war.
This compares apples and oranges. You presume that everyone donates 10% to *effective* charities, but then assume that we don't devote ourselves to *effective* systemic change. For example, if 50% of people donated to "effective charities" and the other 50% donated to the "Elon Musk make-AGI-ASAP-for-profit" group, things would go very bad very quickly.
> There are already organizations that do a lot of research into politics, such as the Brookings Institution. If EA were to become more political, it would likely become a Brookings Institution or Cato Institute-style group.
Ok. And?
You say a lot more stuff but I think I can generally answer it by asserting with no argumentation that utilitarianism is evil.