Against Ineffective Altruism
Responding to another deeply terrible critique of effective altruism
I tend to be in favor of doing good things better as opposed to worse. However, some sadly disagree and support doing good things worse as opposed to better. Recently, Triedman, Dai, and Ahlawat were kind enough to put their defense of ineffective altruism into writing, leaving it easy to critique.
Nowadays, effective altruism’s epistemology and tools often parallel those of the tech industry. At its heart, it is driven by the principle of maximization and informed by statistical analysis. With these methods, effective altruists make arguments such as maximizing disability-adjusted life-years by allocating time and money towards initiatives that provide a mosquito net for a child in a poor country (rather than providing direct donations to the child’s family).
Effective altruism does try to quantify things. This is a good thing—if we can either save 100 people or 10, we should save 100. Any rational decision maker can be modeled as having some coherent utility function which they’re optimizing—see this series.
The claim about providing mosquito nets rather than direct donations is baffling. If mosquito nets can save a life for a few thousand dollars, which they can, then we should donate to them. We should help people more, rather than less. More importantly, effective altruism as a movement gives a ton of money to GiveDirectly, which directly gives money to poor people. There’s dispute about whether GiveDirectly is as good as the other top charities, but given that GiveDirectly is a top recommended charity, this critique totally misses the mark. It’s analogous to claiming that the federal government spends money on other things rather than Social Security, when a large portion of the federal budget goes to Social Security.
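To make the comparison concrete, here is a minimal sketch of the kind of cost-effectiveness arithmetic at issue. The intervention names and dollar figures are purely illustrative placeholders I made up for the example, not GiveWell's actual estimates:

```python
# Toy cost-effectiveness comparison. All cost figures below are
# illustrative placeholders, not real charity evaluations.
interventions = {
    "bednets": 5_000,       # assumed USD per life saved
    "vitamin_a": 4_500,     # assumed USD per life saved
    "direct_cash": 30_000,  # assumed USD per comparable benefit
}

budget = 100_000  # USD available to donate

for name, cost in interventions.items():
    print(f"{name}: ~{budget / cost:.1f} lives-saved-equivalents per ${budget:,}")

# The "effective" part of effective altruism is just picking the option
# that does the most good per dollar at the margin.
best = min(interventions, key=interventions.get)
print(f"Best per dollar under these made-up numbers: {best}")
```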
Next, the authors argue EA is largely longtermist. Longtermism is a significant chunk of EA, but not even the majority of it.
Regardless, in recent years, it has increasingly begun to drive giving, set priorities, and define the movement as a whole, prompting some to ask if effective altruism is just longtermism now.
The “some people” is (bizarrely!) three links to the same post. The same link, copied three times. Additionally, that post argues that EA isn’t just longtermism and that longtermism doesn’t even get most EA funds. Heck, it’s not even the top recipient of funds; global health and development is.
Longtermism is also increasingly popular among rank-and-file effective altruists, to the point where many consider them to be synonymous. According to data from the Open Philanthropy Grants database, in 2021 effective altruists donated $92 million to AI risk research, $21 million to biosecurity and pandemic preparedness, and $10.5 million to global catastrophic risk research. Altogether, this $125 million towards longtermist existential risk research represents a larger slice of donations than any other individual cause. And the allure of AGI (Artificial General Intelligence) — a major focus/fear of effective altruism and longtermism — is especially clear in industry, where multiple startups and big tech companies pour billions of dollars into research and development.
Global catastrophic risks aren’t the same as existential risks, so they shouldn’t be counted here. But even setting that aside, the longtermist total barely beats out global health and development, and it isn’t close to being a majority of funds.
It seems to me that the seemingly limitless bounds of longtermism are ultimately a moral carte blanche on anything we do (except make the species extinct). It’s easy to see how this position can ultimately lead to reprehensible outcomes. Just this week, 80,000 Hours released a piece that argues for effective altruists to not focus their careers on climate change — a process which will uproot hundreds of millions of mostly non-white poor people and cause billions to experience chronic water scarcity — because it has a low chance of becoming uncontrollable and turning Earth into Venus. Other longtermists worry that their ideology would provide rationalizations for genocide if political leaders took it literally. Mathematical statistician Olle Häggström, usually a proponent of longtermism, imagines
a situation where the head of the CIA explains to the US president that they have credible evidence that somewhere in Germany, there is a lunatic who is working on a doomsday weapon and intends to use it to wipe out humanity, and that this lunatic has a one-in-a-million chance of succeeding. They have no further information on the identity or whereabouts of this lunatic. If the president has taken Bostrom’s argument to heart, and if he knows how to do the arithmetic, he may conclude that it is worthwhile conducting a full-scale nuclear assault on Germany to kill every single person within its borders.
Besides the moral hazards of advocating these positions, these ideologies provide an overly simplistic formula for doing good: 1) define “good” as a measurable metric, 2) find the most effective means of impacting that metric, and 3) pour capital into scaling those means up.
Well, effective altruists would generally agree that politicians shouldn’t make these types of decisions. Giving politicians these powers has much greater risk than upside. However, if the only way to considerably reduce existential threats really were to use nuclear weapons, I’d be in favor of that, much like I’d support the Black Plague if it prevented global feudalism.
Unlike most people, effective altruists tend to advocate nuclear no-first-use policies. So if one is concerned about what effective altruists actually do in reality, rather than in implausible counterfactuals, one need look no further.
But following the formula of effective altruism is clearly not all that being good requires. There are boundless ways of doing good that are fundamentally immeasurable or, if they are measurable, may not be optimized. Nevertheless, this universe of actions demands our consideration. To follow in the footsteps of Timnit Gebru (and to be purposefully contrarian), let’s call the philosophy of seriously considering the merits of doing good immeasurably or suboptimally ineffective altruism.
Ineffective altruism might look like giving $10 to a houseless person who asks for it.
Giving 10 dollars to a homeless person is plausibly a good thing; it makes a person more likely to donate again if they feel good about helping others. However, it costs about 20 dollars to cure blindness. In terms of external impact, curing blindness for two people is obviously far more important than giving 20 dollars each to two homeless people.
It might look like organizing to ensure that as many people as possible have access to basic material needs like food, housing, and healthcare.
Effective altruism does this! However, if the goal is ensuring that “as many people as possible have access to basic material needs,” we have to look at effectiveness.
It might look like the ephemeral work of knitting a social and political community together. After all, how can one quantify the resiliency of a particular neighborhood? None of these actions would be particularly “effective,” and yet they might also have more of a tangible impact than unknowably reducing an existential risk by some fraction of a percentage point. They also show an understanding of one’s responsibilities to their community, how strengthening community is also important for our shared future, even if it isn’t measurable.
Effective altruism is not just about easily quantifiable things. Existential risks aren’t easily quantifiable, but EA focuses on them.
What the heck is this proposal? How would we spend thousands of dollars “knitting together a community”? Why would that be more effective than curing blindness for 50 people?
Ineffective altruism eschews metrics, because “What does doing good look like?” should be a continuously-posed question rather than an optimization problem. As an ideology of allocating resources, it is recognized as explicitly political, rather than cloaking itself in the discourse of science and rationality. It allows us to get outside of the concept of altruism entirely — a concept that feels limiting in its focus on the actions of the individual — and instead consider a paradigm of collective, democratic mutual aid. Most importantly, ineffective altruism allows us to ask harder questions than effective altruism does: questions about who and what we value.
But if we eschew metrics, then rather than curing blindness for 20 dollars, we’ll do things that are hundreds of times less effective.
What might “moral good” look like outside of market-derived values (like the maximization principle)? How can we collectively decide to allocate resources? How can we build societies based on principles that cannot be measured, like mutual respect and solidarity? How can we eliminate material misery from the world? What might we do to ensure the flourishing of future generations, rather than just their survival? How can we depart from a society where those who have the privilege to choose to care about others can, and move towards a society where everyone has the power to care about others and must?
EA is working on most of these things; EA focuses on value promotion and other innovative programs. However, a movement built around moving away from markets will be less effective than EA, given how difficult moving away from markets is. Traditionally, such programs tend to fail. EA is just doing these things effectively, unlike this asinine proposal.
People all over the world have been attempting to answer these questions for generations. After massive street protests in 2019 in Chile, 80% of the population voted to redraft the nation’s constitution — an effort that is currently in progress and will be finalized this September. In Taiwan, Digital Minister Audrey Tang is building effective tools for building consensus and making decisions online. Tang helped enable a highly effective set of COVID-19 policies that kept the disease largely outside Taiwan for more than two years, influenced what digital democracy looks like on the island, and inspired other online civil processes around the world. And in the United States, the last few years has seen rising interest in small-d democratic institutions like labor unions and mutual aid organizations. These efforts may be inefficient or messy or unpredictable, but are good in part because of those facts, not in spite of them.
These all sound good. However, they don’t trade off with EA. Additionally, these weren’t influenced very much by people at the margins.
As we get some distance from effective altruism and longtermism, we can also begin to consider other ways of thinking about the long-term future. Our conceptions of the future inform our actions today, and the future is much too important to cede to an ideology with the ethos and rhetoric of longtermism. Seventh-generation decision making, for example, is an indigenous principle that is enshrined in the Constitution of the Iroquois Nation. It mandates Iroquois leaders to consider the effects of their actions over seven generations, encompassing hundreds of years. Seven generations is a long time, but it is also a finite amount of time. Although this framework prioritizes long-term thinking, it doesn’t bring the weight of infinity to bear on the present. And unlike longtermism, the seventh-generation principle doesn’t pretend to be scientific. It doesn’t rely on unfalsifiable guesses about a future we can’t even imagine to assign expected values to different political decisions; rather, it makes thinking about the future a moral imperative.
Neither does EA—it tends to reject Pascalian muggings. The guesses about the future aren’t unfalsifiable—they will be confirmed or falsified in the future. Additionally, falsifiability is a terrible standard for such matters. The notion that the Democrats are better than the Republicans is the type of thing that won’t be straightforwardly falsified, but that doesn’t mean it should be rejected. Only caring about the next 7 generations is a bad idea when there are plausibly billions of future generations. It would mean we deprioritize the far future in favor of things that will happen in the next 7 generations. If we could make the next 7 generations go better at the cost of destroying the world after those 7 generations, that would obviously be a bad thing.
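To see how much a hard seven-generation cutoff throws away, here is a minimal sketch under purely illustrative assumptions; the generation length, the total number of future generations, and the equal weighting of generations are all made up for the example:

```python
# Toy illustration of how little of the future a 7-generation horizon covers.
# Every number here is an illustrative assumption, not a prediction.
years_per_generation = 25
horizon_generations = 7                 # the seventh-generation principle
total_future_generations = 1_000_000    # assume humanity lasts a very long time

years_covered = horizon_generations * years_per_generation
share_covered = horizon_generations / total_future_generations

print(f"Horizon: {horizon_generations} generations (~{years_covered} years)")
print(f"Share of {total_future_generations:,} future generations covered: {share_covered:.4%}")

# If each generation matters roughly equally, a policy that slightly helps
# the next 7 generations but ruins everything afterwards looks fine under
# the cutoff and catastrophic without it.
```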
When we critically examine effective altruism and longtermism, we can see them as falsely utopian ideologies cloaked in the opaque vocabulary of science and math. Let’s instead strive for a world where altruism doesn’t have to be maximally effective for it to be worthy, where doing good doesn’t have to be optimized, where morals aren’t a function of the market.
This is starting to really get on my nerves. Articles always make snide references to longtermism, point out that it’s unintuitive, and then act as if they’ve refuted it. They haven’t! Longtermism is implied by every plausible normative system. If the critics of longtermism actually engaged with longtermist arguments rather than merely rejecting them on the basis of some unintuitive results, they would realize that bullets must be bitten regardless of whether we’re longtermists or not. In order to argue that EA is bad because it’s longtermist, you actually need to argue that longtermism is bad—something which the authors here are either unable or unwilling to do.
As Matthews says about Beckstead’s thesis arguing for longtermism:
That’s the basic argument behind Nick Beckstead’s 2013 Rutgers philosophy dissertation, “On the overwhelming importance of shaping the far future.” It’s a glorious mindfuck of a thesis, not least because Beckstead shows very convincingly that this is a conclusion any plausible moral view would reach. It’s not just something that weird utilitarians have to deal with.
I won’t rehash the points I’ve already made in favor of longtermism. However, as anyone who has studied the issue at all realizes, denying longtermism requires one to accept some pretty unintuitive things. Dismissing all the arguments because you find longtermism unintuitive, and because Olle Häggström expressed some worries about politicians using longtermist justifications in foreign policy, is ridiculous.
One final important note—all of the arguments here were about longtermism or the EA community broadly. None of them gave any argument against donating to any of the GiveWell top charities—ones which can prevent blindness for about 100 dollars, save a life for a few thousand dollars, or provide money directly to those in extreme poverty. So if (somehow!) you’re convinced of every point raised by Triedman, Dai, and Ahlawat, you should still donate lots of money to GiveWell top charities and do all the non-longtermism-related things advocated by EA. And if you are an effective altruist, to show that, contrary to the claims of this criticism, EA isn’t just Silicon Valley technobabble, donate some money to GiveWell top charities! If you’re reading this and you have 20 dollars on hand, you can prevent 3 kids from getting malaria—so do that!!
respond to the ineffective egoists :D