Nathan Robinson's Idiotic Hit Piece About Effective Altruism
Effective altruism is effective and not defective
I’ve repeatedly chronicled Nathan Robinson’s bad habit of flagrantly lying about those he disagrees with, as well as his poor reading comprehension. This seems to extend to lots of domains; Robinson is at least consistent in his misrepresentation.
Robinson titles the article “Defective Altruism” — a very original charge indicating great wit and brilliance; only the greatest of minds could think of something so brilliant, really Shakespearean in its cleverness. (By the way, I’ve responded to the previous article here — it’s really bad.)
> The first thing that should raise your suspicions about the “Effective Altruism movement” is the name. It is self-righteous in the most literal sense. Effective altruism as distinct from what? Well, all of the rest of us, presumably—the ineffective and un-altruistic, we who either do not care about other human beings or are practicing our compassion incorrectly.
Imagine if we lived in a world where most people made purchases without looking at the price or how much the product benefitted them. If there were a group of people who wanted to make purchases effectively — taking into account cost, for example — it would make sense for them to call themselves ‘effective purchasers.’ After all, the whole point of the movement would be purchasing more effectively — and the same is true of effective altruism.
The movement isn’t attacking those who disagree, any more than evidence-based medicine practitioners are attacking others. Instead, the point is merely that we can and should do more good — and then EA presents a concrete roadmap for doing so.
> We all tend to presume our own moral positions are the right ones, but the person who brands themselves an Effective Altruist goes so far as to adopt “being better than other people” as an identity. It is as if one were to label a movement the Better And Smarter People Movement—indeed, when the Effective Altruists were debating how to brand and sell themselves in the early days, the name “Super Hardcore Do-Gooders” was used as a placeholder. (Apparently in jest, but the name they eventually chose means essentially the same thing.)
Well, we live in a world where most people don’t care very much about making sure they’re doing good as effectively as possible. Most people don’t look to see, when they donate, whether their donations are the most effective they could be. Given this fact, it’s perfectly reasonable for those who are trying to do what makes things go best — not just what makes them feel good — to have their movement name reflect that.
Also, having a name that sounds good is part of branding. Would Robinson object to a movement called “Socialists for a Brighter Future” or “Socialists for Progress”? If not, this is a double standard.
> When I first heard about Effective Altruism, back around 2013, it was pitched to me like this: We need to take our moral obligations seriously. If there is suffering in the world, it’s our job to work to relieve it. But it’s also easy to think you’re doing good while you’re actually accomplishing very little. Many charities, I was told, don’t really rigorously assess whether they’re actually succeeding in helping people. They will tell you how much money they spent but they don’t carefully measure the outcomes. The effective altruist believes in two things: first, not ignoring your duty to help other people (the altruism part), and second, making sure that as you pursue that duty, you’re accomplishing something truly meaningful that genuinely helps people (the effectiveness part).
This is not a complete explanation of what effective altruism is — nor indeed of what it was around 2013. Effective altruism — as every major effective altruist organization explains — is about doing good as effectively as possible. If we think that people’s lives matter, we should care much more about saving a life for $5,000 than for $20,000 — that way we can save four times as many lives. If you care about your arms, you care much more about saving both of them than about saving just one — lives are the same way.
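To make that arithmetic concrete, here is a minimal sketch in Python. The per-life costs are just the illustrative figures from the paragraph above, and the $100,000 donation budget is a hypothetical:

```python
# Minimal illustration of the cost-effectiveness point above.
# The per-life costs are the illustrative figures from the text;
# the donation budget is hypothetical.

def lives_saved(budget: float, cost_per_life: float) -> float:
    """How many lives a given budget saves at a given cost per life."""
    return budget / cost_per_life

budget = 100_000  # hypothetical donation budget, in dollars

at_5k = lives_saved(budget, 5_000)    # 20.0
at_20k = lives_saved(budget, 20_000)  # 5.0

print(f"At $5,000 per life:  {at_5k:.0f} lives saved")
print(f"At $20,000 per life: {at_20k:.0f} lives saved")
print(f"That's {at_5k / at_20k:.0f}x as many lives for the same money.")  # 4x
```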
> Put this way, it sounded rather compelling. Some Effective Altruists started an organization, GiveWell, that tried to figure out which charities were actually doing a good job at saving lives and which were mostly hype. I thought that made plenty of sense. But I quickly saw qualities in EA that I found deeply off-putting. They were rigorously devoted to trying to quantify moral decisions, to decide what the mathematically morally superior course of action to take was. And as GiveWell says, while first they try to shift people from doing things that “feel” good to doing things that achieve good results, they then encourage them to ask, “How can I do as much good as possible?” Princeton philosophy professor Peter Singer’s 2015 EA manifesto is called The Most Good You Can Do, and EA is strongly influenced by utilitarianism, so they are not just trying to do good, but maximize the good (measured in some kind of units of goodness) that one puts out into the world.
Maximizing the good is good, and this is true even if you’re not a utilitarian. If we can either save 100,000 lives or 200,000 lives, every plausible moral view will say that it’s much more important to save the 200,000. As Richard Yetter Chappell explains, to be in favor of effective altruism, one must merely accept:
> Beneficentrism: The view that promoting the general welfare is deeply important, and should be amongst one’s central life projects.
>
> Clearly, you don't have to be a utilitarian to accept beneficentrism. You could accept deontic constraints. You could accept any number of supplemental non-welfarist values (as long as they don't implausibly swamp the importance of welfare). You could accept any number of views about partiality and/or priority. You can reject 'maximizing' accounts of obligation in favour of views that leave room for supererogation. You just need to appreciate that the numbers count, such that immensely helping others is immensely important.
He later notes:

> Even if theoretically very tame, beneficentrism strikes me as an immensely important claim in practice, just because most people don't really seem to treat promoting the general welfare as an especially important goal. Utilitarians do, of course, and are massively over-represented in the effective altruism movement as a result. But why don't more non-utilitarians give more weight to the importance of impartial beneficence? I don't understand it. (Comments welcome on this point, too.)
Next, Robinson says:

> To that end, I heard an EA-sympathetic graduate student explaining to a law student that she shouldn’t be a public defender, because it would be morally more beneficial for her to work at a large corporate law firm and donate most of her salary to an anti-malaria charity. The argument he made was that if she didn’t become a public defender, someone else would fill the post, but if she didn’t take the position as a Wall Street lawyer, the person who did take it probably wouldn’t donate their income to charity, thus by taking the public defender job instead of the Wall Street job she was essentially murdering the people whose lives she could have saved by donating a Wall Street income to charity. I recall that the woman to whom this argument was made became angry and frustrated, and the man making the argument took her anger and frustration as a sign that she simply could not handle the harsh but indisputable logic bombs he was dropping on her. (I don’t know whether she ultimately became a public defender.)
I’m very curious how Robinson figured out that “the man making the argument took her anger and frustration as a sign that she simply could not handle the harsh but indisputable logic bombs he was dropping on her.” Putting that aside, Robinson gives no arguments against this — the hundreds of people who could have been saved would prefer that she take the better-paying job. (I’d agree that comparing it to murder is a tad inaccurate and is certainly bad PR).
> The EA community is rife with arguments in defense of things that conflict with our basic moral intuitions. This is because it is heavily influenced by utilitarianism, which always leads to endless numbers of horrifying conclusions until you temper it with non-utilitarian perspectives (for instance, you should feed a baby to sharks if doing so would sufficiently reduce the probability that a certain number of people will die of an illness).
This is not true. The EA community is heavily influenced by people who think carefully and reflect — and that sometimes gets you to some weird places. Doing the most good you can sometimes involves doing things more impactful than working at soup kitchens — for example, working on making AI safe, given the vast numbers of AI experts who consider it an existential risk.
The point about sharks just seems to prey on the absurdity heuristic — how could the correct morality say that it’s right to feed people to sharks? However, as I’ve argued at great length, killing one to save several is moral. But you really, really don’t have to be a utilitarian to think that taking a career on Wall Street to save hundreds of lives is a good thing.
> Patching up utilitarianism with a bunch of moral nonnegotiables is what everyone ends up having to do unless they want to sound like a maniac, as Peter Singer does with his appalling utilitarian takes on disability. (“It’s not true that I think that disabled infants ought to be killed. I think the parents ought to have that option.”)
I’ve previously exposed Robinson’s total misreading of Singer’s statements on the issue. This is either illiteracy or dishonesty.
> Neuroscientist Erik Hoel, in an essay that completely devastates the philosophical underpinnings of Effective Altruism, shows that because EA is built on a utilitarian foundation, its proponents face two unpalatable options: get rid of the attempt to pursue the Quantitatively Maximum Human Good, in which case it is reduced to a series of banal propositions not much more substantive than “we should help others,” or keep it and embrace the horrible repugnant conclusions of utilitarian philosophy that no sane person can accept.
Hoel’s article is terrible, and does not “completely devastate” the philosophical underpinnings of effective altruism. In fact, as Yetter Chappell explains in an article that completely devastates the underpinnings of Hoel’s view:

> In an especially striking example of conflating utilitarianism with anything remotely approaching systematic thinking, popular substacker Erik Hoel recently characterized the Beckstead & Thomas paper on decision-theoretic paradoxes as addressing “how poorly utilitarianism does in extreme scenarios of low probability but high impact payoffs.” Compare this with the very first sentence of the paper’s abstract: “We show that every theory of the value of uncertain prospects must have one of three unpalatable properties.” Not utilitarianism. Every theory.
>
> (Alas, when I tried to point this out in the comments section, after a brief back-and-forth in which Erik initially doubled down on the conflation, he abruptly decided to instead delete my comments explaining his mistake.)
Saying it’s really important to do good and we should do more rather than less doesn’t require accepting utilitarianism — it requires accepting beneficentrism. But beneficentrism is obvious!
> (The approach adopted by many EAs is to hold onto the repugnant conclusions, but avoid mentioning them too much.)
I’ve already addressed this charge.
> Hoel actually likes most of what Effective Altruists do in practice (I don’t), but he says that “the utilitarian core of the movement is rotten,” which EA proponents don’t want to hear, because it means the problems with the movement “go beyond what’s fixable via constructive criticism.”
Three points are worth making.
1. Robinson has given no arguments against effective altruism being mostly good. He just uses his general dissatisfaction with the movement’s alleged philosophy as a criticism of the movement, while asserting with no argument that EA is mostly bad.
2. Most of what EA does is some combination of working on global health and farm animal welfare. Thus, unless Robinson has some effectiveness-based objection to these projects, he must be in favor of some combination of factory farming, poverty, disease, and death.
3. Let’s imagine that it turns out that most socialists adopt egalitarianism about welfare — a view that I don’t adopt and think is disastrously wrong. Robinson presumably wouldn’t think that this was a good objection to socialism. Even if most people in a movement adopt a moral view that is wrong, that isn’t an objection to the movement — particularly if that movement is practical rather than philosophical.
> It’s easy to find plenty of odious utilitarian “This Horrible Thing Is Actually Good And You Have To Do It” arguments in the EA literature. MacAskill’s Doing Good Better argues that “sweatshops are good for poor countries” (I’ve responded to this argument before here)
Two points are worth making.
1. You can disagree with MacAskill about some things and still be an effective altruist.
2. Robinson’s objection is terrible; it amounts to pointing out that — as the title Doing Good Better suggests — “better” does not mean “good.” However, if purchasing from sweatshops broadly improves the lives of those in the sweatshops and in the surrounding areas, then purchasing from sweatshops is better than not doing so.
> He also co-wrote an article on why killing Cecil the lion was a moral positive and argued against participating in the “ice bucket challenge” that raised millions for the ALS Association (not because it was performative, but because he did not consider the ALS Association the mathematically optimal cause to raise money for).
Robinson doesn’t respond to any of those arguments, which appeal to non-utilitarian principles. The language of “mathematically optimal” obscures the fact that, as MacAskill argues, money raised by the ice bucket challenge displaces donations to more effective causes, so more people will die of disease because of it. It turns out that X causing lots of people to die is a better reason to oppose it than it being “performative.”
Robinson spends several paragraphs wheezing about how some effective altruists are in favor of working for immoral organizations to get money to donate to do good. He never argues against this. However, my standard two objections still apply.
1. You don’t have to think this to be an effective altruist any more than you have to think Nathan Robinson is honest to be a socialist. Just because lots of people who are part of a movement think X, that doesn’t mean that you have to think X to be part of the movement.
2. It’s good to save lots of lives, which can be done more effectively if one works at a petrochemical company.
Additionally, most EAs no longer endorse this approach, including MacAskill.
> I would again note the deep implicit devaluation of care work that runs through 80,000 Hours’ focus on jobs that supposedly maximize your impact rather than simply make you a helpful part of your community.
Being a helpful member of your community < saving hundreds of lives. The hundreds of people you saved won’t be helpful members of your community if they die of malaria.
> Effective Altruism’s focus has long been on philanthropy, and its leading intellectuals don’t seem to understand or think much about building mass participation movements. The Most Good You Can Do and Doing Good Better, the two leading manifestos of the movement, focus heavily on how highly-educated Westerners with decent amounts of cash to spare might decide on particular career paths and allocate their charitable donations. Organizing efforts like Fight For 15 and Justice For Janitors do not get mentioned.
The average person who participates in a protest does very little good. This gives us good reason not to spend our time and money on protest movements. Also, it’s very hard to assess whether political changes are good — many smart people will disagree with you about every political issue and have reasoned arguments about them.
> I have to say, my own instinct is that all of this sounds pretty damned in-effective in terms of how much it is likely to solve large-scale social problems, and both MacAskill and Singer strike me as being at best incredibly naive about politics and social action, and at worst utterly unwilling to entertain possible solutions that would require radical changes to the economic and political status quo. (MacAskill and Singer are not even the worst, however. In 2019 I was shown an EA voting guide for the Democratic presidential primary which did a bunch of math and then concluded that Cory Booker was the optimal candidate to support, one of the reasons being that he was in favor of charter schools. Sadly, the guide has since been taken offline.) If EA had been serious about directing money toward the worthiest cause, it would have been much more interested from the start in the state’s power to redistribute wealth from the less to the more worthy. After all, if your view is that people who work on Wall Street should give some of their money away, relying on them to make individual moral decisions is very ineffective (especially since most of them are sociopaths). You know what’s better than going and working on Wall Street yourself? Getting a tax put in place, so that the state just reaches into their pocket and moves the money where it can do more good. Taking rich people’s money by force and spending it is far more effective than having individual do-gooders work for 30 years on Wall Street so they can do some philanthropy on the side (and relying on them to maintain their commitments). In his The Life You Can Save, Singer, who has defended high CEO salaries and is dismissive of those who call for “more revolutionary change,” adopts a highly individualistic approach in which the central moral question facing us is how we should spend our money.
I’ll just quote the response I’ve given previously to the systemic reform objection.
Brian Berkey points out that the institutional critique doesn’t apply: the effective altruism movement does do research into the cost-effectiveness of institutional reform; it just often finds that it’s less effective. However, in cases where it’s more effective, EA does pursue systemic reform.
EA already does work on systemic change. Nick Cooney of The Humane League is an effective altruist who got McDonald’s, Dunkin’ Donuts, General Mills, Costco, Sodexo, and many more to adopt cage-free egg policies. Lincoln Quirk reduced the costs of remittances dramatically, which is valuable given that remittances provide far more money overseas than foreign aid flows. Scott Weathers lobbied for the Reach Every Mother and Child Act, which would allow USAID to look for evidence before spending money overseas. Effective altruists have also pushed for criminal justice reform. The distinction is that, unlike other movements, we look for systemic reforms that work rather than ones that make us feel cool and radical.
As Fodor points out it “is quite plausible, indeed I think history indicates overwhelmingly probable, that even if all EAs on the planet, and ten times more that number, denounced the evils of capitalism in as loud and shrill voices as they could muster, that nothing whatever of any substance would change to the benefit of the world’s poor. As such, if our main objective is to actually help people, rather than to indulge in our own intellectual prejudices by attributing all evil in the world to the bogeyman of ‘capital’, then it is perfectly reasonable to ‘implore individuals to use their money to procure necessities for those who desperately need them’, rather than ‘saying something’ (what exactly? to whom? to what end?) about ‘the system that determines how those necessities are produced’.”
As Alexander argues, if everyone donated 10% of their income to effective charities, it would end world poverty, cure major diseases, and start a major cultural and scientific renaissance, but if everyone became devoted to systemic change, we would probably have a civil war.
There are already institutions that do a bunch of research into politics, such as the Brookings Institution. If EA were to become more political, it would likely become something like the Brookings Institution or the Cato Institute.
Political campaigns generally don’t result in particularly radical change; it’s unlikely that capitalism will be eliminated in the near future.
EAs may work in opposite directions if they’re split on issues of systemic change, and their efforts would cancel out. Thus, rather than having a movement push institutions in opposite directions, we should just do straightforwardly good things.
EAs do systemic reform. 80,000 Hours is the top EA career-advising organization, and one of its top recommendations is going into government. Other high-ranking recommendations include being a journalist, a public intellectual, a researcher, or earning to give. EA definitely does systemic reform.
The fact that there are so many other people already working on systemic reform means that EAs working on systemic reform wouldn’t accomplish very much and would be canceled out.
EA is largely about communal efforts to better the world that require lots of actors.
It would be very difficult to get politicians to tax Wall Street and spend the revenue effectively — they practically never do that; very little government money gets spent on foreign aid. Given this, it makes good sense for EAs not to work on this.
Robinson spends some more paragraphs making points I agree with — namely, that you can be in favor of doing good better but disagree with current EA priorities. He next says:

> The actually-existing EA movement is concerned with some issues that are genuinely important. They tend to be big on animal rights, for instance. But I’m an animal rights supporter, too, and I’ve never felt inclined to be an Effective Altruist. What does their movement add that I don’t already believe? In many cases, what I see them adding is an offensive, bizarre, and often outright deranged set of further beliefs, and set of moral priorities that I find radically out of touch and impossible to square with my basic instincts about what goodness is.
Well, here are some things that EAs are doing that Robinson probably isn’t:

- Donating to combat malaria — which could save hundreds of lives.
- Donating to the most effective charities that save animals.
- Using one’s career to do the most good, rather than publicly slandering leading utilitarians.
- Working on combatting existential threats.
> For instance, some EA people (including the 80,000 Hours organization) have adopted a deeply disturbing philosophy called “longtermism,” which argues that we should care far more about the very far future than about people who are alive today. In public discussions, “longtermism” is presented as something so obvious that hardly anyone could disagree with it. In an interview with Ezra Klein of the New York Times, MacAskill said it amounted to the idea that future people matter, and because there could be a lot of people in the future, we need to make sure we try to make their lives better. Those ideas are important, but they’re also not new. “Intergenerational justice” is a concept that has been discussed for decades. Of course we should think about the future: who do you think that socialists are trying to “build a better world” for?
>
> So what does “longtermism” add? As Émile P. Torres has documented in Current Affairs and elsewhere, the biggest difference between “longtermism” and old-fashioned “caring about what happens in the future” is that longtermism is associated with truly strange ideas about human priorities that very few people could accept. Longtermists have argued that because we are (on a utilitarian theory of morality) supposed to maximize the amount of well-being in the universe, we should not just try to make life good for our descendants, but should try to produce as many descendants as possible. This means couples with children produce more moral value than childless couples, but MacAskill also says in his new book What We Owe The Future that “the practical upshot of this is a moral case for space settlement.” And not just space settlement. Nick Bostrom, another EA-aligned Oxford philosopher whose other bad ideas I have criticized before, says that truly maximizing the amount of well-being would involve the “colonization of the universe,” and using the resulting Lebensraum to run colossal numbers of digital simulations of human beings. You know, to produce the best of all possible worlds.
I’ll just link to my five-part defense of longtermism. Robinson’s claim that this requires utilitarianism is a lie — at no point in my series did I assume utilitarianism. Instead I, like many longtermists, showed how lots of plausible assumptions necessitate longtermism. It would, in fact, be good if there were lots of future people who were really happy, duh. It doesn’t matter whether the future people are carbon or silicon; if they’re happy, we should be happy that they exist.
> These barmy plans (a variation on which has been endorsed by Jeff Bezos) sound like Manifest Destiny: it is the job of humans to maximize ourselves quantitatively. If followed through, they would turn our species into a kind of cancer on the universe, a life-form relentlessly devoted to the goal of reproducing and expanding.
The reason manifest destiny was bad is that it killed lots of people; thus, analogies to manifest destiny don’t show that the thing being compared is bad unless it also kills lots of people. MacAskill, like Hitler, wears pants, but MacAskill is good and Hitler is bad.
> It’s a horrible vision, made even worse when we account for the fact that MacAskill entertains what is called the “repugnant conclusion”—the idea that maximizing the number of human beings is more important than ensuring their lives are actually very good, so that it is better to have a colossal number of people living lives that are barely worth living than a small number of people who live in bliss.
Robinson ignores the large number of philosophers who signed a letter agreeing that the fact that a population axiology implies the repugnant conclusion doesn’t mean we should immediately reject it. He also ignores the myriad arguments people give for accepting the repugnant conclusion, arguments that have led even intuitionists like Michael Huemer to accept it! Instead, he just acts as though it’s a problem for utilitarians — treating the fact that utilitarians actually answer puzzles in ethics as a mark against them.
> Someone who embraced “longtermism” could well feel that it’s the duty of human beings to forget all of our contemporary problems except to the extent that they affect the chances that we can build a maximally good world in the very far distant future. Indeed, MacAskill and Hilary Greaves have written that “for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1,000) years, focussing primarily on the further-future effects. Short-run effects act as little more than tie-breakers.” To that end, longtermism cares a lot about whether humans might go extinct (since that would prevent the eventual creation of utopia), but not so much about immigration jails, the cost of housing, healthcare, etc. Torres has noted that the EA-derived longtermist ideas have a lot in common with other dangerous ideologies that sacrifice the interests of people in the here and now for a Bright, Shining Tomorrow.
Four points are worth making.
1. To be a longtermist, one just needs to accept that improving the future is very important — not that it’s the most important thing.
2. As Beckstead argues convincingly, improving the present improves the future.
3. EA is about using careers and money to make the world a better place — one can support it while denying that we should ever violate deontic constraints.
4. As I’ve argued, along with Beckstead and loads of other effective altruists, these conclusions follow undeniably from very plausible first principles — see my five-part defense of longtermism for more.
Torres next complains that EA doesn’t care enough about climate change because it’s unlikely to be an existential threat. He gives no objection to this — he just complains. He weaponizes the politics of those who agree with him about climate change to make EA seem bad.
> 80,000 Hours does say there are other issues that “we haven’t looked into as much” but “seem like they could be as pressing,” and a “minority” of people should “explore them.” These are “mitigating great power conflict,” “global governance,” “space governance,” and a “list of other issues that could be top priorities.” Beneath this is a list of secondary priorities, where “we’d guess an additional person working on them achieves somewhat less impact than work on our highest priorities, all else equal.” These are “nuclear security,” “improving institutional decision-making,” and “climate change (extreme risks).”
>
> Let’s consider the EA list of priorities. I will leave aside #1 (artificial intelligence) for the moment and return to it shortly. The 2nd priority is researching priorities. Again, this in a time when to me and many others, our priorities seem pretty fucking clear (stopping war, climate disaster, environmental degradation, and fascism, and giving everyone on Earth nice housing, clean air and water, good food, health, education, etc.) But no, EA tells us that researching priorities is more pressing than, for example, “nuclear security.” So is #3, Building Effective Altruism. Because there aren’t many Effective Altruists, and of course EA is about optimizing the amount of good in the world, “building effective altruism is one of the most promising ways to do good in the world.” This sets up a rather amusing kind of “moral pyramid scheme” (my term, not theirs) in which each person can do good by recruiting other people to recruit other people to do good by recruiting other people. This is considered a more obvious moral priority than stopping warfare between nuclear-armed great powers.
It’s not about which one is overall more important — rather, it’s about the most good one can do at the margin. If I could, I would eliminate all risk of nuclear war at the cost of eliminating the effective altruism movement — it just turns out that it’s hard to significantly reduce nuclear risk and easy to do research on important priorities. Also, finding a new cause area that’s being ignored could change the direction of billions of dollars, so it’s pretty damn important.
> But there’s very little here about, say, ensuring that poor people receive the same kind of care as rich people during pandemics. In fact, since the reason for focusing so much on pandemics in particular is to avoid outright extinction, issues of equity simply don’t matter terribly much, except to the extent that they affect extinction risk. Bostrom has said that on the basic utilitarian framework, “priority number one, two, three, and four should … be to reduce existential risk.” (Torres notes that Bostrom is dismissive of “frittering” away resources on “feel-good projects of suboptimal efficacy” like helping the global poor.) Torres points out that not only does this philosophy mean you are “ethically excused from worrying too much about sub-existential threats like non-runaway climate change and global poverty,” but one prominent Effective Altruist has even gone so far as to argue that “saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal.”
Several points are worth making.
1. Bostrom is not a utilitarian.
2. As I’ve already argued in my series about longtermism, this follows from undeniable axioms.
3. The “prominent Effective Altruist” cited also argues that what we should be doing right now is improving global health and development in order to improve the far future.
> So even in the one domain where Effective Altruists (at least those at 80,000 Hours) appear to be advocating a core priority that makes a lot of sense, the appearance is deceptive, because all of it is grounded in the idea that the most important problem with pandemics is that a sufficiently bad one might cause humanity to go extinct. Should we work on establishing universal healthcare? Well, how does it affect the risk that all of humanity will go extinct?
If pandemics killed everybody they’d be much worse.
> But I have not yet discussed the #1 priority that many Effective Altruists think we need to deal with: artificial intelligence. When it comes to 80,000 Hours’ top career recommendations: working on artificial intelligence “safety” is right at the top of the list:
>
> The argument that is usually made is something like: well, if the computation power of machines continues to increase, soon they will be as intelligent as humans, and then they might make themselves more intelligent, to the point where they are super-intelligent, and then they will be able to trick us and outwit us and there will be no way to stop them if we program them badly and they decide to take over the world.
>
> It is hard for me to present this argument fairly, because I think it is so unpersuasive and stupid. Versions of it have been discussed before in this magazine (here by computer scientist Benjamin Charles Germain Lee and here by artificial intelligence engineer Ryan Metz). The case for it is made in detail in Nick Bostrom’s book Superintelligence and Stuart Russell’s Human Compatible: Artificial Intelligence and the Problem of Control. Generally, those making the argument focus less on showing that “superintelligent” computers are possible than on showing that if they existed, they could wreak havoc. The comparison is often made to nuclear weapons: if we were living in the age before nuclear weapons, wouldn’t we want to control their development before they came about? After all, it was only by sheer luck that Nazi Germany didn’t get the atomic bomb first. (Forcing all Jewish scientists to flee or be killed also probably contributed to Nazi Germany’s comparative technological disadvantage.)
This is a poor presentation of the case. It turns out that:

1. We have good reason to expect AI that’s much smarter than us to exist soon.
2. AI will optimize for its goals.
3. We have no idea how to get AI to do what we want.
4. Even if we did, it’s hard to give AIs the right goals; most simple goals would kill everyone.
5. A being 10,000 times smarter than Einstein would be decently likely to be able to end the world.
> But one reason to be skeptical of the comparison is that in the case of nuclear weapons, while early scientists may have disagreed about whether building them was feasible, those who did think they were feasible could offer an explanation as to how such a bomb could be built. The warnings about dangerous civilization-eating AI do not contain demonstrations that such a technology is possible, and rest on unproven assumptions about what algorithms of sufficient complexity would be capable of doing, that are much closer to prophecy or speculative fiction than science. 80,000 Hours at one point laughably tries to prove that computers might become super-intelligent by showing us that computers are getting much better at generating pictures of cats in hoodies. They warn us that if things continue this way for long, the machines might take over and make humans extinct.
Here are a few good reasons to think AGI will come soon:

1. Lots of leading AI researchers think so.
2. The most thorough report on the question, which tracked AI development over time, concluded the same thing.
> Now, don’t get me wrong: I think so-called “artificial intelligence” (a highly misleading term, incidentally, because a core property of AI is that it’s unintelligent and doesn’t understand what it’s doing) has very scary possibilities. I’ve written before about how the U.S. military is developing utterly insane technologies like “autonomous drone swarms,” giant clouds of flying weaponized robots that can select a target and destroy it. I think the future of some technologies is absolutely terrifying, and that “artificial intelligence” (actually, let’s just say computers) will do things that astonish us within our lifetimes. Computing is getting genuinely impressive. For instance, when I asked an image-generator to produce “Gaudí cars,” it gave me stunning pictures of cars that actually looked like they had been designed by the architect Antoni Gaudí. (I had it make me some Gaudí trains as well.) I’m fascinated by what’s going to happen when music-generating software starts going the way of image-generating software. I suspect we will see within decades programs that can create all new Beatles songs that sound indistinguishable from the actual Beatles (they’re not there yet, but they’re making progress), and you will be able to give the software a series of pieces of music and ask for a new song that is a hybrid of all of them, and get one that sounds extraordinarily good. It’s going to be flabbergasting.
One of the things EAs are working on is doing effective AI governance to protect against those drone swarms.
> But that doesn’t mean that a computer monster is going to suddenly emerge and eat the world. The fact that pocket calculators can beat humans at math does not mean that if they become really good at math, they might become too smart for their own good and turn on us. If you paint a portrait realistic enough, it doesn’t turn 3-dimensional, develop a mind of its own, and hop out of the frame. But those who think computers pose an “existential risk” quickly start telling stories about all the horrible things that could happen if computers became HAL 9000 or The Terminator’s Skynet. Here’s Karnofsky:
>
> “They could recruit human allies through many different methods—manipulation, deception, blackmail and other threats, genuine promises along the lines of ‘We’re probably going to end up in charge somehow, and we’ll treat you better when we do.’ Human allies could be given valuable intellectual property (developed by AIs), given instructions for making lots of money, and asked to rent their own servers and acquire their own property where an ‘AI headquarters’ can be set up. Since the ‘AI headquarters’ would officially be human property, it could be very hard for authorities to detect and respond to the danger.”
>
> And here’s prominent EA writer Toby Ord, in his book The Precipice: Existential Risk and the Future of Humanity:
>
> “There is good reason to expect a sufficiently intelligent system to resist our attempts to shut it down. This behavior would not be driven by emotions such as fear, resentment, or the urge to survive. Instead, it follows directly from its single-minded preference to maximize its reward: being turned off is a form of incapacitation which would make it harder to achieve high reward, so the system is incentivized to avoid it. In this way, the ultimate goal of maximizing reward will lead highly intelligent systems to acquire an instrumental goal of survival. And this wouldn’t be the only instrumental goal. An intelligent agent would also resist attempts to change its reward function to something more aligned with human values—for it can predict that this would lead it to get less of what it currently sees as rewarding. It would seek to acquire additional resources, computational, physical or human, as these would let it better shape the world to receive higher reward.”
>
> To me, this just reads like (very creative!) dystopian fiction. But you shouldn’t develop your moral priorities by writing spooky stories about conceivable futures. You should look at what is actually happening and what we have good reason to believe is going to happen. Much of the AI scare-stuff uses the trick of made-up meaningless probabilities to try to compensate for the lack of a persuasive theory of how the described technology can actually be developed. (“Okay, so maybe there’s only a small chance. Let’s say 2 percent. But do you want to take that chance?”) From the fact that computers are producing more and more realistic cats, they extrapolate to an eventual computer that can do anything we can do, does not care whether we live or die, and cannot be unplugged. Because this is nightmarish to contemplate, it is easy to assume that we should actually worry about it. But before panicking about monsters under the bed, you should always ask for proof. Stories aren’t enough, even when they’re buttressed with probabilities to make hunches look like hard science.
Robinson objects that EAs don’t describe how a superintelligence could end the world, right before quoting explanations of how an AI could be very dangerous. If an AI were much smarter than us and had a utility function — which it must have, according to standard economic theory — then there’s a good chance the world ends; it would optimize for those goals. That’s a good reason to want it to follow good goals, dontcha think?
> Those who fear the “existential risk” of intelligent computers tend to think non-experts like myself are simply unfamiliar with the facts of how things are developing. But I received helpful confirmation in my judgment recently from Timnit Gebru, who has been called “one of the world’s most respected ethical AI researchers.” Gebru served as the co-lead of Google’s ethical AI team, until being forced out of the company after producing a paper raising serious questions about how Google’s artificial intelligence projects could reinforce social injustices. Gebru, a Stanford engineering PhD and co-founder of Black in AI, is deeply familiar with the state of the field, and when I voiced skepticism about EA’s “paranoid fear about a hypothetical superintelligent computer monster” she replied:
>
> “I’m a person having worked on the threats of AI, and actually done something about it. That’s also my profession. And I’m here to tell you that you are correct and what they’re selling is bullshit.”
>
> Gebru is deeply skeptical of EA, even though they ostensibly care about the same thing that she does (social risks posed by artificial intelligence). She points out that EAs “tell privileged people what they want to hear: the way to save humanity is to listen to the privileged and monied rather than those who’ve fought oppression for centuries.” She is scathing in saying that the “bullshit” of “longtermism and effective altruism” is a “religion … that has convinced itself that the best thing to do for ‘all of humanity’ is to throw as much money as possible to the problem of ‘AGI [artificial general intelligence]’.” She notes ruefully that EA is hugely popular in Silicon Valley, where hundreds of millions of dollars are raised to allow rich people, mostly white men, to think they’re literally saving the world (by stopping the hypothetical malevolent computer monster).
>
> Gebru points out that there are huge risks to AI, like “being used to make the oil and gas industries more efficient,” and for “criminalizing people, predictive policing, and remotely killing people and making it easier to enter warfare.” But these get us back to our boring old near-term problems—racism, war, climate catastrophe, poverty—the ones that rank low on the EA priorities list because other people are already working on them and a single brilliant individual will not have much effect on them.
We can accept both that:

1. There are risks from AI that don’t relate to existential risk.
2. AI is also an existential risk.
While it’s true that you can find lone expert voices who are skeptical, that’s true of loads of things — you can find lone scientific voices advocating climate skepticism. This doesn’t mean we should stop believing in climate change. If lots of experts are worried about something, we should be too.
> In the past, I’ve talked to people who think that while some of Effective Altruism is kooky, at least EAs are sincerely committed to improving the world, and that’s a good thing. But I’m afraid I don’t agree. Good intentions count for very little in my mind. Lots of people who commit evil may have “good intentions”—perhaps Lyndon Johnson really wanted to save the world from Communism in waging the criminal Vietnam war, and perhaps Vladimir Putin really thought he was invading Ukraine to save Russia’s neighbor from Nazis. It doesn’t really matter whether you mean well if you end up doing a bunch of deranged stuff that hurts people, and I can’t praise a movement grounded in a repulsive utilitarian philosophy that has little interest in most of the world’s most pressing near-term concerns and is actively trying to divert bright young idealists who might accomplish some good if they joined authentic grassroots social movements rather than a billionaire-funded cult.
But all of those people did bad things! Robinson doesn’t describe bad things EAs do — just good things they’re allegedly not doing. We know the movement has saved hundreds of thousands of lives — why is that bad?
Robinson next claims that if you really wanted to do the most good you’d be a socialist. My previous comments about the systemic critique apply here too.
> Effective Altruism, which reinforces individualism and meritocracy (after all, aren’t those who can do the most good those who have the most to give away?), cannot be redeemed as a philosophy. Nor is it very likely to be successful, since it is constantly having to downplay the true beliefs of its adherents in order to avoid repulsing prospective converts. That is not a recipe for a social movement that can achieve broad popularity. What it may succeed in is getting some people who might otherwise be working on housing justice, criminal punishment reform, climate change, labor organizing, journalism, or education to instead fritter their lives away trying to stop an imaginary future death-robot or facilitate the long-term colonization of space. (Or worse yet, working for a fossil fuel company and justifying it by donating money to stop the death-robot.) It is a shame, because there is plenty of work to be done building a good human future where everyone is cared for. Those who truly want to be both effective and altruistic should ditch EA for good and dive into the hard but rewarding work of growing the global Left.
Robinson claims the philosophy can’t be redeemed. But he previously admitted that the philosophy is mostly good — he just disagrees with it in practice. And Robinson totally ignores most of what EA does — namely, the work spent improving global health and development.
Overall, Robinson’s critiques of EA are relatively weak. He makes broad sweeping statements, ignores most of what’s done by EA, addresses the weakest versions of arguments, and ignores arguments for most things he’s arguing against. Once again, Riddler Robinson cannot be trusted.
> As Alexander argues, if everyone donated 10% of their income to effective charities, it would end world poverty, cure major diseases, and start a major cultural and scientific renaissance, but if everyone became devoted to systemic change, we would probably have a civil war.
This is one part of this article I may disagree with.
The problem is that you assume away the question by saying that everyone donates to “effective” change. We could do the same thing with the comparator: What if everyone suddenly became interested in only the *best* systemic reform?
That would probably be a LOT better than whatever EA is doing. We would probably have a utopia within the year!
So you can’t really say that…
If we said that everyone donates their money to a charity they research and find is effective, which is much closer to the systemic change comparator, I doubt that a great deal would change. Perhaps we would see progress happen, but we could also see a massively well funded Luddite Christian government take over the world backed by all the rich elderly people.
I think you should just stick with the broader points about EA being good instead of this “everyone does X” stuff.