35 Comments

Endorsing this over FdB's comments, which are just churlish. Like his recent arguments against concerns over AI risk, it seems to be based on a bunch of logical fallacies stemming mainly from him just not liking the people involved. I also noticed the point where he just assumes that his own deontological moral intuitions are correct, with no attempt to justify why they are better. I don't know what's got into him lately.

None of that has the slightest connection to anything I said. Just literally zero engagement with my post.

This is so comprehensively non-responsive to my post that there's nothing I can say about it, other than that you need to develop much better reading comprehension.

What's good about EA is the parts that are typical; what's atypical about EA are the parts that are not good. The weirdo shit drags the whole philosophy down, but pieces like this exemplify the incredible sensitivity towards any criticism of the bizarre elements of that culture, which in turn prevents EA from getting better.

Thanks for the reply. I'd be curious which claims you think I didn't respond to. I argued at great length that EA is not mostly weird, and that the fact that it is partially pretty weird is a good thing--that it's good that there are a few people thinking about shrimp welfare. Furthermore, if the criticism is just that there are a few parts that you don't like but it's mostly good, it seems weird to describe it as a shell game. I went line by line through your post and addressed pretty much all the major points! For instance, I objected to your claim that the slogan form of EA is trivial.

How would you feel about a defense of Christian philosophy that was just a list of good things Christians had done? Because that's what a good portion of this amounted to. And a lot of the rest was just listing all of the non-controversially good things about the concept of Effective Altruism, which does nothing to address Freddie's overall point that, despite its uncontroversially good aspirations and definitions, in practice EA doesn't live up to what it aspires to be. Citing the definition of "beneficentrism" doesn't make it synonymous with EA.

Christianity is a set of beliefs. EA is a social movement with a philosophical underpinning. I defended both here.

What's the difference between a "set of beliefs" and a "philosophical underpinning"? Maybe a better parallel is a homeless shelter or soup kitchen run by a church. That would be a social movement (the shelter volunteers) with a philosophical underpinning (Christian charity).

You might value the contributions of a Christian homeless shelter, without accepting the underlying principles that justify it (a belief in Christ). In that case, you'd find conflating homeless shelters with Christianity frustrating and consider it a type of "shell game". This is similar to how Freddie would describe Effective Altruism's projects.

He'd also go further and criticize the philosophical underpinning of EA as trite, compared to things like Christianity, socialism, or national service. No one would argue that their charitable giving isn't trying to be cost-effective, while some organizations would deny being organized around the sacrifice of Christ, class solidarity, or patriotic duty. That hypothetical Christian homeless shelter is still trying to stretch its budget to help the most people, regardless of whether it is "effective altruism focused." When effective altruism is not trite, it ends up as navel-gazing about predictions that can't come true until centuries from now, or a repackaging of utilitarianism.

I've read both Freddie's piece and the SSC response and think of it as mostly a semantic argument that can't really be resolved. EA is a vague enough concept that people can really label anything as EA or not. Depending on what you call EA, it's either good or pointless.

Freddie, I'd be very interested to hear your response to the OP's argument that EA is not "typical" at all -- rather, it's "totally false" that most people already follow even neartermist EA principles. This is a direct reply to the central thrust of your post. Here's the key section, for your convenience:

> deBoer declares that this makes it trivial. Who could object to doing good effectively? Thus, he claims, EA is just doing what everyone else does. But this is totally false. It’s true that most people—not all mind you—who donate to charity are in favor of effectiveness. But many of them are in favor of effective charity the way that some liberal Christians are in favor of God; they think of it as a pleasant-sounding buzzword that they kind of abstractly like but don’t do anything about that fact and it’s not a big part of their lives.

> Virtually no one has spent any time looking at the most effective charities. There’s a reason that almost all money given to charities helping animals goes to the small number of them in shelters rather than the billions of them being tortured in factory farms. The reason is that when people are donating to help animals, they donate to the cute puppies they feel a positive emotional sentiment to because they saw ads about them looking sad, rather than trying to do good effectively.

Is trying to identify the most effective charities to donate to typical and good or atypical and not good? Because half the time the criticism is the former and half the time it’s the latter.

It is typical and good. However, some of the people identifying effective charities are actually just arguing self-indulgently. Freddie brings this up. If you're deciding between funding a library in America or mosquito nets in Papua New Guinea, that can lead to a reasonable conversation about effectiveness and what society should look like. This is an example of it being typical and good. However, sometimes you're deciding between a local library and fending off Roko's Basilisk from torturing people in VR centuries from now. In that case it is atypical, and not good.

The "Shell Game" in the title refers to switching between these two concepts. The reasonable conversation (nets vs. libraries) is switched for the unreasonable one (speculative fiction about robotic torturers).

But it’s not typical. The vast majority of donors don’t do any research at all into the effectiveness of charities. Before GiveWell, there wasn’t even a prominent charity evaluator for people to look at things through this lens.

The CEO of Charity Navigator once published a hate manifesto about the idea of even approaching giving this way: https://ssir.org/articles/entry/the_elitist_philanthropy_of_so_called_effective_altruism.

Upon reading this comment, I went back to read your article in full, and I don't agree at all with your accusation of non-responsiveness. This article challenges how truly "typical" EA is in practice and then defends the "weirdo shit" elements as well. Perhaps you disagree with the conclusions, but I don't think you can reasonably say that it failed to tackle head-on what you highlight as the main thrusts of your argument.

It’s actually good that some people are thinking about wild animal welfare yet you categorize it as bizarre. Yes it is strange and different — no one else has thought about the welfare of individual wild animals to this extent — but it’s hard to see what’s bad about thinking about and seeing if you can alleviate heretofore unaddressed suffering.

Well said. I think what gets lost in this very-online drama is that most people have never heard of EA, and have never seriously considered the possibility that some charities are more effective than others. If EA becomes more popular, I can't see how it would make the average American donation less effective!

It won't become more popular when its acolytes talk about whether shrimp are sentient and say that we should spend money on a rocketship to Alpha Centauri instead of on poor people. I don't know why you guys are so rigidly afraid of contending with how alienating the weirdo stuff is to potential donors.

I'm not sure what there is to contend with. The EA project involves *cause-neutral* inquiry into doing more good. It's surely an open empirical question whether our aquaculture practices cause shrimp suffering. Given the immense numbers involved, if the individual shrimp suffer at all, then it would seem obviously a good thing to try to reduce this harm.

If you only want to do more good *insofar as the cause seems normal* then you aren't really committed to doing the most good. So again, when you claim that the EA project is "trivial", this isn't true. Most people aren't willing to consider the possibility that "weird" things might actually be better than what they're antecedently inclined to want to do: donate to photogenic kids and puppies, while completely ignoring the relative scales or magnitudes of different problems.

"Given the immense numbers involved, if the individual shrimp suffer at all, then it would seem obviously a good thing to try to reduce this harm."

I'm not sure most people would agree that that seems obvious, especially given limited resources. I think the dominant view is that suffering is not very significant if it is occurring in a creature not very similar to humans in some multi-dimensional metric. For example if it doesn't have long-term-enough goals and propositional attitudes, or other cognitive properties. Or if it doesn't have eyes that look like human eyes.

My main objection to your article is:

"The immediate response to such a definition ... should be to point out that this is an utterly banal set of goals that are shared by literally everyone who sincerely tries to act charitably."

These goals are held by almost nobody! That's why dog shelters raise 100x more than farmed animals.

Otherwise, I mostly agree! I would like EA to be more associated with Givewell and less with AI.

Whenever a writer appears good in general, but turns out to be bad whenever they write about a topic you are knowledgeable in, that's a sign they might actually not be good at writing and you just don't notice because it's usually not about something you are knowledgeable in. Not to say that Freddie deBoer is a consistently bad writer, but it's worth giving some thought.

Maybe a productive way to frame the debate is this: "Relative to a counterfactual world without EA, how much extra good has EA done?" The way I read FdB, his take appears to me "not very much", while Scott Alexander argues it's hundreds of thousands of people. Alas, counterfactuals are tough.

For the EA/utilitarianism connection, perhaps this post may be of interest: https://inexactscience.substack.com/p/the-case-for-narrow-utilitarianism

Point one: there is no amount of charitable exertion in the world which buys you either the right to congratulate yourself in public, or exemption from scrutiny.

Point two: you seem to be saying: Other people don't give to charity like you do, or give to fluffy animal charity because they are less intelligent than you. As it happens I give money to treat blindness in children in Africa, you do shrimp. You think that makes you interesting, and cleverer than me. I don't agree.

Point three: We want to make AI safe, who could possibly be against that, is a sleight of hand. EA is concerned with fringe, ooh look at us being weird and nerdy issues which I am glad to see got virtually no air time at all at Bletchley last month. You are not saving the world.

Point four, and one I am sorry to have to make. There's no doubt and no denial - just a rather belated apology - about Bostrom's history of racism. There's Shear: "The Nazis were very evil, but I'd rather the actual literal Nazis take over the world forever than flip a coin on the end of all value." The denial which these points evoke from EAers when they are pointed out on social media suggests a pattern here, and not a good one. If EA isn't creepy it's certainly creep adjacent. I see no reason, absent audited accounts, to believe any unsupported assertion that EAers in fact more reliably give more of their money than anyone else. Bill Gates is not an EAer despite desperate claims that he sort of is spiritually but without really knowing it. Sure there's money sloshing around the EA world, but SBF...

If you want to give to serious charity good for you, but please less of the Thank God I am not as other men are.

1) I never denied this. However, if a movement does lots of good, it's reasonable to say good things about the movement in public.

2) This is not what I'm saying at all. I think most people don't give to effective charities because they don't care much about doing it. Giving money to treat blindness in Africa is absolutely giving to effective charities and is quite laudable--Helen Keller International is a GiveWell top charity.

3) EA is behind most of the work on AI safety.

4) Nothing I say in this article has anything to do with Bostrom. Bostrom could be literally Satan and it wouldn't make the slightest difference to anything I say.

I don't think you realize how much the world gives to charity. A lot is the answer:

https://www.theglobalfund.org/en/

https://www.globalgiving.org/

To give you some idea. And the instinct to look for value for money and resist being cheated is as old and as widespread as money itself. You are a virtuous but small and undistinctive fish in a very big pond.

3 and 4 - you can't have it both ways. I don't for one moment accept your claim about EA and AI safety; the world summit at Bletchley got on fine without paying any attention whatever to the illusory and theoretical PCM stuff. But if you are going to make the claim in 3, Bostrom is the high priest of it, and you can't turn around in 4 and disclaim any connection with him.

People give a lot to charity but few spend much time thinking hard about how to give to effective charity.

You can review the Scott Alexander piece for the evidence for 3.

Bostrom isn't the high priest of anything. He's one academic that lots of EAs happen to like.

Where's your evidence that people don't think about the effectiveness of their charitable giving? This is like saying OK everyone drives cars, but only our tiny and exclusive clique looks through the windshield to observe potential hazards.

I gave some in the article. Charities evaluated as most effective at saving lives get almost no funding while charities giving to cute puppies are flooded with funding.

DeBoer's book on education is not "great," nor is it well thought out or honest. It is dishonest and evasive, and its bleak nihilism about student achievement is belied by many actually functioning schools. DeBoer defends the most dysfunctional government schools for reasons he can perhaps explain, but which seem to come down to his fundamentalist faith in the socialist idea: government schools have to be the best possible schools because, er, well, because if they aren't, despite the vast money they receive, then it must be that there are private sector alternatives that would actually work better, and that leads us into capitalism. And we can't have that, can we?

DeBoer is at his best as a social critic of recent rhetoric. When he has to deal with facts, he ends up floundering around and can't really function because facts don't support his hardline socialist ideals.

Freddie is wrong 80% of the time and annoying 100% of the time.

Not a complaint but this note has been constantly at the top of my notes feed for weeks, is it just me?

Freddie makes his living being wrong. But he’s wrong in such a thoroughgoing and frankly impressive fashion that he does the work of 100 normal nutjobs, or maybe more. He might be his own nutjob sector, and you have to respect that.

Nutjobs are close to my heart (there but for the grace of God and all that) and we unquestionably need them, but I’m pretty sure I’m done subsidizing Freddie.

EA makes two claims.

1) Within a given goal/value system, you should use utilitarianism more.

This is non-controversial but also not new. As SSC notes, the Gates Foundation and other non-profits already did this.

2) Utilitarianism can be used to determine what your goals/value system should be.

That is what I think people hate, for all kinds of different reasons.

Look, if you want to get hardcore utilitarian then “increase the population of low iq Africans by buying bed nets” is a fucking dumb long term utilitarian strategy.

If your goal is “save lives in the short term,” then bed nets are the best way to do that, and go for it, buddy. Just don’t tell me you have a monopoly on truth.

I think 99.9999% of good in the world was done by selfish people for selfish reasons. Please just fucking re-invest your money for maximum ROI, that is the best way to create the most utils for most people most of the time.

These EA defenses seem to talk past the entire point. It’s not that doing good is weird, or counterintuitive, or obscure that’s the problem. It’s that they claim to have discovered the secret of the universe (like: how to maximize the Good), and then are offended that you are not credulous about that. No, it’s not that doing good the EA way is weird; it’s that it is simply not good at all, and the rest of us somehow understand that. It must be what talking to communists was like in the 1910s.

What do you mean by EAs' claim to have discovered the secret of the universe? If all that means is that they’ve discovered a very effective way to do good--and have discovered it by gathering high-quality data and other evidence--that’s true but unobjectionable.

Only a traditional kind of conservative skepticism here - it takes a lot of credulity to think that a handful of really smart kids know how to maximize Good writ large, Good for the whole world, over a period of generations. I'm not sure why we are supposed to accept that credulously. Especially when it must inevitably say that x amount of human pain is worth y shrimp lives, etc. You act as if this is a marginal thought of EA, when it is the only thing it contributes at all.

I don’t think it’s likely that EA programs are literally the best places in the world to donate, given that the world is a complicated place. But if your goal is to do good, you should donate to the places that use evidence and reasoning to try to figure out what does the most good, rather than to simply whatever strikes your fancy.
