10 Comments
Apr 5 · Liked by Bentham's Bulldog

I continue to think that almost all of the disdain directed towards EA comes from mistakenly conflating two very different things: (1) the philosophy of effective altruism, which basically just says that you should give to well-evidenced charities; and (2) the online EA subculture, which is (at least perceived as being) full of hyper-utilitarians who are way too obsessed with artificial intelligence and think that you can use Bayesian reasoning to prove that Shakespeare isn't as good as people say he is. But you can (and should!) endorse (1) without signing up to (2).

As a pretty strong utilitarian myself, I resent the implication that it’s utilitarianism causing the bad parts of the subculture (which I agree are bad). I tend to blame it on the rationalist movement.

love your profile pic btw

This. Yes. Except with the addition that, alongside the philosophy and the culture, there is a third category: all of the discrete, object-level empirical claims. These three are far too often lumped together. It would be fine to look at dev econ, health interventions, cash transfers, AI capabilities & safety, etc. in isolation, but they tend to be combined haphazardly to produce triumphalist takes "against EA," which is just much more interesting to rail against.

Apr 4 · Liked by Bentham's Bulldog

Good article, thanks

Indeed, we can wonder what he is proposing that would actually be better for the world, with specific steps and evidence for how to achieve it.

I went back and read the article after finishing your post, and I think you undersold the strongest section of his critiques. In the first section (I agree that the latter 80% was… nah), he claims that we under-consider the external harms of EA interventions in general, and specifically that modelers are more willing to make low-confidence guesses about potential benefits than about potential harms, especially if those harms occur outside the treatment group, resulting in models that skew positive. Those are plausible claims, and they seem worth investigating!

As is usual in any critique of EA, that part isn’t an indictment of core EA principles but is instead an example of doing EA by improving our calculations. But it’s still a real critique. I trust GiveWell’s analysts more than my own eyes, but the reason they spend so much time on this is that these issues are sufficiently complex that new complications surface all the time. I’d be curious to read a follow-up from them about that part.

Author · Apr 4 · edited Apr 5

Well, when EAs list extra benefits, the ones they list have plausible arguments for being significant and aren't implicitly accounted for in the RCT estimates.

Are you not on Twitter anymore? I wanted to praise you for this piece there, but I'll share it anyway. Thanks for the post!

Author

No

The last sentence appears to have a typo.

Author

Fixed
