Hello, thanks for writing this. I also found some straightforward factual errors in the Bulwark article. Since the Bulwark doesn't allow commenting on their articles, I'll list them here:
1. "novel contribution was to specify a unit of measurement: “quality-adjusted life-years,” a notion borrowed from the world of health policy."
This statement was confusing because the author seems to simultaneously claim that QALYs were invented by EAs and that the notion is borrowed from the realm of public health. QALYs have been in the public health discourse since the mid-70s, and are widely used for assessing the efficacy of public health systems (e.g., the National Health Service in the UK).
Yes, QALYs aren't perfect, and you break down their shortcomings well. But anyone doing statistics knows that no model is perfect. For onlookers, here's a good background article on health-adjusted life years: https://rethinkpriorities.org/publications/health-and-happiness-research-topics-background-on-qalys-and-dalys
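To make the unit concrete for onlookers, here is a minimal sketch of how a QALY calculation works. The numbers are purely illustrative, not real health-economics estimates for any actual treatment or charity: a QALY is a year of life weighted by a quality factor between 0 (dead) and 1 (full health), and an intervention's benefit is the difference in QALYs with and without it.

```python
# Purely illustrative QALY arithmetic -- the figures below are made up,
# not real estimates for any actual treatment or charity.

def qalys(years: float, quality_weight: float) -> float:
    """Quality-adjusted life-years: years lived times a 0-to-1 quality weight."""
    return years * quality_weight

# Hypothetical patient: 20 remaining years at quality 0.6 without treatment,
# versus 25 remaining years at quality 0.9 with treatment.
without_treatment = qalys(20, 0.6)   # 12.0 QALYs
with_treatment = qalys(25, 0.9)      # 22.5 QALYs

qalys_gained = with_treatment - without_treatment   # 10.5 QALYs
treatment_cost = 21_000                             # hypothetical cost in USD

print(f"QALYs gained: {qalys_gained}")                           # 10.5
print(f"Cost per QALY: ${treatment_cost / qalys_gained:,.0f}")   # $2,000
```

Real cost-effectiveness analyses layer discounting, uncertainty, and population-level aggregation on top of this, which is where most of the legitimate criticism lives, but the underlying unit really is that simple.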
2. "In many cases of EA-directed philanthropy, the donated nets were used for fishing and not malaria prevention, which led to overfishing and the risk of entire communities being starved."
This seems to be made up and exaggerated. The linked article doesn't talk about EA-driven philanthropy or starvation. I don't know how the author landed on the conclusion that, presumably, malaria nets distributed by the Against Malaria Foundation led to starvation. I am unsure if the author carefully parsed through the article they cited. Please correct me if I am wrong here.
While looking into this topic, I did find a GiveWell blog post titled "Putting the problem of bed nets used for fishing in perspective." GiveWell conducted a survey and found that "80% to 90% of households have nets hung up 6 months after distributions." If this is true, then I think the benefit of saving and extending lives substantially outweighs the risk of overfishing. The post in question: https://blog.givewell.org/2015/02/05/putting-the-problem-of-bed-nets-used-for-fishing-in-perspective/
There are other things I think the author of the Bulwark article is wrong about, such as the perception that EA has recently shifted towards longtermism (a surprising number of EAs also hold this belief), when the ideas have been around for a decade or so.
There are places where the author describes their perception of something, or how something makes them feel, as a fact: "The QALY measure promises not only precision but the beautiful tidiness of goodness at a remove. The pathos of distance allows the good deed on another continent to take on a brilliant purity of simple cause and simple effect" — I don't think QALYs promise or imply any of this.
Many of the author's grievances are towards utilitarianism, and the author conflates this normative theory with EA as you point out. I think the bulk of the critique doesn't work once you realize that EA is not the same as utilitarianism.
I liked how you mentioned the 100,000+ people whose lives have been significantly impacted thanks to AMF. I will leave this comment with another optimistic counterpoint: https://forum.effectivealtruism.org/posts/GCaRhu84NuCdBiRz8/ea-s-success-no-one-cares-about
This is a great comment!
"For one, the psychology of EA doesn’t matter to the assessment of whether EA is good or bad"
If ethics is fundamentally about virtue, it's the only thing that matters. If ethics is about consequences, it doesn't matter at all. You're both tacitly treating a particular theory of ethics as 100% correct.
"That one could become good through monetary transactions should raise our post-Reformation suspicions, obviously. *
Again, she assumes virtue theory.
I don't think that's true. When we argue about whether EA is worthwhile, we're asking something like:
1) is EA a force for good in the world
2) is it good, at the margin, for a person to become an effective altruist
The virtue of most EAs is largely irrelevant to this. Regarding 2, if EA is good for the world but most EAs suck, the solution would simply be to be an EA who doesn't suck.
Virtue ethicists -- and I should know, because I am one -- are often very conscious of the effect that a community can have on your personal development. If a virtue ethicist thinks that they have a choice between being "an EA who doesn't suck" but who nevertheless has their morality warped by their association with a deeply flawed community, or being a non-EA with a stronger sense of morality who remains outside the community (whether or not they still allow themselves to be influenced by it to some extent), then they will naturally conclude that they ought to do the latter.
I didn't say she was right, I said there was a miscommunication.
There was no miscommunication--you asserted her view assumes virtue ethics, I argued that it fails even if we accept virtue ethics.
"But this problem avails any attempts to do good. Whenever one tries to better the world, there is a risk they’re not doing any good or not doing the most good they can do."
And that can be alleviated by thinking more about what "good" is conceptually, as opposed to applying sophisticated maths to a naive concept. (“EA is about maximizing a property of the world that we’re conceptually confused about, can’t reliably define or measure, and have massive disagreements about even within EA.”)