Arguing About Effective Altruism With Tracing Woodgrains
Talking with a gay ex-Mormon furry about whether effective altruism goes against family values.
I recently spoke with the wonderful and thoughtful Tracing Woodgrains about effective altruism. He is, while in various ways sympathetic, pretty critical. I disagree with his criticisms. Enjoy! Here's the transcript.
Matthew (Bentham's Bulldog): [00:00:00] Okay, hello everyone. This is my first episode doing a recorded podcast with a guest. I'm here with Tracing Woodgrains, or Jack Despain Zhou. He was a producer for the funniest podcast on the internet, Blocked and Reported. He's no longer producing for them;
he's now in law school and writing various articles. He has a Substack that'll be linked below. Very interesting guy, worth checking out. Trace, anything you want to add about who you are?
Jack (Tracing Woodgrains): I'll just note that it's Jack Despain Zhou. Other than that, yeah, at this point I guess I have something of a long history of doing deep dives into various internet controversies, diving into various policy and cultural issues, and trying to navigate a path towards a sane center.
Matthew (Bentham's Bulldog): This conversation arose because I wrote an article where I criticized many of the critics of EA, arguing that the criticisms of EA are not any good, and Jack disagreed. We'll be talking about whether in fact the criticisms are not any good. So I guess, do you want to lay out your basic criticisms of EA, or at least your broad worries about it?
Jack (Tracing Woodgrains): Yeah, I can. I think the easiest, most effective thing will be to describe where I see EA fitting into my framework. I am very interested in religious movements and have been since my time in Mormonism. When I say religious movements, I use it as a broad term to describe the unified root of someone's ethics and beliefs, the organized cultures that spring up around a unified ethos.

You have traditional [00:02:00] religions like Christianity and Islam, the ones that people normally think of as religions. And then, from my perspective, you have newer religions that spring up from a secular framework. I think Marxism was the first serious example of this, back in the 1800s.

More recently you have social justice progressivism, an incredible combination of old-time Marxism, newer postmodern lefty ideas, and a lot of memes that were generated online and coalesced into a unified culture around an oppressor/oppressed ethos; it can be best understood in those terms.

I think effective altruism can also be understood best in religious terms, as something of a new religion. I take particular interest in it because I believe it is, at this point, the default narrative for someone like me who wants to do good in the world: the way you do good in the world, if you are like me, is you go and become an effective altruist.

You devote yourself to this community, and it's a community focused on several cause areas: global health, animal rights more broadly, and AI safety and a few connected things, all tied together on the surface by the concept of doing good effectively, and underneath by a lot of lower-lying memes and community structures. To be clear, I have a lot of respect for effective altruism as a whole.
I appreciate the people within it, and I think by and large they're incredible people who, for the most part, do better at living up to my own values than I do. However, I have some serious issues with the framework as a whole. I think in particular that the framework is rooted in Singer's distance-blind utilitarian ethics, which functionally [00:04:00] proposes that you have no greater duty to someone near you than to someone distant from you. Broadly speaking, effective altruists, while they have disagreements, hold that you should take something of a utilitarian stance; in particular, you see a lot of Benthamite utilitarianism.

You see a lot of preference utilitarianism, the concept that we are going to quantify how much good to do. And I just don't think that distance-blind utilitarian stance holds up in practice. I think that people have greater duties, in meaningful, important ways, to those who are closer to them, by every definition of the word closer.

Parents have a special duty to their children. Friends have a special duty to friends. Neighbors have a special duty to neighbors. People who share similar ideas and are trying to form some sort of common community have a special duty to people within that community. You have a special duty to the areas and ideas close to you. And the more you ignore that, and the more you try to take on a duty to the whole world, the more it can best be understood as looking to exercise power over broader and broader circles. So by its need-blind or distance-blind utilitarian approach, what I think EA really becomes is a movement that very consciously seeks to assert power over the entire world, that very consciously seeks to say: we want to define the way the entire world functions, inasmuch as we receive power.

We want to accumulate as much power as we can, and we want to use it towards the ends that we have collectively decided will be best for everyone, and it is our duty, in fact, to impose this frame on the world as thoroughly as possible. [00:06:00] Now, it doesn't frame itself in these terms.

It doesn't think of itself in these terms; it doesn't precisely think of itself as a power-seeking movement. But it is a power-seeking movement. That's not inherently a bad thing to be, but it does lead you to take on a lot of responsibility that I don't think effective altruists have always risen to meet, and it does mean that many people have standing to seriously question and seriously criticize your frame, which I think people like you sometimes flinch at and get irritated at, wondering why people would criticize someone who's just trying to do good in the world.

That's a short version of many of the critiques.
Matthew (Bentham's Bulldog): Yeah, okay, so there was a lot there. The sense in which I think there are not any good criticisms of effective altruism is that I think there are not any good criticisms of the things that effective altruists do. There are all sorts of criticisms one can give of certain ancillary assumptions that lots of effective altruists happen to have. For instance, it's a true sociological fact that the overwhelming majority of effective altruists are atheists. Suppose you're a Christian and you think that being a Christian is of great value.

You might have a criticism of that. But that wouldn't really be much of a criticism of effective altruism, because it's just a criticism of certain ancillary assumptions that are not integral to effective altruism, that are rejected by some effective altruists, and that are not related to the things that effective altruism does as a movement.

So then I think you have to look at the things that effective altruism does as a movement. They fall into roughly three categories. One of them is working to improve global health. I think that is clearly, unambiguously good: providing anti-malarial bed net distribution so that people don't get malaria.

The cost to save a life via anti-malarial bed net distribution is about $5,000. And on account of their overwhelming effectiveness, these programs run by EAs have saved about 50,000 lives a [00:08:00] year. So this small group of nerds, many of whom are in Silicon Valley, that most people aren't paying much attention to, has basically done about as much good as preventing seventeen 9/11s every year, in terms of total number of lives saved.

So if you look at what EA does as a movement, that's clearly good. Then if you look at the practical advice EAs give, whether the things EA recommends are in fact good, there are certainly some you can have disagreements about.

For example, maybe you're not on board with longtermism. Now, I think longtermism is good, so I would dispute that, but one can have reasonable criticisms of longtermism. Still, I think there are at least some things EAs recommend that are unambiguously good, like giving to anti-malarial bed net charities, which virtually no one does.

And so from these it's pretty clear that the world would be a much better place if, say, every person gave 10 percent of their income to effective charities. And if you think that almost everyone in the world should be more of an effective altruist, should do lots of things recommended by effective altruism, then even if you have some criticisms at the margins of certain ancillary ideas, it seems weird not to call yourself one. If I thought that everyone in the world should be more libertarian, I would call myself a libertarian.

If I thought that nearly everyone in the world should be more effective altruist, give more to effective charities, take certain high-impact careers, and so on, then I think it makes sense to think of yourself as an effective altruist, and in particular it means that effective altruism serves as an important framework for navigating the world.
Now, you raised two broad issues about EA. The first was that it's this distance-blind utilitarianism, where it cares just as much about a stranger on the other side of the world as it does about your own son. And second, that it's power-seeking.

In terms of the first criticism, I agree that is a moral view held by many effective [00:10:00] altruists. In fact, it's a moral view that I hold; I'm a sort of hardcore Singerian utilitarian. But I don't think it's relevant to the things that are done by EAs or the things that EAs recommend.

I think it's more analogous to lots of EAs being atheists, or non-Christians: a Christian could have a criticism of that, but it wouldn't be much of a criticism of the core things done by the movement. Now, you might think: okay, if you're not an impartialist utilitarian, why are we going to the other side of the world and helping people who are dying of malaria when there are people struggling here?

It turns out that the charities that help people in other countries are so much more effective than the charities helping people in the United States, hundreds or thousands of times more effective, that even if you think we have stronger duties to members of our own country, for example, it would still be the case that it's much more valuable to help people in other countries.

It's still much more valuable to provide anti-malarial bed nets than to, for example, give to the Salvation Army. Now, maybe your obligations to your own children would be greater than your obligations to faraway people. That's a perfectly reasonable view to hold, but no part of EA is committed to the idea that your obligations to your children are less great.

The idea of EA is just that people should make helping others effectively a bigger part of their lives. Now, you worry that it's power-seeking. I guess I don't see it as that power-seeking, except in a few small areas: EAs are trying to get certain kinds of AI regulation passed to make AI safe, and trying to get factory farms reformed so that they don't torture as many animals.

And I guess this is power-seeking in one sense, but I don't think it's power-seeking in an objectionable sense. Okay, sorry, I realize I went on for a bit, but those are my two cents. Why am I wrong?
Jack (Tracing Woodgrains): Yeah, absolutely. And I appreciate you laying that all out. So to [00:12:00] start with the end, this idea of power seeking: I think it's important to first define what it looks like to be power-seeking, the extent to which that's good, and the extent to which that's bad.

Whenever you say, I want to do good in the world, what you are saying is: I want power, and I should be trusted with it. When you interfere in an area, when you interfere in someone's life, you take responsibility to a degree for that. For example, as a writer, there is no such thing as writing about a controversy from a completely detached perspective.

Every time I write about, for example, centimillionaire Bryan Johnson and his fight with his ex, or a scandal in the FAA hiring process, or any sort of niche internet controversy, what I do is start digging into the lives of people connected to it. I start directing and outlining parts of the narratives around their lives.

I insert myself all of a sudden into something where, because I am doing it, other people will act differently towards it. Some people will say: okay, he's handling this, I don't have to think about it. Some people will say: he is attacking me, or he is helping me, or this or that. Suddenly, by bringing myself to that situation, I have asserted power in that sphere.

And every time you assert power in a sphere, you take on specific and extensive duties in that sphere. So when I say EA is power-seeking, that's not a criticism precisely in and of itself; it's something that requires a great deal of caution. Because EA, and someone like you, doesn't really hold that there [00:14:00] is any special duty in particular spheres, when you insert yourself into a situation you don't really recognize the extent to which you take on obligations to that situation that arise because of that insertion. And that leads you, I think, to be careless with power at times, as you step into various spheres. For example, a lot of effective altruists look at something like wild animal suffering, and a lot of them, I don't know how you feel about it, but a lot of them feel qualified to ask: if I had the chance, would I press a button and kill all wild animals, because that would reduce their suffering? And this is a serious topic of conversation among effective altruists. I think for almost everyone, the answer to a question like that is: No, you maniac. That is so far out of your sphere of responsibility, and so far out of the sphere of anything that you personally could ever be trusted with, that you would do much better focusing on things that you have a personal understanding of, that you are willing to personally take responsibility for, and that people can credibly trust you with responsibility for.

And I think this sort of distance-blind utilitarianism that you push, and that's very common in effective altruist spheres, almost entirely ignores that duty.
Matthew (Bentham's Bulldog): Yeah, okay, that's interesting. One question I'd have before I address some of that: would you agree that the world would be a much better place if a lot more people gave to effective charities?
Jack (Tracing Woodgrains): I think it is good that the effective altruist movement exists.

I do not think that everyone should join the effective altruist movement or that everyone should follow its charity recommendations. And I do not necessarily agree that the world would [00:16:00] be a much better place if everyone suddenly hopped on board and said, we trust your movement to tell us where to donate, and donated per EA recommendations.

That's not something I'll commit to.
Matthew (Bentham's Bulldog): Okay. Yeah. So you suggested that EA is this power-seeking enterprise, where anytime you attempt to improve the world, you're asserting power. I agree there's a sense in which that's true: anytime you write about something, you're having some effect on the world, and you're saying you're in a legitimate position to make the decisions; you see yourself as having the right to at least take the actions you're taking. But there are tons of cases like this where we don't think it's objectionable, right? If a person sees a drowning child and wades in and saves the child, there's a sense in which they're asserting their power, they're having some influence on the world.

But because the way in which they're asserting their power is clearly, unambiguously good, we don't see it as objectionable. And I would say that in a similar way, the things that EA actually does and recommends, not the things EAs talk about when they're at their conferences having a fun discussion about moral philosophy, but the actual things EA does and the actual things EAs recommend, are clearly, unambiguously good.

And the world would be a much better place if there were a lot more effective altruists. I think that if everyone gave to GiveWell charities until they'd basically met their funding goals, the world would be a much better place. And if you think there should be a lot more people in some group on the margins, it seems weird to think of yourself as a critic of that group. If I thought there should be 10 times more libertarians, it would be weird to think of myself as a critic of libertarianism.

Jack (Tracing Woodgrains): I want to emphasize: I really don't. I don't think there should be 10 times as many effective altruists. I am not calling for everyone to become an effective altruist.

I'm not [00:18:00] saying that your movement should grow any faster than it is growing. I'm saying that I'm glad the people in it are in it, but it is not a movement that I am shouting for people to join in any way, shape, or form. It is not something where I'm saying everyone should join at the margins.
Matthew (Bentham's Bulldog): Okay, yeah. Sorry, I didn't mean to suggest you were saying everyone should join it. The sense I got was that you think the number of people it has is good, and that the world would be a better place if it had more, but not if it had everyone. Is that a fair characterization? Or do you think the current amount is right?
Jack (Tracing Woodgrains): Not particularly. I think that effective altruism as a movement has a lot of very specific issues it needs to reckon with as a movement. Without overcoming those issues, and without recognizing itself as a social movement rather than just an abstract philosophy, I think it has a lot of growing pains to go through. I personally am not by any means calling for it to expand.
Matthew (Bentham's Bulldog): Okay. There's a sense in which it's a little bit unclear what it means to be an effective altruist; maybe it means calling yourself an effective altruist, going to conferences, thinking a lot about these things. But would you at least agree that if tomorrow ten times as many people gave to GiveWell top charities, that would be an improvement to the world?
Jack (Tracing Woodgrains): This is going to be a frustrating answer for you, but I don't know. I simply don't. I think it is beyond my own sphere of experience, and beyond my own sphere of responsibility, to make firm declarations about whether everyone should give to GiveWell. I haven't looked into the charities thoroughly enough.

I don't have thorough enough knowledge of those things; the marginal use of me [00:20:00] saying anything about it is low. It is outside my domain. What I can say is that there are some specific things inside my domain, things that I do feel very qualified to comment on, and very qualified to note that I see as clear and present issues.

And those clear and present issues seem much more pressing and much more salient for me personally to comment on. Those are things I can take a duty towards.
Matthew (Bentham's Bulldog): I find it very hard to see why those issues are supposed to be anything like more salient. The pressing issues that EA addresses are that thousands of people die every day, many of them children, of horrible, preventable illnesses.

These things are preventable. We in the West can do something about them at fairly minimal cost. EAs are doing something about it and are recommending that others do it. A Boeing 737's worth of children dies every day from these horrible diseases, there are things we could do to prevent this, and any individual person can save many lives.

And yet almost no one does that. Those are the things EA is addressing. And the criticisms you're making of EA are that they have these sort of incorrect philosophical positions, that a lot of them, if given the power to destroy nature, would be in favor of that because they think wild animal suffering is very serious.

Even if you think this is a very repugnant view, the fact that a lot of them have repugnant views, I struggle to see why that's supposed to be a big problem.
Jack (Tracing Woodgrains): Let me be specific, then. GiveWell as an organization, and EA as a philosophy, rely on these views as foundational assumptions in many of their analyses and many of their charity explanations. So when someone working downstream from that says, trust me, I've done the calculations, here are all the things you should do as a result, and I'm starting from very different premises [00:22:00] to that, I can't trust it.

I can't trust the downstream calculations from that. Logic works when you're starting from the same axioms; if your axioms are different, I can't trust the logical chain that follows. And from an outside view, you can see things like the Sam Bankman-Fried scandal, which I realize is a frustrating thing to bring up, and I don't try to hang it over EA heads.

However, he was, in every way, shape, and form, an adherent to the effective altruist movement, a central example of it. He did tremendous harm in the world, and the harm he did was perfectly in line with this need-blind, risk-blind utilitarian view that I criticize. He existed within the movement, and people trusted him and gave him more and more power within it, because he aligned broadly speaking with their philosophy and because he seemed trustworthy according to the tools they used to gauge trust. And the movement as a whole suffered a great deal, and caused a great deal of harm, based on his affiliation with it.

That is one specific example of the way these philosophical principles actually do come into play, and of this sort of power seeking.
Matthew (Bentham's Bulldog): I think it's clearly the case, first of all, that the things Sam Bankman-Fried did were very risky. He violated all sorts of laws and social norms in extremely risky ways that were likely to lead to the collapse that was observed, and likely to have a really negative effect on the credibility of the EA movement.

I think that a lot of stuff like this, doing high-risk, high-reward things where you violate common sense morality, [00:24:00] has been routinely condemned by top EAs for literally decades. One of the earliest papers on EA was written by Will MacAskill, and one of the things he says quite explicitly is: if you're taking a job to earn to give, definitely don't take a job where you're doing unethical or illicit things, even if you earn more money; the risks are much greater. And so I definitely don't...

Jack (Tracing Woodgrains): Hold on, a question about that, though. What did Will MacAskill think of Sam Bankman-Fried up until the time of the collapse? What did Will MacAskill personally think and say about SBF until the time of the collapse?
Matthew (Bentham's Bulldog): He was pro-SBF, but I don't think...
Jack (Tracing Woodgrains): Yeah. So set aside the abstract: yes, abstractly, you condemn things that depart from common sense morality. In practice, as a social movement, in the norms and practices being put in place, in the culture, in this frame, the founders, the people boosting it most, explicitly threw their names and their reputations behind this figure. That matters a great deal.

Matthew (Bentham's Bulldog): I don't think they threw their names or reputations behind it. They expressed support for him, but I don't think there's anything unreasonable about that. Prior to finding out that Ted Bundy was a serial killer, it wasn't a mark against people that they had fond views of Ted Bundy. It's not a mark against people that they supported someone prior to finding out that he did bad things. And it's not clear how...

Jack (Tracing Woodgrains): Hold on. He was outlining his whole philosophy. The things that you're saying EA was condemning, he had been very clear about the whole way up: I believe this coin-flipping, risky approach, this defying of social norms. And rationalism as a whole is founded in many ways on first-principles thinking, on deviations from a lot of these social norms, on saying: we're not going to rely on society's morality as a whole. You personally deviate from society's common sense morality in many ways [00:26:00] from first principles, rationally, reasonably, and say: I think there are many ways we should depart from it.

EA as a whole explicitly tries to depart from common sense morality in many ways. So I get the sense that on the one hand you want to appeal to common sense morality: no, we wouldn't do that, nobody would, this is common sense. At the same time, the movement seeks to undermine and tear apart common sense morality in favor of first-principles utilitarianism.

And I don't think you can have it both ways.

Matthew (Bentham's Bulldog): Yeah, I don't think it does. I think you're thinking of the movement as much more overreaching than it is, as if it's trying to terraform the world into a utilitarian dream. Really, it's just a social movement about trying to do good effectively.

I don't think it's true that the things Sam Bankman-Fried did were predictable from what he said. What he described is a view held by many philosophers: the idea that a one-half chance of twice the payout is just as valuable as certainty of half of it.

Now, you can disagree with this moral view, but even if you accept it, what SBF did clearly did not pass the sniff test. It did not pass any sort of reasonable cost-benefit analysis. The best-case scenario was that he got a few extra billion dollars that could have been donated to good charities.

The worst-case scenario was that he loses tens of billions of dollars and permanently tanks the credibility of the movement, such that it's discredited for many decades to come, right? It just clearly did not pass the cost-benefit analysis. And one way to see that it didn't pass the cost-benefit analysis...
Jack (Tracing Woodgrains): Hold on, it's frustrating, though, that you can say: here is the downside, but then we should set aside this downside. Even though he's the sort of thing that could tank the credibility of this movement for many decades, he's not a true EA. It's a no-true-Scotsman: he's not the sort of thing we should really reckon with as a movement, because he didn't follow it well enough, because he wasn't precise enough with it. [00:28:00] From an outside view, that's a very frustrating thing to grapple with, a very frustrating thing to face: someone retreating away from that, rather than saying, yes, this was a product of our ecosystem, a product of our framework, and here are the issues with that, and here are the ways we're addressing them. At some point, you need to be able to address those.
Matthew (Bentham's Bulldog): Yeah, I would certainly agree that if effective altruism had never existed, Sam Bankman-Fried almost certainly would not have committed major fraud.

And I agree that the worst thing caused by the EA movement was Sam Bankman-Fried engaging in lots of fraud. But I don't think that what Sam Bankman-Fried did followed from the principles that EAs actually espouse or recommend. And this is perfectly reasonable: Bernie Sanders does not have to take credit when a deranged Bernie Sanders supporter shoots a congressman at a congressional ballgame.

Bernie Sanders does not have to say that actually follows from his principles, when it plainly does not. And in a similar way, EAs don't have to say that major fraud follows from their principles, when the major fraud clearly does not follow from the principles that they espouse.
Jack (Tracing Woodgrains): so this is a problem that I see from this sort of dual power seeking and power avoiding.
There is no one person who is willing to take responsibility for effective altruism. There is no one person who can say, I am in charge of effective altruism. There is no one organization that says, We are in charge of effective altruism. And, in fact, the people who have done the most to spread the philosophy try to treat it as an in between.
anti movement. Will McCaskill flinches away from this idea of it as a movement and tries to frame it as an abstract philosophy, for example. What that means is that any time You know, so you're pointing out like effective altruism doesn't need to rely on this need, or this distance blind utilitarianism that you yourself espouse.
It doesn't need to rely on this, it doesn't need to rely on that. What that means is that you can pick and choose, and anytime someone throws a criticism at it, you can say, [00:30:00] it doesn't rely on that element that you're criticizing. Or you throw a criticism at an individual, it doesn't rely on that.
And this is. Some of what I mean by responsibility dodging is that there is no person saying, the buck stops with me. The buck stops with us for this movement. Here's the way we're accountable. And in fact, the specific accountability mechanisms set up within effective altruism have serious flaws and have serious flaws that I think Are well meaning, are well intentioned, but are extremely visible from an outside view.
As an example, the first time that I personally got, quickly, to carry, to say what I'm saying with that. As an example, the first time that I personally got involved with the effective altruism movement, in terms of more than just observing from the side, was when, Ben Pace, wrote an extensive takedown on an effective altruist org, Nonlinear.
And from my own angle, and many people agreed when I shared my thoughts on it, there were a number of really clear ways in which this violated common-sense, basic journalistic principles that I was used to, that broader society in general follows. And I went through and outlined, here are the major issues with this.
Not only had nobody within effective altruism done anything about that until I stepped over to it and said, hey, here's a major problem; people were praising it as an example of effective governance. The community health team, the central effective altruism accountability body, had, from my angle, a very screwed-up governance and accountability mechanism that nobody within the movement was noticing.
Yeah.
Matthew (Bentham's Bulldog): Wait, sorry. You cut out for a second. Were you done with that thought or, [00:32:00] Yeah,
Jack (Tracing Woodgrains): Shoot, you're cutting out.
Matthew (Bentham's Bulldog): Okay. Yeah, can you hear me now? I think you were cutting out for a moment. So there are, I think, three distinct things that need to be distinguished relating to EA. One of them is the actual social movement as it exists. That's the people who go to the conferences, the people who attend EA Global, the people who call themselves effective altruists.
As for that, I certainly agree it's not perfect. There are many things that are, in various ways, not good. I'm not familiar with the example that you gave, but I have no trouble believing that there are some EA groups where there are weird things going on with power imbalances and with weird social rules that violate norms and common sense and so on. But I think it's not enough to discredit a movement to just look at the fact that some things go wrong. I'm sure you can find groups of Democrats where there are weird things going on, and various Democrats have done bad things, but you also need to look at the upside.
So I think when we look at the upside, EA as a movement has saved about 50,000 lives a year. It's drastically reduced the number of animals in factory farms. It's been an integral part of getting AI safety to become mainstream, and so on. So I think all of that's very valuable, and all of that drastically dwarfs the harms of the sort of small, bad elements of the movement.
Then there's the philosophical side, which is just that people should try to do good effectively. And that I think is quite trivial. And then the third [00:34:00] element is the sort of concrete recommendations that EAs give, which are: take a high-paying job and give a lot of money to effective organizations like the GiveWell-recommended charities that are distributing anti-malarial bed nets, or charities that are reducing the number of animals being mistreated in factory farms, or take a job being, like, a congressional staffer where you have high impact. There are all sorts of concrete recommendations, and I think those are overwhelmingly good. And so, while I'm happy to defend any of the three, the one that I think is the most relevant to how a person should live their life
is the third one, which is, okay, should I follow the recommendations given by EAs of how to live my life? And I think the answer is clearly yes, given that it's very high impact and it saves lives at such an extraordinarily low cost.
so yeah.
Jack (Tracing Woodgrains): So here I can speak to my individual life and say why I am living my life the way that I am, outside EA, instead of joining EA, giving to these charities, joining AI safety organizations. First off, it's important to consider, for each individual, where their role in a web of power, their responsibilities, their place is.
In my own case, I am not particularly wealthy. The money that I can send to any given cause will not move the needle on that cause more than a drop compared to someone like one of the billionaires in effective altruism; there is almost nothing I can do in that regard. At the same time, looking at where I have responsibilities in terms of spending: I am committed to becoming a father, and the costs of child-rearing for me, of having a kid for me [00:36:00] as a gay man, are extraordinary, and have meant that a tremendous portion of my own income and my own saving goes towards simple questions of child-rearing. So when it comes to donating to charity, there are, priority-wise, other focuses for me, and I think very materially relevant, critical focuses for me.
More to the point, if you look at where my comparative advantage is and where my duty is: I'm a reasonably good writer, and reasonably perceptive about a range of specific, I think less understood, cultural phenomena and specific stories that get less attention and less press than they should. And by operating outside the ecosystem and by forging my own path, what that has enabled me to do is consistently draw people's focus towards what I see: here is a story that I think my voice can make a real impact on, can move the needle on. I should push people's attention towards here, and towards here, and towards here.
And my writing projects, broadly speaking, a lot of them are on trivial internet controversies. And a lot of them are on internet controversies that are, I think, microcosms of the world or microcosms of broader cultures. And some of them are on broader, sweeping cultural controversies, like my FAA reporting, that I think virtually everyone in the United States absolutely should pay attention to.
But what each of those has in common, what everything that I write has in common, is that if I personally don't write it, nobody will. If I personally don't dive into the stories that I dive into and dig up the things I dig up, nobody is going to. They're not aligned with the EA movement in the sense of recommending global charities and AI safety and this or that, [00:38:00] nor do I think that I would have much to contribute on any of those topics.
But they are, so far as I can tell, within my sphere of responsibility and my sphere of capability, much more essential things for me to do. More generally, I think you're really comfortable generalizing about what people as a whole should do, based on your distance-blind utilitarian approach. I can answer basically what specific people, people adjacent to me and people in my direct spheres of influence, should do, and that falls much less clearly, and not particularly, towards the side of EA than it does for you.
For people in my spheres and people in my focus, EA is, I think, the dominant cultural memeplex, and in many ways a flawed but admirable cultural memeplex, that entrances a tremendous number of people who think like me and channels them into a few specific focus areas that obscure other things I think are important.
Matthew (Bentham's Bulldog): Yeah, so the advice that I would give to people would obviously depend on various features of the person, right? If you're a starving African child, I would not recommend giving to GiveWell top charities. And I obviously don't know about your financial situation; I have no desire to go into detail about your financial situation and say you should give this amount to effective charities. But the basic point is just that the vast majority of people in the United States are such that they would not be broke if they took a 10 percent haircut in terms of their income.
And it turns out that the vast majority of people are such that if they took a 10 percent haircut in terms of [00:40:00] their income, if they gave 10 percent of their income to effective charities, they could save like a hundred lives over the course of their lifetime. And so, given that, I think way more people should be doing that, and that's extremely valuable.
In terms of what you're doing, I agree being a writer is, I think, pretty high value, and I think that more people should be writers at the margin, for the standard reason economists give that public goods are underproduced: writing is, well, technically excludable, but at least costly to exclude, and non-rivalrous.
And so it's the kind of thing that we would expect to be underproduced. But I would just say more people should give to effective charities. Certainly not everyone; it depends on the details of a person, it depends on their various skills. The thing I want to do is either go into academia or become a writer, and I'm optimistic that those will be high impact; hopefully being a public intellectual has decent impact.
It might not be the highest-impact thing, but I think I have a comparative advantage at those things. I think I'm a pretty good writer. I think I'm pretty good at academic philosophy. And I would rather be doing those things than try to take a high-paying job or become a congressional staffer, which I don't think I have any special ability to do, for example.
But the point isn't that literally every single person in the world should follow this very concrete guide, do these five things, give 10 percent of your income to charity. The point is, I think a lot more people should be doing that. A lot more people should be giving to effective charities.
A lot more people should be taking careers that they anticipate being high impact. I think that would be very valuable. But what do you make of the earlier point? You're pointing to certain things that EA has done wrong, the Sam Bankman-Fried scandal, the, I forget the details of the scandal that you described, but the kind of unhealthy things happening with the [00:42:00] hiring practices for some group.
But if you're going to count that against EA, then you have to count the just enormous amounts of good, like many millions or billions fewer animals being subject to certain excruciating conditions, in favor of EA; you have to count the 50,000 lives saved a year as counting in favor of EA, and so on.
Jack (Tracing Woodgrains): Again, this is where it gets difficult, because we really are arguing from different frames. You're saying, from this distance-blind, duty-blind utilitarian framework, don't you think, about this movement that is part of a broad system of charities? Because EA wasn't the first to give to charities, it wasn't the first to establish these charities in Africa, it wasn't the first to build any of these practices, it wasn't the first to have a ten percent tithe.
And you're telling me, don't you think that the good of all of this, of this organization, this marginal good spread out over the whole world, is much more important and much more essential to focus on than these very specific, particularized issues that you have specialized knowledge of?
And my answer is: look, I grew up within Mormonism, which had a 10 percent tithe, and I was able to tell people, look them in the eye, and say, I think you all should be Mormon, and I think you all should give 10 percent of your income, and I think you all should live a strict life, and do this and this within the framework of this organization.
When I stepped away from that, I no longer thought that. I no longer agreed with that; the foundational assumptions didn't hold. But I'm sympathetic to, for example, the idea that both Mormonism and effective altruists are correct about a 10 percent tithe, a 10 percent giving to charity; that Christianity had it right, and that effective altruists were right to follow Christianity in that.
I think that, broadly speaking, it's a healthy impulse to give money away. In terms of the specific charities EA donates to: I've donated to the Against Malaria Foundation. I think that [00:44:00] there is no problem with, and there are a lot of good things about, giving certain amounts of money to global impact charities.
But in terms of what you're asking now, giving to effective charities and taking high-impact positions, I want to define those more specifically, because effective is a really vague, really broad term.
When you say giving to effective charities, what you're saying is give to the charities that effective altruist organizations recommend. When you say take high-impact positions, you're saying take positions that effective altruist organizations recommend. And my response to all of that is: from an outside view, I do not trust it nearly the way you do. I do not trust, to the extent you do, that it understands and comports with my moral principles, that it has taken my moral principles into account. Essentially, when you're saying give money to the charities they recommend, you're saying substitute your moral reasoning with their own. You're saying take for granted that they have already thought through and considered and solved the issues that you're worrying about.
And therefore you can give them more power by giving them more money, give them more influence, and trust that they will use that influence for good. My basic statement is: I don't trust that they have thought through the moral issues that I care about and taken them into account. The things within my direct sphere and within my direct influence indicate that they haven't taken into account, in sufficient ways, the moral principles I am most worried about, and as a result,
I am very wary of yielding my moral reasoning to them and saying that, rather than the 10 percent tithe I was giving as a Mormon, I should give a 10 percent effective altruist tithe and that will fill my doing-good quota. [00:46:00] I don't think that's a sensible way for me to approach it.
Matthew (Bentham's Bulldog): Okay, so I think this is maybe a kind of core disagreement, because you keep describing these things as, okay, EA gives recommendations that you'd follow if you were a, I forget the phrase you used, something-blind utilitarian.
Jack (Tracing Woodgrains): Distance-blind utilitarian.

Matthew (Bentham's Bulldog): Distance-blind, yeah, that's a good phrase. But the way I see it, none of the things that EA recommends require that. They all follow from fairly trivial moral principles: that it's better to do more good rather than less, and that if you can save the lives of a hundred African children, or you can instead give one guide dog to one blind American, it's better to save the hundred African children. That, I think, does not require being a distance-blind utilitarian.
Nor does it require affirming any particularly controversial moral principles. I think it's quite trivial. And so I think that the concrete,
Jack (Tracing Woodgrains): I want to pause you and say: it's not trivial. None of that is trivial in any sense. In terms of charity, in terms of all of this, to actually understand the proper impact of, say, I invest $50, or I invest a thousand dollars, in a company, versus give a thousand dollars overseas to charity:
Investing a thousand dollars in a company has all sorts of implications in terms of expanding that company's power in a specific ecosystem, short-run and long-run market implications, economies of scale, all sorts of very complex economic things that people spend their lifetimes studying. Taking that thousand dollars out of circulation within the United States and sending it to some organization overseas, it will be sent to, I hope, but don't know, a cause that is effectively targeted at a specific needy situation. The question of keep this $1,000 in this [00:48:00] ecosystem or send this $1,000 out of this ecosystem: there is nothing simple about that question. That is a question that people have debated, that people have written books about and long back-and-forths about, that very smart and very thoughtful people have come to broad-ranging disagreements on.
I think there is nothing simple about that question whatsoever.
Matthew (Bentham's Bulldog): Wait, so if the idea is that it's better to spend money in the United States than to spend it on anti-malarial bed nets: first of all, and I think this is not super important to the point, I do think most of the manufacturing of the bed nets is done in the United States and then the bed nets are shipped overseas, if I understand correctly. I'm not positive about that. But I think it's just undeniably clear that you have a much higher impact giving to anti-malarial bed nets. The impact of just investing in US companies, unless you invest and then plan to donate it later, the impact of just investing for its own sake, is much less great than the impact of giving to anti-malarial bed nets.
Jack (Tracing Woodgrains): What's undeniably clear about that? How is that possibly undeniably clear?
Matthew (Bentham's Bulldog): I was about to explain. Here's some rough napkin math. There are about 10,000 committed EAs, and they say they've saved about 50,000 lives a year.
Let's say, conservatively, that the committed EAs save about one life a year each. Now, there are a lot of Americans who invest a significant amount of money in the stock market. If all of the Americans who invested significant amounts in the stock market had as much impact as stopping one death a year, then, because there are more such Americans than there are deaths globally, we would end death within a year.
Okay. So one way that we know investors don't have that higher average impact is that we have not ended death. And say you earn $50,000 a year and tithe 10 percent of it, [00:50:00] so that saves about a life a year. If the claim is really that investing that $5,000 in the stock market annually does more good than saving the life of a child every year, I think that's a very implausible claim. That claim does not pass the sniff test.
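Spelled out, the napkin math looks like this. Only the EA figures come from the conversation; the investor count and global death count are assumed round numbers for illustration, standing in for Matthew's claim that more Americans invest significantly than there are deaths globally:

```python
# Matthew's back-of-the-envelope comparison (his round numbers, not real data).
committed_eas = 10_000          # "about 10,000 committed EAs"
lives_saved_per_year = 50_000   # "they say they've saved about 50,000 lives a year"

# Average implied impact per committed EA; Matthew rounds this down to one
# life per year as a conservative figure.
avg_lives_per_ea = lives_saved_per_year / committed_eas  # = 5.0

# The reductio: if every American with significant stock investments had the
# same one-life-per-year impact, the annual lives "saved" would exceed all
# deaths worldwide. Both figures below are assumed orders of magnitude.
us_stock_investors = 100_000_000     # assumed round figure for illustration
global_deaths_per_year = 60_000_000  # assumed rough order of magnitude

# "We would end death in a year" if the claim held; since we have not,
# the average investor's impact must be far below one life per year.
implied_lives = us_stock_investors * 1
assert implied_lives > global_deaths_per_year
```

The point of the sketch is only the inequality on the last line: the claim that marginal investment matches a GiveWell-style donation implies an absurd aggregate, so the average must be far lower.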
Jack (Tracing Woodgrains): Let's look at that. Let's unpack it. When you're saying this person spent $5,000 and saved a life, let's start counting lives. The person who spent $5,000 saved a life. The Against Malaria Foundation, which did the actual work, saved a life. The person who encouraged you to donate that money to that organization, who got this many people to donate this much money, saved a life. The person who got them into effective altruism saved a life. Basically, you start seeing a pretty broadly spreading chain where you're saying this one person's $5,000 saved this life.
And I think that the answer is: many fractional people did many fractional things that presumably added up to the saving of a life, and ultimately the impact of that individual's life, of what they do, gets very fuzzy down the line in terms of their downstream impacts on doing good to other people.
Again, this starts relying on all sorts of assumptions that are very difficult to calculate from a distance, without immediate, direct personal involvement in the situation and seeing where it develops and how it goes. The number of horror stories of trying to interfere in a distant setting and things going unexpected ways is extraordinary.
Meanwhile, you look at, for example, the techno-capital machine, which I have plenty of criticisms of, but I think one thing that people very convincingly point to is that this whole broad economic system that we've built up has [00:52:00] done more to pull the entire world out of poverty, to raise everyone's standard of living, and to push people more and more towards things that I very strongly value, than donating to the AMF ever could.
In a vacuum, like you say, compare this broad techno-capital machine of companies and building things and so forth, in terms of its impact on people's lives, to the AMF. I think the AMF does good work. But in terms of putting more money into this machine versus doing that, you're asking me, again, to draw trivial answers to extraordinarily complex microeconomic and macroeconomic questions that require considering implications, considering downstream effects, considering side effects, and so forth. What it ultimately comes down to is trust that other people have done all of these calculations, have done them right, and have done them in a way that comports with my moral instincts, such that the extremely pared-down version of the argument that you're giving can be trustworthy.
And that comes down.
Matthew (Bentham's Bulldog): You can read the reports. They're available online describing the expected impact of these effective charities, right? So you can read what GiveWell bases their assessment of the high effectiveness of the Against Malaria Foundation on; the answer is they base it on
Yes.

Jack (Tracing Woodgrains): And you can see that, for example, the $5,000 marginal value is something that, there's a reason why they can't just go to Bill Gates and say, okay, check it out: $20 billion to this, and $5,000 times all of that, each of those $5,000 will save a life, bam. There are scaling issues. There are prioritization issues. There are all sorts of things that get in the way of that. The $5,000 thing, it's a tagline. It's effective, it's saying your marginal dollar usually will do that, but there's a reason that everything isn't just devoted to this one cause at [00:54:00] $5,000 a life.
Matthew (Bentham's Bulldog): That's a simplification; their impact is $5,000 to save a life currently. Now, it's true that because of economies of scale, if you increased it dramatically, it might stop being that effective. But so what? That's not the case; it hasn't yet been increased.
It hasn't yet been increased. you note that You know that, a lot of these charities, there are a lot of cases, there are a lot of horror stories of charities like trying to blunder about in the third world and things going very wrong. EA is, the charities that EA recommends are ones where they specifically look in quite, high detail at making sure they don't do that.
For people who are curious, Holden Karnofsky has a piece, I'll try to find it and link it below if I remember, called something like the lack of controversy surrounding the desirability of highly effective charities. Among people who have looked at the development aid literature, there is not significant dispute about whether, for example, distributing anti-malarial bed nets is a good thing.
Jack (Tracing Woodgrains): Yeah, whether it's a good thing. But whether it's better than any other thing, per a moral system that is not shared by most of those people, that's an entirely different question.
Matthew (Bentham's Bulldog): So it's certainly not the case that development economists will, all by themselves, tell us that giving to effective anti-poverty charities is the best thing you could do, even better than giving to animal charities or having children, right? They're not in the business of doing that.
And I think it's a good thing that they're not in the business of doing that. If I were recommending where to give, I'd recommend animal charities over the GiveWell top-recommended charities, because I think they're even more effective. But the point is, it's very effective, and the world would be a much better place if a lot more people gave to these effective charities.
Jack (Tracing Woodgrains): No, I think, I'm going to keep [00:56:00] pushing back on the word effective. You use the word effective a lot, and every time you say the word effective, what you mean is very much in line with your own ethical principles. Very much.
Matthew (Bentham's Bulldog): I don't think so. What I mean by effective is: averts lots of suffering at small cost, right? I'm in favor of averting lots of suffering at small cost, and I think any plausible moral theory will be in favor of averting lots of suffering at small cost. It won't necessarily think that's the only thing that matters. If you're not a utilitarian, you'll think there are all sorts of other things that matter.
Rights matter. You have special obligations. But you'll think at least that's one of the things that matters, and so giving to these effective charities is significantly valuable. I want to just briefly address a point that you made earlier: you said, okay, if one person gets credit for saving a life, and then the person who got them into effective altruism gets credit for saving the life,
then the number of lives saved doesn't add up to the actual number of lives saved.
Jack (Tracing Woodgrains): And the foundation, and so forth. Everyone's counting, I saved X number of lives, and a lot of those lives are being double and triple counted. And quadruple and quintuple and sextuple counted.
Matthew (Bentham's Bulldog): But when people are thinking at the margins, what you want to look at is your counterfactual impact. You want to look at what would happen to the world if you hadn't been around, right? And if that's the case, then it actually is fine that multiple people each get to count the same life saved.
So here's an analogy. Imagine there's a nine-person panel, and every person can spend $10 to get a vote. They either vote yes or no, and if yes wins, then you save a random life somewhere. Suppose I anticipate it being tied 4-4 among the others. Then the value that my spending $10 would have would be saving a life, for $10. And the reasoning is symmetrical across the other people: all the other people are such that, by spending $10, they'll save a life.
When multiple people's actions are [00:58:00] collectively responsible for one outcome, it can be the case that, in the counterfactual sense, you have multiple people each of whom saved one life, where it's the same life, because there are multiple people for whom, if they hadn't done what they did, one fewer person would be alive.
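The counterfactual accounting in the panel analogy can be sketched in a few lines; the 4-4 split and the one-life payoff are just the numbers from the hypothetical above:

```python
# Counterfactual impact in the nine-person panel hypothetical:
# each panelist may spend $10 to cast a vote; if "yes" outvotes "no",
# one life is saved somewhere.

def lives_saved(yes_votes: int, no_votes: int) -> int:
    """Outcome of the panel: one life saved iff yes strictly wins."""
    return 1 if yes_votes > no_votes else 0

def counterfactual_impact(other_yes: int, other_no: int) -> int:
    """Lives saved with my $10 yes-vote minus lives saved without it."""
    return lives_saved(other_yes + 1, other_no) - lives_saved(other_yes, other_no)

# With the other eight panelists split 4-4, each of the five yes-voters is
# pivotal: holding everyone else fixed, each one's $10 counterfactually
# saves a life, even though only one life is saved in total.
assert counterfactual_impact(other_yes=4, other_no=4) == 1

# By contrast, a non-pivotal voter (others already split 5-3) has zero
# counterfactual impact.
assert counterfactual_impact(other_yes=5, other_no=3) == 0
```

This is just the point that counterfactual credit is computed per person, holding everyone else fixed, so five people can each correctly claim one counterfactual life from the same single life saved.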
Jack (Tracing Woodgrains): Yeah, and you can look at things like that. I have no problem with people saying, I have saved a life with this, and people taking credit and getting credit. It's awesome to save lives. It's awesome to do good. But I don't think that impact approach as a whole holds up.
So, expanding further on that, look at me compared to, say, Dustin Moskovitz. Within your need-blind, distance-blind, capability-blind utilitarian framework, your approach to morality says the most moral thing you can do, more or less, is give money to these charities.
In other words, this billionaire entrepreneur has, on the absolute most optimistic estimate of my own earning potential, a hundred times more money than I could ever possibly earn in my life. And maybe lightning strikes and somehow I get extraordinarily wealthy, but I don't really see that happening. This person can, with a small fragment of his worth, completely overwhelm and completely obviate every good thing I could possibly do. He could look and say, okay, this $5,000, whatever, I'm giving $20 million to the same thing, and that $5,000 is completely swallowed up at a distance.
However, within my frame and within my approach, [01:00:00] looking at my marginal impact, and taking it from a what-is-in-your-power, what-is-within-your-sphere-of-duty, what-is-within-your-sphere-of-responsibility sense: again, the stories that I'm talking about, my looking at issues within the effective altruist community, my exploring things like this FAA scandal or the other stories that I'm touching or digging up information on, Dustin Moskovitz doesn't know about, Dustin Moskovitz doesn't care about, Dustin Moskovitz isn't thinking about.
They're nowhere near his sphere of attention or responsibility; his time would be utterly wasted on them. But it is, I think, very clearly more impactful, even from your frame, though it's derived from principles in my own frame. Saying, in terms of what you should aspire to do, aspire to earn to give, or aspire to donate this percent of your income to all these charities and everything:
I think that it just doesn't provide a very useful guide, for the average person and for someone like me, in saying this is how you will do good. Because as soon as you spread their impacts out to a global scale and say you're equally responsible to everyone in the world, the great majority of people are just specks, are just utterly insignificant. Whereas focusing on their spheres of responsibility, their spheres where, if they don't do something, no one will, their spheres where they have specific duties to people and specific duties to things: every single person has a lot of very clearly important and vital nearby duties, and as they fulfill those duties effectively, [01:02:00] collectively they make the world a much better place than if everyone abstracts out their responsibility to the whole world.
Matthew (Bentham's Bulldog): Yeah, so maybe I'm struggling to see what the point is, but I agree that if you believe in special obligations, you'll think the most important things you do in your life will involve your relationships toward other people, your relationships toward your friends and family.
And then it would also be valuable to give to effective charities, though the value of what you do in terms of giving to effective charities is much less great than the value of what someone like Dustin Moskovitz does.
Jack (Tracing Woodgrains): I'm going to pause and be annoying again and say: when you say effective charities, you mean charities evaluated by distance-blind utilitarians as being the most useful. It's simply not just effective. You can keep saying effective all you want; effective is measured per someone's value system, and you are smuggling your value system in every time you say effective.
Matthew (Bentham's Bulldog): When I say effective, read that in, and I can try not to use the word, because we do keep getting hung up on it. What I mean is charities that both avert lots of suffering and create lots of well-being at small cost. So when I use the word effective, pretend that had been the phrase in the sentence.
Jack (Tracing Woodgrains): Sure.
Matthew (Bentham's Bulldog): And the charitable impact that you'll have over the course of your life will probably be very small compared to what Dustin Moskovitz does on a Tuesday. And I agree, it's a slightly depressing fact about my altruistic impact on the world:
what Bill Gates does altruistically in half a day is way more than maybe I'll do in my life. And that's depressing, but okay, so what? [01:04:00] Sometimes the truth hurts.
Jack (Tracing Woodgrains): But I think you can say the truth hurts at that, or you can say: there are special duties, there are obligations that I have and situations that I am in, that Bill Gates has no clue about, has no reason to have a clue about, and will never have a clue about. And the things I do in those situations matter directly, immediately, and powerfully in the lives of specific people, such that I can have a profound impact starting from that frame. And in some cases that will mean I expand out such that my obligations become more global, or I will devote myself in particular to understanding one particular global issue.
I will dive in, I will become the world's expert on it, and I'll be able to funnel other people's resources as they give me more power to direct them to that thing.
Matthew (Bentham's Bulldog): I was intending to suggest that even if you buy that frame, where you think you have special obligations, and that the good you do for the world broadly is of a very different kind, and perhaps even much less important, than the good you do towards those in your local circle,
even if you think that's the case, I don't think it undermines the case for tithing 10%, say, and giving to effective charities at all. And this happens a lot with veganism as well. When you're talking about veganism, the claim that people like me will make is that the vast majority of animals that people eat, about 99 percent of them, came from these horrible factory farming conditions, where they were up to their knees in shit and filth, and they got horribly mutilated, and they were in cages their entire life, and so you shouldn't do that.
At least get rid of 99 percent of your meat consumption. And then the thing that everyone wants to talk about is, okay, but in these rare cases that you can imagine, where it's a backyard chicken, is it okay in that case? And you can have an interesting discussion about whether it's okay to do it [01:06:00] in that case. But the important thing, the thing that is worth discussing, is where the advocated standard of conduct diverges most sharply from how almost everyone lives their life, right?
If it turns out that you should reduce your meat consumption by 99%, that's a very significant implication. And similarly, if it implies that almost everyone in the United States should start giving a lot to these GiveWell top charities, I think that's very important. We can argue about the philosophy.
We can argue about whether you should go to the conferences and call yourself a social movement. I think the important thing is the way EA recommends most people change their standard of conduct. And that, as we discussed earlier, I think is just clearly, unambiguously good.
And no appeal to special obligations, I think, undermines that case one iota, or, I don't know if iota is the right word. One bit.
Jack (Tracing Woodgrains): And this is the difficulty of talking across frames. For example, you say the case for veganism is so strong that everyone should obviously reduce their meat consumption 99 percent because these animals are suffering and so forth.
And immediately what I want to respond to that is a lot of things, again, about the question of the nature of obligation, and in particular about when a life is worth living: the question of whether most of these animals are such that they still want to stay alive, that even in these situations they are not in such a position that they actively seek to destroy themselves.
And there are exceptions here, but broadly speaking, I think there are serious problems with factory farming, and this is an area that I could learn a lot more about and could dive a lot more into, and I haven't particularly. But I think this focus on suffering versus pleasure, for example, the suffering-pleasure dichotomy, [01:08:00] completely mistakes the signifier for the signified, and obfuscates a focus on life versus death and other things that are more core to me. So when you're saying, obviously the suffering focus is incredibly important and leads you to all these conclusions, I'm saying, yeah, look, that's downstream of a lot of assumptions that you hold that I don't, downstream of a lot of ethical frameworks that you hold that I don't. And while we can find some areas of alignment and agreement on these, you cannot give coherent advice to me from your frame in a way that doesn't lead down a thousand rabbit holes, because we have very different axioms and very different ways of looking at the world.
And so you're implying a much more certain frame, and then assuming everyone will agree with this frame, in ways that just don't hold in practice.
Matthew (Bentham's Bulldog): But I think you're much too pessimistic about the possibility of people with different moral theories coming to agree on practical courses of action.
You and I have very different moral theories. Nonetheless, we both agree the Holocaust was very bad. I'm sure you and I would agree on the vast majority of judgments. Well, maybe it's not clear that we would. I do think that probably there are an infinite number of moral judgments,
so there isn't a fact of the matter about the vast majority; it would be, like, undefined. But anyways, the point is, I think the things that EAs recommend, like give so that kids don't die, are just utterly trivial. Any moral theory that says that giving so kids don't die of malaria is insignificantly valuable is just clearly false.
Jack (Tracing Woodgrains): You keep saying, this is trivially true, this is trivially true, but then you're saying, give to malaria charities, but actually I think you should give to animal welfare things because that's higher [01:10:00] impact, but actually I think this or that. And a Christian can say in response, no, actually I think you should tithe to my church and you should spread my religion,
and this is the highest-impact thing. You're saying the thing that follows downstream from many of my moral assumptions is so trivially true and so emphatically true, because my moral assumptions are so obviously powerful, lurking in the framework, lurking in the background. It just doesn't hold in practice.
But one thing I can say, in terms of commonality: a lot of my focus is on culture building, on the power of culture. Some of the ways that I think we do find a lot of commonality are downstream of frameworks that we share, of effective culture building that both of us are influenced by.
For example, one thing that I admire about your approach a lot is that, while you have this very rigid utilitarian framework that I think gets in the way of a lot of clear conversation about some things, you are heavily influenced by the specific Slate Star Codex ecosystem. And part of what that means is that you're very willing to talk with people who have very different frames than yourself.
You're very willing to associate with people who have very different frames. You'll go on and talk with someone like Richard Hanania, talk with someone like Walt Bismarck, and you can reach across.
Matthew (Bentham's Bulldog): That one may have been a mistake, but...
Jack (Tracing Woodgrains): No, I don't think it was. I think it was profitable to watch your exchange.
It was compelling to see where you guys bounced off each other and so forth, and I think people got value out of it anyway. So that's something that you and I share, for example. Then you look at the recent controversy around the Manifest conference that both of us attended, where a lot of effective altruists who have a different angle on that [01:12:00] are suddenly raising a question: should this thing be in disgrace, according to our movement? Should this conference be in disgrace for letting some of these speakers who they find controversial talk?
Should this be excluded? Because you and I are downstream of the Slate Star Codex ecosystem, we come down on one side of that question; because they are influenced by a progressive social justice ecosystem that's much more common in the non-rationalist parts of the EA sphere,
they come down on a different side of that question. And I think the things that we find in common, the things that you and I agree on that some of them wouldn't, are directly influenced by the building of local cultures,
and specifically by the unusually effective work of one person, Scott Alexander, building a local culture that has influenced many people like us. And yeah, so in terms of finding commonality, that's one thing that I think has been directly and obviously impactful for us, and that EA lacks the framework to address nearly as frequently and nearly as clearly
as some of this charity stuff.
Matthew (Bentham's Bulldog): Yeah. I think I agree with most of that. Just discussing something you said earlier: you were suggesting that I'm being inconsistent, where I say it's so trivial that you should give to the anti-malarial bednets,
and then I say, but I think giving to animals is higher impact, and maybe a fundamentalist Christian thinks giving to Christian charities is higher impact than that. My claim is not that it's trivial that giving to the Against Malaria Foundation, for example, is literally the highest-impact thing that you can do.
In fact, I don't think it's literally the highest-impact thing that you can do. I think that giving to, say, animal charities has higher impact than that. Maybe there are some cases where taking a certain high-impact career is even more impactful than that. My claim is that the thing that's trivial is that giving [01:14:00] to the Against Malaria Foundation, for example, is significantly valuable.
And so either you should give a lot of money to the Against Malaria Foundation, assuming you're financially able to and so on, or you should give to something else that's of similarly great value. Now, if you think that non-Christians go to hell, for example, and you're a religious Christian, then from within your framework, don't give to the Against Malaria Foundation, give to the people spreading Christianity, because averting hell is way more important than the other stuff.
But if you don't have that view, if you're a secular person, at least ostensibly secular, then I think, yeah, you should either give to the Against Malaria Foundation or one of the other high-impact charities, depending on what your philosophical assumptions are.
But at the very least, give to one of those or something better.
Jack (Tracing Woodgrains): What I would say is, I think the trouble is, once you collapse your position to something that I could agree with, it suggests that I am already an effective altruist, and I don't think that's consistent with what effective altruism actually is.
When I collapse it to something I can agree with, what I hear is: you should be very willing to sacrifice your time, your energy, and your money in pursuit of causes you believe will make a meaningful difference in important ways. The trouble is, many more people than effective altruists hold to that.
I'm not by any means perfect with it, and I'm not by any means holding myself up as some moral paragon, but I aspire to live in an extremely principled, focused way, towards trying to understand how I can do good and how I can build effective culture, and to work with that. And when I'm giving my time, my money, my attention to things, what giving that time, money, and attention looks like is, for example, Scott Alexander's grants to people in [01:16:00] the ACX community for moonshots within that community
that seemed potentially interesting. I think that's a fantastic way to give to charity. Or someone devoting their time and attention to building, you know, a local culture that is effective, that helps the people within that culture and then spreads beyond it. I think that's worthwhile.
Someone becoming a public interest attorney, helping clients who are in dire straits for various reasons, I think that's admirable. Although even there, I think there's a case for becoming an attorney in a different set of circumstances. Basically, as soon as you collapse it to this broad "give your time and energy to meaningful things,"
and "meaningful" I can agree with, as opposed to "effective" as defined by you and effective altruism, suddenly all those moral disagreements that you want to set aside, where you want to say, oh, everything's trivial, everything's trivial, you don't have to worry about any of this, suddenly they come rearing their heads, and we have to have really thorough disagreements about
what is most effective for any individual and why. And while you can, because they broadly align with your instincts and your intuitions, accept that the EA recommendations are the best possible uses for your time and money, I find much less impact for me in those recommendations, because I don't think they take into account, in meaningful ways, all sorts of relevant moral considerations from my angle.
Matthew (Bentham's Bulldog): Yeah. So we have been going for about an hour and a half, and I think the longer a podcast goes, the fewer people watch it. So I'll just say this finally, and after this you can have the last word and then maybe we should wrap up. As I described before, I think there are three different things masquerading under the name effective altruism.
There's first of all the actually existing social movement, and the question of whether its existence is good. Then there's the concrete list of recommendations that EAs give, which are [01:18:00] like: give to the Against Malaria Foundation, take a job as a congressional staffer, maybe take a job earning to give, where you become a high-frequency trader and then give away a lot of money.
And then the third is just the philosophical side: that you should try, with your money, to do good effectively. I think the third is trivial in a philosophical sense. It's obvious that we should do it, but it's not trivial in a practical sense, in that almost no one actually does it.
So you have lots of people who vaguely want to do good in the world, but you have very few people who, when they're doing good, actually look carefully into how much good the things they're doing do.
I'll take my grandparents as an example. Very lovely people, but my sense is that basically the way they give to charity is they hear a nice story on NPR about some charity that sounds like it's doing a good thing, and they think, that's a nice charity, and then they give some money to that charity. What EAs say is, you shouldn't just think about it in terms of, you hear a nice story, give there; you should actually look in a really clear and detailed way at which charities you think do the most good.
That'll depend on some of your moral views. If you're a religious Christian, then you'll think maybe Bible distribution does a lot of good; if you're not, you won't. But at least you should try to think very carefully about how to do the most good. I think that bit is trivial. The second claim is the concrete recommendations that are given by EAs, that you should give to the Against Malaria Foundation.
I think that one is not quite trivial. Maybe it's that you should recognize that those are at least good recommendations to follow, but maybe you think there are better things to do with your time and money. So maybe, if you're a religious Christian, there's something slightly better than, say, becoming a high-frequency trader
and giving to effective charities. Maybe it's better to become a deacon or something, and then you bring [01:20:00] more people into heaven, and that's infinitely valuable or something. Obviously this is not my view, but there's at least a consistent view on which that would be higher impact.
But if that's true, you should at least recognize that we want more people to be following this advice, that, all else equal, it's good if more people follow this advice, even if in some cases it's not perfect. And then the third thing is just the question of whether the actually existing effective altruism movement as a whole does more good than harm.
And I think for that, the fact that it saves about 50,000 lives a year, the fact that it's responsible for huge numbers of people giving to charities, taking high-impact jobs, reducing the number of animals in factory farms, and improving conditions for lots of animals on factory farms, I think that is clearly, unambiguously good.
So those are the three propositions that I feel confident in and willing to defend. There are certainly legitimate criticisms that one can raise at the margins: okay, this EA organization isn't very high impact, this one has creepy, weird cult things going on in the Bay Area, this Sam Bankman-Fried guy seems like he's up to no good.
But I think those core claims are at least very plausible. And while you and I have lots of different philosophical assumptions about a wide variety of things, I don't think the things that you and I disagree about are relevant to any of those three core claims.
So I guess that's my final word on the topic. Yeah. Wait, are you there? You're frozen.
I take it to be trivial that it's good that there are a lot of EAs, that it's good that the EA movement exists. And a lot of the people who call themselves critics have lots of criticisms at the margin, but it's weird to be a critic of X if you're very glad that X exists and you agree that, all else equal, it's good if more people join it.
It seems like you don't agree with that, but I'm just saying this is true of a lot of the critics, and I think you're at least sort of in that camp. Anyways, sorry, go ahead.
Jack (Tracing Woodgrains): Yeah, I appreciate those points, and to respond to each of them in turn: I think that when it comes to effectively evaluating how to do good in the world, and I believe EA agrees with [01:22:00] this, most people are not equipped to do it.
Most people are not equipped to sit down and ask of every option: is this effective? Is this effective? Is this effective? They don't have the time, they don't have the energy, they don't have the mental capacity, whatever it is. They are in a difficult position. So you are always going to need to rely on someone's recommendations for that.
In many areas, you're going to need to rely on some sort of culture. People are reliant on cultures to tell them how to do good. And not only are people reliant on cultures to tell them how to do good, they are reliant on cultures to make them do good. With that, I think where EA is admirable is in building a culture that encourages people to do good.
I think it's comparable to, for example, Mormonism in this: the great majority of Mormons do give 10 percent to charity and do give a tremendous amount of time and effort to volunteer efforts to make their communities better places, and so forth. They take all of these things very seriously. And they do so entirely outside of the effective altruist framework.
They do so from a completely different ethical frame, a completely different moral frame, and I think they make some severe mistakes along the way. But in terms of building a culture that tells people how to do good, many organizations do that, many traditional religions do that, and I think it's an admirable thing to do.
And broadly speaking, one thing we are desperately in need of is more effective, more capable secular cultures at building this do-good-by-default thing. I think EA is building that for secular people in a way that it hadn't been built before, but it is a prototype that can be iterated on, can be examined, and that ultimately, I think, a different movement could improve on much more effectively, or much more meaningfully per my own value system. More directly, on the specific organization of EA:
Like I've said, I think that there are a lot of really [01:24:00] good people in Effective Altruism. I think it does a good job of attracting people who I respect, people who are sincerely committed to doing good in the world, and people who are working to build meaning. I think that they are uncommonly receptive to criticism in meaningful ways, uncommonly kind and generous and thoughtful.
I respect them as people, I like getting along with them, and I appreciate the graciousness with which many of them treat me. I also think that, because they keep this sort of multi-pronged thing, because they don't really often see themselves as having a duty to build a culture,
a duty to build a specific organization, a specific culture, and because they try to do this decentralized thing of "here are the minimal premises; certainly you must accept this; certainly you must accept this," they have a lot of important blind spots that can and should be addressed.
And broadly speaking, it is a mistake to affiliate yourself, or to claim to align, with a movement with which you have deep, far-running, foundational moral disagreements. It is, and should be, absolutely trivial to say that in terms of evaluating an organization, in terms of aligning yourself with an organization, because you are reliant on a culture that will teach you how to do good and guide you towards good things.
If you suspect that the culture is wrong about many fundamental things, you should expect that many of the things downstream in that culture go in directions unlike the directions you would go in a better environment. And I think that the responsibility of someone in a position like my own is to learn from EA, to look for the ways I can do good in my sphere and the things that I can take from that movement, and to build more meaningful movements and more meaningful cultures that align with my own instincts.[01:26:00]
Matthew (Bentham's Bulldog): All right, great. Jack, thanks so much for coming on. I think this has been a very interesting conversation. I want to say, all the nice things,
oh, what? I just pressed a button and something weird happened on the screen.
All the nice things you said about EA, basically that's how I feel about you. I think you're very nice, charitable, with lots of interesting things to say, but your values are all wrong and so on. Although, I guess my criticisms of you would be different. But anyways, yeah, thanks so much for coming on.
Maybe I'll at some point write an article about your objections, because I think this has been very thought-provoking. For people who want to find more, he has a blog with lots of interesting stuff. Yeah, thanks so much. I guess I'll stop the recording now. Bye.
Thanks for having me on!
As a sort of show notes, because I have not yet written my own thoughts on the topic in convenient longform, here are a few writers who inform my perspective on effective altruism and whom I broadly endorse:
1. Zvi Mowshowitz. To the extent he and I disagree on any given topic, I generally endorse his opinion over mine. He's written a lot about EA in his time, but his criticism of the EA criticism contest (https://forum.effectivealtruism.org/posts/qjMPATBLM5p4ABcEB/criticism-of-ea-criticism-contest) and his book review of Going Infinite (https://thezvi.substack.com/p/book-review-going-infinite) stand out to me.
2. Erik Hoel. See "Why I am not an effective altruist," which Matthew isn't wild about but which I quite like. https://www.theintrinsicperspective.com/p/why-i-am-not-an-effective-altruist
3. Nuño Sempere, who provides a cogent structural criticism of how the movement functions in practice. https://nunosempere.com/blog/2024/03/05/unflattering-aspects-of-ea/
Interesting discussion! The main thing that jumped out at me: I'd like Jack to clarify which disagreements are fundamental vs instrumental. For example, his response to the "surely it's better to save 100 children's lives than to give one blind American a seeing-eye dog" seemed to be an instrumental response: "economies are complicated, maybe keeping the money in the US somehow does even more downstream good". But that isn't a disagreement with utilitarian principles! First we should all agree that saving many lives is better, *all else equal*, than merely providing one seeing-eye dog. *Then* we can get into the instrumental question of what actually ends up saving more lives. (And I liked your response there, that there wouldn't be any premature deaths left if ordinary economic activity was as life-saving on the margins as effective charities are, given the relative scales of the two kinds of activity.)
I also wonder whether it could have been helpful to appeal to the concept of "beneficentrism" to clarify the sense in which EA values are undeniable. You just have to think that it'd be a good thing for more people to aim to have marginally greater impartially beneficent impact. That's compatible with special obligations. Just add a bit more impartial beneficence on top of whatever else you think is important. Does Jack really want to deny that?