30 Comments

Thanks for having me on!

As a sort of show notes, because I have not yet written my own thoughts on the topic in convenient longform, here are a few writers who inform my perspective on effective altruism and whom I broadly endorse:

1. Zvi Mowshowitz. To the extent he and I disagree on any given topic, I generally endorse his opinion over mine. He's written a lot about EA in his time, but his criticism of the EA criticism contest (https://forum.effectivealtruism.org/posts/qjMPATBLM5p4ABcEB/criticism-of-ea-criticism-contest) and his book review of Going Infinite (https://thezvi.substack.com/p/book-review-going-infinite) stand out to me.

2. Erik Hoel. See "Why I am not an effective altruist," which Matthew isn't wild about but which I quite like. https://www.theintrinsicperspective.com/p/why-i-am-not-an-effective-altruist

3. Nuño Sempere, who provides a cogent structural criticism of how the movement functions in practice. https://nunosempere.com/blog/2024/03/05/unflattering-aspects-of-ea/

Jun 28 · Liked by Bentham's Bulldog

Interesting discussion! The main thing that jumped out at me: I'd like Jack to clarify which disagreements are fundamental vs. instrumental. For example, his response to the "surely it's better to save 100 children's lives than to give one blind American a seeing-eye dog" argument seemed to be an instrumental response: "economies are complicated, maybe keeping the money in the US somehow does even more downstream good". But that isn't a disagreement with utilitarian principles! First we should all agree that saving many lives is better, *all else equal*, than merely providing one seeing-eye dog. *Then* we can get into the instrumental question of what actually ends up saving more lives. (And I liked your response there, that there wouldn't be any premature deaths left if ordinary economic activity were as life-saving on the margins as effective charities are, given the relative scales of the two kinds of activity.)
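To make that relative-scales point concrete, here is a minimal back-of-envelope sketch. The ~$5,000-per-life figure often cited for top charities and the ~$100 trillion annual gross world product are rough illustrative assumptions, not precise numbers:

```python
# Back-of-envelope check: if ordinary economic activity saved lives at the
# margin as cheaply as top charities, how many lives would it save per year?
# Both inputs are ballpark assumptions for illustration only.

COST_PER_LIFE_SAVED = 5_000      # ~USD per life saved, commonly cited for top charities
GROSS_WORLD_PRODUCT = 100e12     # ~USD of annual global economic output

implied_lives_saved = GROSS_WORLD_PRODUCT / COST_PER_LIFE_SAVED
print(f"Implied lives saved per year: {implied_lives_saved:,.0f}")
# -> 20,000,000,000 (20 billion), vastly more than the roughly 60 million
# deaths that actually occur worldwide each year, so marginal economic
# activity cannot be nearly as life-saving per dollar as effective charities.
```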

I also wonder whether it could have been helpful to appeal to the concept of "beneficentrism" to clarify the sense in which EA values are undeniable. You just have to think that it'd be a good thing for more people to aim to have marginally greater impartially beneficent impact. That's compatible with special obligations. Just add a bit more impartial beneficence on top of whatever else you think is important. Does Jack really want to deny that?

I'm honestly still a bit baffled by Bentham's answer there, which seemed to imply that the natural consequence of scaling EA activities up to the size of the technocapital machine is the end of death at comparable proximate costs the whole way up. The Machine has done a great deal to improve lives around the world in virtually every regard! "End all death" is not (yet!) on the table, but in terms of what might get us there, I would certainly place advancement of the Machine over donations to the AMF.

For people in a position to direct policy, I think it tends to make sense to think in utilitarian terms. That does not, however, describe most people, and an individual considering their charitable/moral obligations should examine their duties first and foremost.

It's trivially better to save 100 children's lives than to give one blind American a seeing-eye dog, but that's a cartoonish hypothetical that doesn't really get to any of the interesting or worthwhile disputes. I'm going to change it somewhat: is it better to donate $100k to the AMF or to spend $100k saving your child's life? I would say, emphatically, that it would be a dereliction of the duty taken on in raising a child to spend elsewhere in that case.

In general, I confess I find these philosophical abstractions angling towards "so now you agree our values are obviously correct, right? right??" to be quite obnoxious, though I recognize they come from a place of good faith and it's mostly a different mode of conversation. Do I really want to deny that your values follow naturally from premises shared by both of us? Yes! Inasmuch as your values depart from mine, it's not a matter of finding the right wording to explain how your values naturally follow from my premises.

Rather than searching for the right concept to explain why your values are undeniable if I simply look at it right, I suspect it would be more fruitful to demonstrate real understanding of my values and my objections. In this case, my basic answer is that I sincerely and strongly believe that the general welfare is best promoted by people who have a clear understanding of their duty to the near, since everyone is dramatically better off for having a few people who care about them and work towards their good much more passionately than towards everyone else.

It's not, in short, that I disagree that people should aim to do things that are good for the world. I disagree, rather, that agreeing with that pushes me anywhere closer to your values than I described in the podcast. As one gains power, their sphere of responsibility expands to match their capacity. If one were all-powerful, their duty would be impartial towards all. As is, most people appropriately focus on groups smaller than the whole world, and inasmuch as they focus on those broader groups, they do so by giving power to cultures they are within that have demonstrated capacity to be trusted with those duties.

author

Want to clarify: obviously, if EA upscaled, the average effectiveness of EAs would go down. My claim was that current average EA effectiveness obviously outpaces investing in the stock market--if it didn't, then we'd have ended death in a year.

Right, I get that, and that specific claim is what I find incoherent. A marginal death from malaria is not the same metric as all-cause mortality and does not have the same solutions. It feels sort of like when leftists say "Elon has enough money to end homelessness in the entire United States." To phrase it uncharitably, I hear you saying "Oh yeah? You think investing money in things other than EA charities makes the world better, huh? Well, then, why haven't those other things ended all death?" It's a cartoonish parody of the good we should expect to happen from a strong economy, and not having ended all death says nothing whatsoever about EA effectiveness versus the Machine's.

It's good to eliminate deaths from malaria, but there are a great many good things, many of which depend on each other, and it should not be at all obvious to any given individual that, say, donating $5000 to the AMF is worth more than investing $5000 into their pension plan.

Anyway--not to start the whole back-and-forth up again, I just want to emphasize that I do understand your claim, even as I reject the idea that the calculations are anywhere near as simple or obvious as you imagine them to be.

author

Thought experiment: you're about to sell some good that will expire in value very soon. You see a drowning child. You can either sell the good and put the proceeds--about 5k--in the stock market, or you can save the child (the time it takes to save the child will make the good lose its value). Is your claim really that those would equally improve the world?

Yes, I would ruin my expensive suit to save a child inscrutably drowning in a pool next to me who only I could reach. No, I would not ruin my expensive suit to save one of an infinite number of distant children in an infinite number of distant pools, children to whom an infinite number of distant people are more proximate than I am. And no, I don't think it's trivial, or even correct, to think that everyone taking that distance-blind utilitarian approach would ultimately leave the world as a whole better off than everyone making proximity-based decisions like "I will first pursue my own welfare and the welfare of my family and community" that spiral broadly outward towards lifting the whole world out of poverty. But here I simply repeat myself.

You seem to use "values" to mean both values and instrumental empirical beliefs. I think it's very important to distinguish those two. (We may well have *some* value disagreements, but it's very hard to tell how much if you conflate them with instrumental empirical disagreements.) For example, when you say "I sincerely and strongly believe that the general welfare is best promoted by..." this sounds to me like a mere empirical disagreement, not a disagreement of fundamental values. A utilitarian could affirm the very same view. See: https://www.utilitarianism.net/types-of-utilitarianism/#multi-level-utilitarianism-versus-single-level-utilitarianism

To test your "sphere of responsibility" values, revisit the "destroy all wild animal life" case, with a twist. Suppose you discover that someone *else* has flicked the "kill" switch, and *you* can switch it *back* (within the next minute). Is it "crazy" to think that you should do so? If not, then exerting power over the situation cannot be your true objection. A better answer is that "destroying all wild animal life" irreversibly narrows our options, while preventing the mass extinction broadens our options, and a wise utilitarian heuristic is to broaden rather than narrow options when you don't have robustly decisive evidence about which option is better. (If you think it makes a difference that the first switch-flicker exerted power inappropriately, suppose instead that it was a wild gust of wind that flicked the switch into the 'kill' position.)

Yes, if I somehow was placed in the bizarre situation of having clear, personal, pressing, and unique responsibility over all animal life, of course asserting power over that life in order to maintain the status quo over someone trying to destroy everything would be the correct option. It’s a hypothetical that assumes its own conclusion and doesn’t come close to addressing the true divide.

Asserting power over something you are willing to take responsibility for, and capable of taking responsibility for, is not incorrect. Most people are neither capable nor willing to effectively assert authority over broad spheres, nor do they have a duty to do so. If an ordinary person is placed in an extraordinary situation, as with your contrived hypo, they must do their best, but that has little bearing on their true moral duties in normal circumstances.

Jun 30 · Liked by Bentham's Bulldog

"If an ordinary person is placed in an extraordinary situation ... they must do their best, but that has little bearing on their true moral duties in normal circumstances."

I actually feel like this is central to my endorsement of EA. Compared to almost all other people in the world and throughout history, moderately-affluent Westerners are in a very extraordinary situation. I don't usually think of myself as having vast amounts of wealth and power, but on a global scale I do. 100 years ago, the typical person in the US had much less ability to help people far away.

For me, this change in the circumstances of "normal" people leads to reevaluating what we should think of as our duties. I come down thinking we should exercise that power to a moderate extent (e.g. donating 10%), and do so carefully.

Your discussion of duty and responsibility is much more compelling than most EA critiques I've heard, but (from my frame) it doesn't feel strong enough to tell me my best available option is to *not* exercise this power I have, in light of my extraordinary situation. But I admire your willingness to put forward and defend an alternative frame / value system, and I'll certainly continue thinking about it!

Jul 12 · edited Jul 12

I consider myself part of EA and would spend 100k to save my child.

I'm more curious about the community part. Does the shallow pond example change much depending on whether you are in your home town, or a tourist in another American city, some city in France, some city in Egypt, or some city in Africa?

Jun 27 · edited Jun 27 · Liked by Bentham's Bulldog

Heck yeah! Looking forward to listening to this as a fan of both of you. I think the phrase "gay ex-Mormon furry" is overused, though. Out of those three descriptors, only seeing the community benefits of Mormonism seems to be highly important to Trace's outlook.

author

Counterpoint: it's funny!

A great conversation! I think I disagreed with almost every point Jack raised, but it is always interesting to hear someone with radically different foundational assumptions.

Just tangentially related, I find it odd that so many rationalists and EAs seem to like Blocked and Reported. I agree it's fairly funny, but mostly it's just culture war: making fun of mentally ill people and being mean to strangers, because the Internet makes being mean to strangers really easy and fun. There are worse culture war people, of course.

author

I think Jesse and Katie are pretty smart and absolutely hilarious!

Agreed, but once you have these two good qualities you have to decide what to do with them. Talk about history and be funny, go on game shows and be funny, be an actor and be funny...

And they choose: talk about pointless culture war issues, mentally ill online people and viral tweets, and be funny. Which is among the worst choices you can make.

(The funniest podcast is actually The Dollop.)

author

Part of what makes them funny is the subject matter!

Hmm, I don't feel TracingWoodgrains performed well here. I am most definitely not a utilitarian, yet I'm perfectly happy to call EA charities effective, because, yeah, they're great at saving lives efficiently, and that's something any ethical system should care about.

founding

Have you listened to Will MacAskill talk to Sam Harris on his podcast about SBF?

author

No, I don't think so.

founding
Jun 27 · Liked by Bentham's Bulldog

You should really give it a listen. Will explains his connection to SBF and he makes a good case that SBF was not conducting his business in a way that he expected to maximize utility for the world. I don't think the fraud was motivated by EA concerns.

author

I'll check it out!

founding

Will be important for future conversations like this ;).

Note that the episode was widely panned; I personally strongly agree with Zvi that MacAskill's take reflects poorly on him. https://www.lesswrong.com/posts/cbkJWkKWvETwJqoj2/monthly-roundup-17-april-2024#Variously_Effective_Altruism

Also relevant is Will's recent interview on the ClearerThinking podcast. In short, Will says he didn't know all the bad stuff SBF was doing. But he did also see some warning signs, e.g., SBF did not take disagreement well. I would have to re-listen to get the full nuanced picture.

Is there a podcast version of this?

author

Isn't the audio recording a podcast link?

author

(I do not know how tech works)

I think Garrison wanted an RSS feed that he could use with a podcast listening app:

https://substack.com/podcasts
