Treat Like Cases Alike
Especially in your head
Here are two things that intuitively both seem true to most people:
1. You have license to pick which charity you give to. You don’t have to give to the best one.
2. If you have two options to save lives, you don’t have license to save the fewer rather than the greater just because you feel like it.
However, these demonstrably conflict. Some charities save lives, and some save more lives than others. If you must always perform the action that saves more lives, then you are not permitted to donate wherever you want. You must give to the charity that saves the most lives. Unsurprisingly, I think 1 is false and 2 is true.
2 seems pretty obvious. Suppose that two people are going to be hit by a bus and ten people are going to be hit by a train. You can either pull the ones who are about to be hit by the bus out of the way or the ones who will be hit by the train. Suppose additionally that you feel a strong emotional attachment to saving people from buses, but not from trains, and that you once watched a story with your grandma about people being saved from buses. Does this give you moral license to save the two rather than the ten?
No, obviously not. Thinking that you should save the ten instead of the two is not the calculative logic(s) indicative of late stage capitalism. Nor is it colonialism, nor does it get doing good all wrong, and nor is it an attempt by those nefarious members of the TESCREAL bundle to swindle you into thinking that they care about people, so that they can turn your grandmother into biofuel to run simulations of digital shrimp.
Should we think of 1 and 2 above as two plausible principles that conflict, so that we have to give up something that seemed obvious? No, I don’t think so. I don’t think there really are two principles that conflict. The first principle stops being intuitive when you vividly hold the second in mind. When you vividly think about the fact that charities save lives, and that choosing between charities just is choosing between saving more lives or fewer, it no longer seems like you have license to give to less effective charities. The second claim, in other words, provides greater context that dissolves the intuitiveness of the first.
There are many other domains where one fact can dissolve another’s force, rather than simply outweighing it. The fact that someone had a knife at the crime scene is evidence that they committed the crime. But once you learn that it was a butter knife, that dissolves the evidential force of the first fact. The first fact no longer gives you evidence (unless the murdered party in question is a stick of butter, which is unlikely, since sticks of butter are not typically homicide victims).
It may be intuitive that you should invite your friend Maxwell over for dinner. But when you learn that Maxwell frequently murders people with a hammer (fact check: true), that doesn’t just conflict with the intuition that inviting Maxwell over is the right thing to do; it dissolves the intuition’s original force. The intuition is not outweighed but eliminated. It no longer has any force.
I think this is an error that afflicts lots of discussion of effective charities. People have all sorts of heuristics for thinking about saving lives. They have all sorts of weaker heuristics for thinking about charity, such as that it’s up to you where to give, and there aren’t extremely strong reasons to give. Then, even after being informed that charity does save lives—at least the best charities—they continue to think of charity as being like charity, rather than being like saving lives. This would be like continuing to mentally classify the guy with the butter knife as “the man with the knife at the crime scene.”
Intuitively it seems like saving lives is a big deal. If you can save someone’s life, you really ought to. Someone who saves a life every year is a hero, and if you can save a life every year, you should. This is the core intuition behind the drowning child argument: if you can prevent terrible things from happening, without giving up anything of comparable value, you should.
We don’t have the intuition that charity is as big a deal. But that’s because we don’t intuitively grok that charity saves lives. Social norms around charity do not treat it as a very big deal. They maybe politely suggest you should give some, but they don’t treat it as a life and death matter.
Even though it, quite literally, is a life and death matter. Each time you give a few thousand dollars to effective charities, one fewer child dies. The sensible thing to do, then, is to revise our intuitions about charity. It is more important than we intuit.
Something similar is true of Longtermism. The core idea behind Longtermism is amazingly intuitive. Future people matter. One of the reasons not to trash the environment is that it would be bad for those who have not yet been born. The future could have a lot of people. So if future people matter anywhere near as much as present people, then the main impact of our actions is on the far future. In fact, well above 99.9% of the expected impact of our actions comes from their impact on the far future.
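To see how a figure like that could fall out of the arithmetic, here is a rough back-of-the-envelope illustration; the population numbers are purely hypothetical assumptions chosen for the sake of the calculation. Roughly $8 \times 10^9$ people are alive today. If at least $10^{14}$ people could live in the future, and future people count equally, then the far future’s share of the total stakes is

$$\frac{10^{14}}{10^{14} + 8 \times 10^{9}} \approx 99.99\%,$$

comfortably above 99.9%, and any remotely large estimate of the number of future people gives the same result.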
And yet intuitively, we don’t think of actions to benefit the long-run future as being that important. Our intuitions aren’t in accordance with our reflective judgements. Even as we reflectively endorse the overwhelming importance of the long-run future, it doesn’t weigh adequately on our decision-making. Our direct intuitions about Longtermist interventions don’t change by a factor of a million, even after philosophy tells us that they’re a million times more important than we previously thought. So if you want your direct intuitions to remain accurate, then when thinking about Longtermist interventions, you will have to imagine them as being many times more important than you’d otherwise have been inclined to believe. You will have to manually override your instinctive apathy.
It is plausible that in expected value terms, giving small amounts of money to Longtermist organizations is better than saving hundreds of lives. The expected impact on the future is just so massive. And yet when we think about giving to these organizations—about whether morality can demand that we prioritize this over causes that are salient to us—our intuitions are not calibrated. We do not fully appreciate the scale of the good we can do. The sensible moral accounting isn’t reflected in our intuitions.
My proposal for dealing with this is to treat like cases alike. Suppose you are deciding whether to take a job at a Longtermist organization. Suppose, in addition, that you think taking a Longtermist job is as good as saving many thousands of lives. When thinking about your decision, you should try to think as if your action were saving that many lives.
If you think giving to the Against Malaria Foundation is as good as saving children from ponds, then when deciding how much to give, you shouldn’t think about the Against Malaria Foundation. You should try to imagine that you were deciding how many children to pull from ponds. If your brain thinks of giving to charity as some extraneous dollop on top of ordinary morality but thinks of saving lives differently, when you decide how much to give, you shouldn’t think about charity. You should think about saving lives.
You should think about actions in terms of their closest parallel in morally relevant respects. If you can’t talk your brain into seeing how important giving to life-saving charities is, you shouldn’t think of giving as going into the “giving to charity” bucket. You should think of it as going into the “saving lives” bucket. For that is what you have trustworthy intuitions about—intuitions that you reflectively endorse.
If you think that eating meat is worse than kicking a puppy, then when you are deciding whether to eat meat, you shouldn’t, in your head, think about the ordinary socially-approved practice of eating meat. You should think about kicking puppies. You should imagine that the thing you were considering doing was kicking a puppy. The fact that society approves of one of those actions and not the other tells us little about their respective wrongness.
Our brains are constantly rebelling against us, wanting us to slip back into the dull fog of social approval—never considering anything to be different morally from how society treats it. Even when it is. To see the moral status of what is in front of one’s nose requires a constant struggle.


I don't agree that point 2 is intuitively true to most people. I think it's a normal part of human psychology to value some lives more than others -- e.g., if I have to choose between saving my wife and saving a group of ten strangers, I'm going to save my wife.
> If you have two options to save lives, you don’t have license to save the fewer rather than the greater just because you feel like it.
This would require me to spend infinite time preventing soil nematode suffering, so I reject it.