Evaluate The Version Of An Idea That Most Convinces You
Not just the version expounded by a typical proponent
If you spend a lot of time listening to critics of effective altruism, one thing that’s notable is how much time they spend criticizing things that are simply not effective altruism. They crow endlessly about various controversial bits of utilitarian philosophy—arguing, for instance, that we have special obligations or rights. While this is all interesting, and might be a threat to utilitarianism, it simply doesn’t touch the core claim of effective altruism—that one should aim to do good effectively.
Certainly one way that a person can arrive at effective altruism is by starting out a utilitarian and concluding that doing good effectively is a good way to maximize utility. But it’s not the only way, and the core effective altruist claim is much less controversial than the core utilitarian claim. The effective altruist isn’t committed to utilitarianism, but only the much more modest beneficentrism—that doing good effectively is important and should be a major aim of one’s life.
Similarly, when arguing about the existence of God, people spend a lot of time talking about things that don’t have to do with the mere existence of God. When arguing about God’s existence, it’s common to raise the problem of hell, or to argue that the arguments fail to establish the God of Christianity specifically. I’ve had people claim, on more than one occasion, that my arguments, even if correct, don’t show Christianity is true—as if this were news to me, despite the fact that I think Christianity isn’t true.
Why do people do these things? I think there’s a quite simple answer: people conflate the best version of a view with the typical version of a view. When considering a view, they don’t consider the most convincing version, but only the version expounded by a typical proponent.
If, when considering arguments for God, you treat them as if they were arguments for the entire worldview of typical theists—most of whom, in many settings, are Christian—then you’ll be left severely disappointed. They mostly don’t establish that! Generally, when arguing for Christianity, people start by arguing for the existence of God, and then segue into presenting historical evidence for Christianity.
A lot of effective altruists are utilitarians. For that reason, people treat criticisms of utilitarianism as criticisms of effective altruism—for they feel they’re attacking, in some broad sense, the effective altruists’ worldview.
Of course, you shouldn’t always evaluate the best version of a person’s worldview. If you’re arguing with, say, a young Earth creationist, you shouldn’t ignore their arguments for young Earth creationism and pretend that they’re raising the best arguments for theism (it would be very weird to respond to a YEC claiming that the second law of thermodynamics rules out evolution by saying, “well, the best argument for theism is the anthropic argument, but it relies on the false self-indication assumption”).
When arguing with people, you should address what they’re saying—not what other people like them tend to say, or what you think the best version of their arguments might be. (This was apparently one annoying thing Parfit did: when asked stupid questions—and yes, there are such things, like “can I eat pianos?” or “is the self-sampling assumption true?”—he’d always steelman to such a degree that he’d end up answering some sophisticated question that wasn’t asked.) But in the same vein, you should address what they are actually saying, not other things that they think. If a person argues for effective altruism on the grounds that we can and ought to do a lot of good, you shouldn’t veer off on an unrelated tangent about utilitarianism.
But when thinking about ideas yourself, you should address the version of the view that you find most convincing. When a person adopts a view, they remold it in their image. My theistic view is very different from the typical theists’ view. In the days when I was an atheist, because I fell prey to confirmation bias—seeing theism as an intellectual enemy to be defeated—I spent much time considering the typical implausible versions of the theistic view and too little time considering the version of theism that would be most likely to move me.
For example, I spent a lot of time thinking about how ridiculous the contingency argument was—if modal rationalism is right (which I was and remain very confident in), if the possible worlds can be figured out a priori, then simply declaring God to be necessary, without a deeper explanation, will not do. When I became a theist, however, I concluded that God is either contingent or necessary in a way that could be figured out by an ideal reasoner. The version of theism that ultimately moved me wasn’t one I spent very much time considering as an atheist, and I spent too little time thinking about the really convincing arguments for theism like the anthropic argument (for years something like it had been in the back of my mind, but I never gave it much thought).
When evaluating a view, you should think about the version of the view that you’d be most likely to adopt if you concluded the view was right, and then think hard about the best arguments for the view. Even if other people have other arguments for the view, if they require buying into things that you feel confident you won’t accept, then you’re unlikely to change your mind.
Note, this isn’t just about accepting the version of a view with the most intelligent proponents. Alexander Pruss is very smart, but he’s a classical theist, buying into a worldview radically different from my own. While Pruss might be one of the theists I’d be most worried about debating when I was an atheist, he’s not one of the theists most likely to convince me of his worldview (though he has, of course, written many articles that have given arguments for theism that I find convincing).
If you’re a theist seriously investigating atheism who is very confident in moral realism, for instance, the version of atheism you should seriously consider is one on which moral realism is true—even though many smart proponents of atheism deny moral realism. Radical belief change rarely happens all at once—it occurs through small steps, so generally you should try to evaluate the versions of alternative views that are most similar to your current worldview. If you’re a naturalist, don’t spend all your time considering weird versions of Thomistic metaphysics—probably start with something nearer to the fine-tuning argument, or consider the ways that theism solves puzzles (e.g. why anything exists at all) that you find genuinely challenging to think through. If you find the doctrine of hell barbaric—as well you should—the versions of Christianity you should think through should be primarily universalist versions.
Of course, you should spend time reading what the smart proponents of alternative views say. You shouldn’t just think about the other view for five minutes and conjure up what you take to be its most convincing version. Genuinely grappling with the best versions of other views is vital! But you should primarily think through the versions of them that are most likely to convince you—unless you want to dogmatically cling to the beliefs you happened to start with.
This is, I think, among the most important implications of the Scout mindset. Seriously thinking through ideas doesn’t come simply from addressing their best proponents, but from seriously grappling with the versions of them that could conceivably move you.
2 somewhat relevant examples:
1. In political discourse about the economy, lots of people give an argument that basically says relative inequality has all these bad effects, and that we should orient our economy to minimize the difference between what people have. I don't find this convincing at all, so in my mind I substitute a version that has much more to do with effective altruism and absolute living standards. Someone might read my posts and think I'm very far to the right, but in reality EA-type arguments move me much closer to the center-left than typical inequality arguments do.
2. One time in a debate about population genetics, a prominent person said that there weren't meaningful differences between groups because it's not like some groups have the "engineer gene," the "banker gene," etc.—that is, genes specific to professions. That struck me as an example of not evaluating the strongest version of an argument, which to me wouldn't claim there are profession-specific genes, but rather general traits that could influence population outcomes.
Just 2 examples.
This is a convincing recommendation for the good-faith engagement that tends to be lacking, especially in digital chatter. We often see arguments for a proposition in light of other, worse arguments for that proposition. I'd never noticed this before, so thanks!