The Smartest Person on the Internet is Often Egregiously Wrong
But you should still mostly defer to him--and other smart people, even though they're often totally wrong
Preliminary note
The original cover image was a picture of Nathan Robinson’s book, and it made it seem like I’m saying Nathan Robinson is the smartest person on the internet. This is obviously false.
Praising Alexander
All of the smartest people on the internet agree that Scott Alexander is the smartest person on the internet. My smartest friend once described him that way—though I think he might no longer share this view.
David Friedman—who started a substack that you should immediately read—is another of the smartest people I’ve met, much as one would expect of Milton’s progeny. He describes Scott in the following way:
Slate Star Codex is a blog run by a young psychiatrist with a very wide range of interests and an extraordinary amount of intellectual energy, posting under the name of Scott Alexander. In recent years, more than half of my online time has been spent reading and posting on it — one reason I have neglected this blog.
Scott posts thoughtful, intelligent, entertaining, often original essays. Some are book reviews, including reviews of two of my books. Some consist of carefully reading a collection of scientific articles on some topic and summarizing the results. One old one looked at articles on Alcoholics Anonymous and concluded that both the claim that research showed it worked and the claim that research showed it did not work were false, none of the studies adequately measuring its effects. A recent one, published before official sources switched from telling people not to wear masks to telling people to wear masks, analyzed the existing literature on the subject and concluded that, while the size of the effect was uncertain, wearing a mask substantially reduced the chance of spreading the disease if you had it and probably reduced by a little the chance of getting the disease if you didn't
…
All of this — I could go on for pages describing past articles — is only part, and not the largest part, of the reason I read the blog.
Jason Crawford, whom I’ve never spoken with but who seems smart (probably?!), though I’ve read little of his stuff, describes Scott in the following way:
Scott Alexander is my favorite blogger. I’d like to recommend him to more people, but it’s hard to know where to start, since he’s written over 1,500 posts. A little while ago a friend asked me to make a list of my favorite pieces of his. So, here is a beginner’s guide to the writings of Scott Alexander.
What makes the blog so good?
Scott writes with a rare combination of insight, humor, incisive clarity, relentless questioning, and (often) exhaustive data analysis. He asks big questions across a wide variety of domains and doesn’t rest until he has clear answers. No, he doesn’t rest until he can explain those answers to you lucidly. No, wait, he doesn’t rest until he can do that and also make you laugh out loud.
At his best, he hits some strange triple point, previously undiscovered by bloggers, where data, theory, and emotion can coexist in equilibrium. Most writing on topics as abstract and technical as his struggles just not to be dry; it takes effort to focus, and I need energy to read them. Scott’s writing flows so well that it somehow generates its own energy, like some sort of perpetual motion machine.
I like to think that I’m pretty good at writing. I’m good enough that I convinced myself to quit my day job and to write instead of coding or managing (which I’m actually qualified for and which can definitely make you more money). But I’m not nearly as good a writer as Scott.
Of the substacks I recommended as 2022 was ending, all but three of the ones that recommended any articles recommended Scott’s substack.
I’ll add my own testimony to the raving about how great Scott is. He is insanely smart, informed on a ridiculously wide range of topics, and always simultaneously informative and entertaining. Of my favorite articles ever, most have been written by Scott. He may be the writer who has most shaped my thinking, and he’s almost certainly my favorite person to read. Though he rarely resorts to polemicism, when he does, he’s the best polemicist I’ve read—see Untitled, for example.
Auditing Alexander
Given that Scott writes about so much, I generally have no ability to audit him—I don’t know the subject matter, so I can’t really check whether he’s wrong. For example, I haven’t read Bobos in Paradise and have no idea if his analysis of it is generally correct; I can’t spell Semaglutide, much less audit Scott’s analysis of it; and beyond taking B12, I have no special knowledge about supplements. Thus, my reaction to most of Scott’s articles is “hmm, that sounds plausible,” before moving on, forming no super strong opinions about random things that I know nothing about.
There have, however, been a whole host of things on which I’ve been able to audit Scott, and the results have not been great. Let’s start with his most recent article about Ivermectin, replying to a detailed critique of it. Let me preface this by saying that I don’t know how to analyze studies very well, I was just okay at high-school stats, and I know nothing about Ivermectin. So this seems like a surprising article to audit.
I’ve only been able to audit Scott here because he audited himself, replying to a critic and admitting to a bunch of errors. Some representative quotes:
After looking into it, I think Alexandros is completely right and I was completely wrong. Although I sometimes get details wrong, this one was especially disappointing because I incorrectly tarnished the reputation of Biber et al and implicitly accused them of bad scientific practices, which they were not doing. I believed I was relaying an accusation by Gideon (who I trust), but I was wrong and he was not accusing them of that. I apologize to Biber et al, my readers, and everyone else involved in this.
Still, several of Alexandros’ points were entirely correct, and I appreciate the corrections.
OE Babalola (I incorrectly wrote this name as “Babaloba” in the original)
I did err in saying the Carvallo paper was retracted
Alexandros points out that I used the wrong statistical test when analyzing the overall picture gleaned from these studies. He’s right.
I probably overestimated how important it was.
A few things are notable here. For one, you shouldn’t use this to trust Scott less than other people. Scott, unlike most others, is willing to admit to his errors. Very few people would release an article that takes around an hour to read admitting to a bunch of mistakes on a complex empirical topic. Second, this is non-representative—when Scott gets everything right, no one releases a 21-part series responding to him, prompting him to reply.
But nonetheless, these are some pretty egregious errors. If Scott makes errors this big—and he’s the smartest person on the internet—then how many errors are being regularly made by media sources or books or journalists? The answer is a lot.
There are a few other things I can audit Scott on. I’m quite informed about effective altruism. Scott’s written about it quite a bit, and there are no significant errors. So that’s one audit that he passes with flying colors.
But then there’s his book review of What We Owe the Future. That was—there’s no other way to put it—just bad. If you’re curious why, you can see my friend Parrhesia’s devastating review of it. I’ll just note one quote from it:
I am equally happy with any sized human civilization large enough to be interesting and do cool stuff. Or, if I'm not, I will never admit my scaling function, lest you trap me in some kind of paradox. I'll just nod my head and say "Yes, I guess that sized civilization MIGHT be nice."
Delightfully written and witty. And yet… anyone who knows about population ethics should recognize that this is an utter nonstarter. It implies tons of heinous nonsense—it’s just the person-affecting view, which MacAskill tears to pieces. There’s a reason philosophers don’t generally accept the person-affecting view—there’s just no way to make it work, as I explain in the article that was previously linked. If I didn’t know about population ethics, I’d have no gripes with the article. But because I do, I recognize that Scott is utterly wrong.
Let me give a final example of an audit of Scott—his review of Manufacturing Consent. I thought that Scott’s article was really good when I first read it—this was before I had thought very much about foreign policy and the potentially sinister behavior of the media. Then I chatted with a friend of mine, and he tore it to shreds. I then read an article that conclusively did the same.
This is not an exhaustive list. But my audits of Scott turn up an alarming number of enormous errors—errors that completely call the theses of the relevant articles into question.
If you read the last section and think “oh well, I guess this Scott Alexander guy is just wrong about tons of stuff, so I’ll just believe my preferred thinker on everything,” you’re totally missing the point in a very big way!
I intentionally chose Scott as a person to audit because he’s so consistently right—witty, brilliant, and persuasive. But this problem applies generally to most smart people.
Take Noam Chomsky. I can’t very easily audit Chomsky on most of foreign policy—the little that I have audited has checked out, but it would be a pain to check all ten billion footnotes and read replies to each. But there are a few things I’m confident Chomsky is just off the rails on. One of them I’ve discussed here. The other is his comments on animal rights, which are just downright absurd.
There’s no way I could ever win an argument against Chomsky on foreign policy, but when I investigate a few of the things that I know about, they often turn up lacking. This doesn’t mean you shouldn’t trust Chomsky. It just means, well, pretty much everyone is egregiously wrong about some things.
The world is complex. It’s hard to know what to think about various issues. There is no easy way to understand China policy, for example. So, when a smart autodidact like Scott comes along and says “here’s what we should do in our policy towards China,” laypeople think “hmm, that’s very insightful—this guy’s brilliant,” and the person who wrote his Ph.D. thesis on U.S. policy towards China thinks “OMG, he sucks, he got this obscure death toll wrong, didn’t even know there was a controversy about it, got some dates wrong, and his overall thesis is totally backward.”
Sam Atis talks about this a bit in his case against public intellectuals. The basic point is that public intellectuals tend to be smart, persuasive, and good at writing. Successful public intellectuals aren’t those who are actually right about education or foreign policy—they’re the ones who can convince people who don’t know very much about education or foreign policy that they’re right. This is why Dinesh D’Souza is a well-known “public intellectual,” but no one has heard of Derek Parfit.
I know pretty much nothing about the causes of the obesity epidemic. My knowledge about obesity doesn’t go much beyond “exercise and eat vegetables to avoid it.” But a bit over a year ago, I read SlimeMoldTimeMold’s (henceforth referred to as SMTM) claim that obesity was caused by environmental contaminants—especially lithium. I read their entire series, and found it fascinating. I gave about 50% odds to their being basically right—their evidence was utterly convincing. The 50% credence I gave to their being wrong was a result of higher-order evidence—it just seems like such a theory would already be widely accepted if it were correct.
Then, I read Natália Coelho Mendonça’s reply—and it was pretty devastating. I now have more like 5% credence in them being basically right. She showed quite persuasively that the things that they took to be facts are at least contested, and, in most cases, probably not true. For example, one piece of support for SMTM’s theory was the claim that lab and wild animals are also getting more obese. But Mendonça persuasively argues that this is not true.
This is a particularly striking example. SMTM’s cumulative case was, on its face, one of the most persuasive abductive cases that I’d ever read for anything. And yet it was pretty thoroughly dissected by Mendonça. It turns out that it’s not particularly difficult to make overwhelmingly powerful cases for things that are almost certainly false. Given this, we should expect people to get things wrong a lot—it’s just really hard to know the truth.
The SMTM example doesn’t just show, in a limited sense, that SMTM is wrong. It shows that one can be very convincingly wrong—when our knowledge institutions select for being convincing, you should expect people to get a lot wrong.
There are lots of people who don’t get a lot wrong. When I read philosophy papers, for example, it seems like they’re not making significant errors. But these papers are boring and limited. They argue for things like “one possible solution to the Frege-Geach problem doesn’t work because it has this other problem.” But that’s not at all interesting. If you take 30 pages to talk about one small solution to one problem, your paper may be thorough, but it’s just not interesting.
I remember when I was in middle school, I basically thought that everyone who disagreed with me was confused. I was a libertarian at the time—and a pretty extreme one. When I would patiently explain my (in hindsight mistaken) analysis of the minimum wage, and people wouldn’t immediately agree with me, I’d just assume they were confused. Politics was a puzzle to solve—an easy one at that—and the people who couldn’t solve it were just wrong. I remember being disappointed when I saw some person whom I agreed with philosophically explain that they supported a welfare state. It just seemed like they were ignorant of basic facts.
It seems like some people have not outgrown this puerile impulse. Nathan Robinson, whom I’ve previously criticized, recently released a rather odd tweet.
This exemplifies my thinking in middle school. I thought that you could just write down the reasons that non-libertarians are wrong about everything, and every rational person would be thoroughly convinced. The idea that the right’s arguments can just be swiftly crushed, and that there’s no reply a right-winger would give to one’s own arguments, is the worst of this kind of thinking.
But if we’re not like Robinson, if we recognize that there are intelligent responses that people on the other side will give to all of our arguments, then we should recognize that we—like others—are going to get a lot wrong.
Oh honey, please don’t think I’m advocating that you think for yourself, that would be absurd
You might have read that last section and thought “oh no, all these smart people—public intellectuals and so on—are egregiously wrong, I’ll just think through these issues myself.” If you did this, you would be wrong—you should avoid thinking for yourself at all costs.
For more on this, if you’re interested, I’d recommend reading the article “Is Critical Thinking Epistemically Responsible?”
In it, Huemer argues that critical thinking is not reliable. As he notes, if you think through things yourself, you’ll be wrong at a much higher rate than the experts.
Hypothetical: Suppose you would like to form an opinion about moral realism, a topic which you have not yet had time to study carefully and have no firm opinions on. A student from your university, who took a critical thinking class and got an ‘A’ in it, sincerely tells you that he has recently studied the subject, applying all the critical thinking lessons he learned, and he has come to the conclusion that non-cognitivism is the correct metaethical theory.
Q: Should you now believe non-cognitivism?
Suppose you answer “yes, believe non-cognitivism”. Then you’re giving up the critical thinking philosophy right away, since you yourself would be violating its central advice.
Anyway, the answer is obviously no, and everyone knows it. No one would adopt a controversial philosophical view on such flimsy grounds. Notice that this means that we’re saying that the student’s judgment is not strong evidence of the truth of non-cognitivism. Indeed, it is almost no evidence at all (you should change your credence in non-cognitivism barely, if at all). And notice that that means that we’re saying the judgment of a student in such a situation is not reliable. In other words: critical thinking is unreliable. So why do we (professors of philosophy) tell people to do it? If you wouldn’t rely on that student’s judgment, how can you advise the student to rely on it?
Also, as he points out, we agree in lots of contexts that critical thinking is irresponsible. People who represent themselves in court cases about complex legal disputes are irresponsible. People who decide on their own treatment for a complex illness—those who don’t defer to doctors—are irresponsible.
The reason why people like Scott Alexander, Noam Chomsky, SMTM, and more get things wrong isn’t that they’re dumb or bad at thinking. It’s that the world is complex. It’s hard to know what to do or what to believe.
These problems apply to you just as much as they apply to other people. If others make errors because they’re bad at reasoning, then you shouldn’t defer to them. But if they make errors because it’s really fricking difficult to not make errors, then you should expect yourself to make lots of errors.
I’ll appeal to myself here. While there are no specific issues on which I think I’m currently wrong, it is almost certainly the case that I’m wrong about lots of things. One piece of evidence for this is the following: when I look back at what I used to think, I was almost always egregiously wrong. Nearly two years ago, I had a chat/debate with Michael Huemer. Now, while I still agree with utilitarianism and the permissibility of taxation, nearly everything I said was badly, embarrassingly confused. My least egregious error was my mustache. And my earlier Friedman chat was even worse—I hadn’t internalized the lesson of “Beware the man of one study.”
Experts and other smart, informed people will very often be wrong. But you’ll also very often be wrong. I haven’t changed my mind on a ton of things since starting the blog, but there are a few things that I’ve just been egregiously wrong about. Two examples that come to mind are the following.
(If you want to see the refutation of the second article, see here).
So, while other people don’t pass the audit test with flying colors, neither do I. This should actually make you more likely to defer—if the world is too complicated to figure out by yourself, then you’ll be lost absent deferring to the consensus of smart people.
So, even after the audits, if I need to know about Semaglutide, I turn to Scott Alexander.