1 Why believe simple things
One of my favorite philosophers, Michael Huemer, has a paper titled “When Is Parsimony a Virtue?”, in which he (shockingly!) muses about when parsimony (the simplicity of a theory) counts in that theory’s favor. In the sciences, people generally prefer simpler theories; as Ockham famously suggested in what has become known as Ockham’s razor, when a theory is more complicated, it is less likely to be true (or, in Sinhababu’s formulation, “if you invoke unnecessary entities, I'll cut you with my razor”). But why is Ockham’s razor true? Why believe simpler theories are more likely to be true?
Huemer gives a variety of reasons, but one of the fundamental ones is that more complex theories have more degrees of freedom. They have more moving parts that can be manipulated to accommodate data. To illustrate this, suppose that I claim that the only cause of weight gain is eating more calories of food. This is easy to test: if we see cases where people gain weight without eating more food, the theory is proven false. There’s only one causal variable in the theory, so I can’t engage in shenanigans when it’s disproven. Because the theory permits only a narrow range of outcomes, and nearly every other way the data could turn out would disconfirm it, data that turns out the way the theory predicts is very good evidence for the theory.
But suppose that instead, the theory was that weight gain is caused by a confluence of factors including calories of food eaten, exercise, amount of movement, function of the lipostat, and grace of fairies. This can accommodate a much wider range of data: every time the lipostat, calories, exercise, and movement can’t explain things, you just blame it on the fairies. Thus, when data comes in that is consistent with the theory, it’s barely any evidence for the theory. If a theory does not make a narrow range of predictions about what will happen, then its predictions coming true won’t be evidence for it. If I predict “things will happen” because God exists, then things happening is not good evidence for God. A theory that explains everything explains nothing. This can also be shown mathematically, which Huemer does.
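Here is a rough Bayesian sketch of that point (a simplification for illustration, not Huemer’s exact formalization). Write $T_1$ for the simple calorie-only theory, $T_2$ for the complex fairy-laden theory, and $E$ for the data we actually observe. Bayes’ theorem gives the posterior odds:

$$
\frac{P(T_1 \mid E)}{P(T_2 \mid E)} \;=\; \frac{P(E \mid T_1)}{P(E \mid T_2)} \times \frac{P(T_1)}{P(T_2)}
$$

Since $T_1$ concentrates its predictions on a narrow range of outcomes, $P(E \mid T_1)$ is close to 1 when the data come out as predicted; since $T_2$ spreads its predictive probability across all the outcomes its extra degrees of freedom can accommodate, $P(E \mid T_2)$ is small. Observing $E$ therefore shifts the odds sharply toward the simple theory, even if both started out equally plausible.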
If a theory is very simple, it’s easier to both prove and disprove. If it predicts the data, that’s very strong evidence for the theory, but if the data diverges from the theory’s predictions, that’s strong evidence against it. In contrast, more complex theories, ones with more degrees of freedom that are more easily manipulated, are harder to prove or disprove. Any data can be argued to be either evidence for or against the theory, depending on one’s interpretation of it.
2 Social justice and neoreaction
If one only watched the Tucker Carlson show, one would believe that social justice activists, en masse, think that various mundane things like math and logic are racist. It is not hard to find isolated examples of social justice activists and left-wing academics saying these things, though they are, of course, nowhere near as common as Carlson’s coverage would suggest.
But why is this? Why are there a bunch of social justice activists saying utterly absurd things? One does not find conservatives saying that trees are anti-American, or that math is dishonorable to the troops, or that logic is in opposition to recognizing that America is the greatest country in the world. So why is it that the people saying this particular kind of loony thing, namely that very mundane things are deeply objectionable, are all on the far left?
Conservatives would probably have various explanations such as that left-wingers are just dumber. But I don’t think this is true—most smart people are left-wing. The explanation is a good deal more specific.
There are, of course, various sociological explanations of this. For example, far-left academics, as a factual matter, are much more interested in critique than in construction. Other similar explanations could be given. But I think that there is a more fundamental issue at the root of the problem, and it also explains, to a significant degree, which fields, people, and claims you should trust.
It’s really hard to imagine what a right-wing explanation of logic being anti-American would look like. Seriously, try to write out an explanation of why logic is anti-American! It can’t be done! There just isn’t really anything to say. But if you wanted to say physics was racist, you’d say something like this:
But deep conceptual shifts within twentieth-century science have undermined this Cartesian-Newtonian metaphysics; revisionist studies in the history and philosophy of science have cast further doubt on its credibility; and, most recently, feminist and poststructuralist critiques have demystified the substantive content of mainstream Western scientific practice, revealing the ideology of domination concealed behind the façade of “objectivity”. It has thus become increasingly apparent that physical “reality”, no less than social “reality”, is at bottom a social and linguistic construct; that scientific “knowledge”, far from being objective, reflects and encodes the dominant ideologies and power relations of the culture that produced it; that the truth claims of science are inherently theory-laden and self-referential; and consequently, that the discourse of the scientific community, for all its undeniable value, cannot assert a privileged epistemological status with respect to counter-hegemonic narratives emanating from dissident or marginalized communities. These themes can be traced, despite some differences of emphasis, in Aronowitz's analysis of the cultural fabric that produced quantum mechanics; in Ross' discussion of oppositional discourses in post-quantum science; in Irigaray's and Hayles' exegeses of gender encoding in fluid mechanics; and in Harding's comprehensive critique of the gender ideology underlying the natural sciences in general and physics in particular.
Here my aim is to carry these deep analyses one step farther, by taking account of recent developments in quantum gravity: the emerging branch of physics in which Heisenberg's quantum mechanics and Einstein's general relativity are at once synthesized and superseded. In quantum gravity, as we shall see, the space-time manifold ceases to exist as an objective physical reality; geometry becomes relational and contextual; and the foundational conceptual categories of prior science -- among them, existence itself -- become problematized and relativized. This conceptual revolution, I will argue, has profound implications for the content of a future postmodern and liberatory science.
This is a quote from Alan Sokal’s famous paper, with which he hoaxed social justice academia. He intentionally submitted a paper composed entirely of jargon-filled nonsense, and it got accepted. This was an attempt to expose the shoddy standards of non-rigorous postmodernist fields, which replace substance with jargon and arguments with linguistic tricks. Hilariously, this was later replicated by others in the “Sokal squared” hoax, who showed that the problem is more widespread than just one journal.
Conservative academia does not have the same type of verbose jargon that social justice academia does. This is partly because there is not much conservative academia. But there are some right-wing non-academics whose writing style resembles that of those social justice academics; Curtis Yarvin is a good example. His writing is a rambling screed, hopping from point to point. He never just makes a clear point; one needs to read 50,000 words to get a faint hint of what he is saying, and even then his meaning wafts into one’s brain slowly, like the distant smell of a bakery, not in the form of arguments but in the form of ideas he hints at before changing the subject. Reading Yarvin, one gets the distant fragrance of an argument, but very rarely a real argument for anything.
Yarvin is roughly as much of a crackpot as many of the social justice people in academia. He hints (not argues, of course) that there’s some deep problem with modern society without explaining exactly what it is. He just makes abstruse references to swimming squid monsters, but can never explain the alleged deep rot in modern society (perhaps because it does not exist, and because, if it did, demonstrating it would require empirical evidence, something Yarvin is apparently incapable of producing). While Yarvin would not, as a matter of disposition, argue that logic is objectionable, he could do so, in his smug, sneering style: 50,000 words on a topic that eventually give vague hints of what he is trying to prove.
Yarvin’s writing, like that of the social justice activists, has a seemingly infinite number of degrees of freedom. He doesn’t specify exactly what he’s arguing, and he certainly never gives clear explanatory models. He engages in deliberate obscurantism to make his points harder to parse (he has admitted as much at some point, but I can’t remember the exact quote). Yarvin, like the social justice academics, can say anything, because he is not clear. He does not give precise arguments or narrow explanatory models; he instead pelts one with a 200,000-word screed that contains clever-sounding literary references, a veneer of sophistication, and assertions scattered throughout that his main points are right. His points are not argued for so much as assumed, yet it’s hard to notice the lack of an argument amid the barrage of millions of words of bullshit.
Steven Pinker once remarked that good writing describes events, while bad writing thingifies events, and then refers to the thing. For example, bad writing will refer to
MacAskill’s morally repugnant call for an increase in the number of sweatshops in the Third World [as] merely the artifact of a utilitarian ideology incapable of recognizing exploitation as a moral or social problem.
MacAskill gives reasons for his view. But rather than engage with those reasons, the critic describes his view as part of a broader pernicious ideology, which is then argued against. What exactly the ideology is never gets defined.
But this is the way that social-justice academics often argue. They refer to some concrete thing like math as part of the broader “knowledge-making process,” and then describe that as “having always been impossible to disimbricate from systems of whiteness.” And systems of whiteness are racist, by definition, though it’s not clear what the definition is.
Similarly, neoreactionaries like Yarvin very often refer amorphously to ideas that encompass a wide class of particular things, before making sweeping judgments about them. As Scott Alexander says:
I have heard the following from a bunch of people, one of whom was me six months ago: “I keep on reading all these posts by really smart people who identify as Reactionaries, and I don’t have any idea what’s going on. They seem to be saying things that are either morally repugnant or utterly ridiculous. And when I ask them to explain, they say it’s complicated and there’s no one summary of their ideas. Why don’t they just write one?”
Part of me secretly thinks part of the answer is that a lot of these beliefs are not argument but poetry. Try to give a quick summary of Shelley’s Adonais: “Well there’s this guy, and he’s dead, and now this other guy is really sad.” One worries something has been lost. And just as well try to give a quick summary of the sweeping elegiac paeans to a bygone age of high culture and noble virtues that is Reaction.
So too is the experience of reading social justice academics. It’s like reading poetry that culminates bizarrely, without any supporting reasons, in sweeping metaphysical conclusions that one has no reason to accept.
In a different article, Scott highlights the trickery that neoreactionaries often engage in with the word demotism. They define demotism as ‘rule of the people,’ a category that includes Nazism, socialism, communism, and fascism. They then argue against democracy by claiming it’s just part of the murderous demotist system.
This argument is really stupid. Even if most of the demotist category is bad, that is not a reason to think that democracy, in particular, is bad. If I made up a set that included Curtis Yarvin, Stalin, and Hitler and called it shmemotism, it would be idiotic to say “shmemotism is a pernicious group aimed at undermining the world; therefore, Yarvin is bad.” Perhaps the dumbest argument in the world is “I can make up a set that includes [thing I’m arguing against] and mostly bad things.” And yet both neoreactionaries and social justice academics LOVE to make this argument.
It’s very common to argue against gene editing, for example, on the grounds that it is eugenics. This is a classic example of thingification: rather than thinking about the specifics of the case, one classifies it with other bad things and can then just declare it bad. No further argument needed!
When one can engage in argument by thingification, one has infinite degrees of freedom. One can argue that anything is good or bad simply by making up a set that includes it along with other good or bad things. So one shouldn’t trust these arguments: they’re not deduced logically, and because they predict everything, they predict nothing.
When one has infinite degrees of freedom, when merely asserting that something is a tool of whiteness counts as an argument, one need not be taken seriously. If one’s arguments are unconstrained by truth or facts or logic, because one can whip up verbose jargon for any insane conclusion, those arguments should not be trusted.
This goes a long way towards explaining why these arguments are so common in college debate. Because one can use critical literature to say anything, one can use it to argue against obviously good things. Thus, if the other team argues for obvious things, like that debate should be about the topic or that the government should pass some obvious policy, it’s hard to argue against them substantively. On the other hand, it’s easy to make some abstruse reference to systems and the alleged impossibility of progress, which is why debaters do it!
This is why neoreactionaries and social justice academics tend to be among the harshest critics of effective altruism. Effective altruism is a social movement about using evidence and data to make the world better as effectively as possible, and it has demonstrably saved over 100,000 lives. It’s really hard to substantively argue against effective altruism; it’s just very obvious that we should try to do lots of good.
But lots of people have a very negative emotional reaction to effective altruism. It implies that they should give lots of money to charity, money that they’d like to spend on themselves. The people who are guided by reason generally reply by saying “ugh, this is unfortunate, but the EAs are right as a practical matter; I really ought to be donating.” But if one is guided by emotions and uses arguments as a crutch to justify what one initially felt, one will try to conjure up arguments against effective altruism. Those who have left reason behind entirely, preferring a simulacrum of reason, an appearance enabled by footnotes and clever-sounding jargon, can use that jargon to argue for anything; so when they have a negative emotional reaction to effective altruism, they will argue against it. Thus, this degrees-of-freedom theory predicts that both social justice academics and people like Yarvin would be harsh critics of EA, and of other things that have compelling rational arguments for them but are emotionally unappealing. This prediction turns out to be borne out: Yarvin has criticized EA and futarchy, a complex system of government proposed by Robin Hanson that is emotionally unappealing but has clever supporting arguments.
Similarly, social justice academics love to criticize EA—often based on very little argument (see here for a devastating dissection of a supposedly Serious Scholarly work attacking EA). If one can argue for anything, then one’s arguments are untrustworthy—they just signal that one intuitively doesn’t like an idea and thus conjured up arguments against it. But when one doesn’t have to talk about anything concretely, they can argue for anything. Thus, we should dismiss the pontifications of those social justice academics on most topics, especially after the Sokal Hoax, which exposed their deeply shoddy scholarship.
3 Who to trust?
Suppose that you read the abstract of a report that says that some hypothesis is true. How confident should you be that the hypothesis is, in fact, true? This will, of course, depend on what the hypothesis is and what field it comes from. I won’t give an answer for every field here, but I will describe which fields one should place more stock in.
Whether you should trust a report’s conclusion depends on how likely the report would be to reach that conclusion if it were true versus if it were false. If, for example, a machine that only says true things tells you that some conclusion is true, you should, of course, believe it.
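To put the same point in rough Bayesian terms (a simplified sketch, with made-up numbers purely for illustration), what matters is the likelihood ratio of the report:

$$
\frac{P(H \mid \text{report concludes } H)}{P(\neg H \mid \text{report concludes } H)} \;=\; \frac{P(\text{report concludes } H \mid H)}{P(\text{report concludes } H \mid \neg H)} \times \frac{P(H)}{P(\neg H)}
$$

If a field would publish a paper concluding H, say, 90 percent of the time when H is true but also 60 percent of the time when it is false, the ratio is only 1.5, and the report should barely move you. The truth-only machine is the limiting case: the denominator is zero, so its verdict settles the matter.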
One particularly strong piece of evidence that you should believe something is that the person writing it came in with the opposite view but was then convinced by the data; it is rare that unconvincing data is enough to overcome motivated reasoning. This is one reason that, for example, Hanania’s article about the hypothesis that social media causes misery is convincing: he came in expecting to debunk the theory and turned out to be unable to do it. David Friedman once pointed out that one reason he’s somewhat concerned about nanotech is that smart libertarians, who generally abhor regulations, endorsed regulating it after studying the topic; if the evidence that it is dangerous was enough to overcome their anti-regulatory priors, it must be rather convincing.
Another thing to keep in mind in evaluating the trustworthiness of a field is the number of degrees of freedom of papers in that field. This is a reason to trust physics a lot: physics is tightly constrained by the empirical data. In physics, one cannot just postulate crazy things without being soundly disproven by experiment. Similar things are true of mathematics and the other hard sciences, and, to some degree, of many soft sciences as well.
A decent test of the trustworthiness of a field is whether you could bullshit an undergraduate paper in the subject without really understanding it. It would not be terribly hard to bullshit a paper on history, because history has more degrees of freedom, though it would still take some effort. It would be easy to bullshit a gender studies paper, or a paper on a book you haven’t read (as long as you’ve read the CliffsNotes version). In contrast, it would be nearly impossible to bullshit a calculus project or a computer science project.
The more a field is constrained by objective reality, the more you should trust its findings. This, unfortunately, has rather dismal implications for the trust that one should place in philosophy, both analytic and continental, though continental to a much greater degree. Analytic philosophy is much clearer and more reliant on formal arguments, while continental philosophy tends toward verbose musing on vaguer topics like the meaning of life; for a more thorough rundown, see here. Most arguments in philosophy make no reference to empirical evidence, and it’s not hard to come up with clever arguments for totally crazy conclusions. Someone wrote an entire, surprisingly tricky-to-refute Ph.D. thesis arguing that every proposition is true (a proposition is a sentence that can be true or false). Philosophy publishing rewards novelty, which incentivizes defending implausible claims so long as one can come up with clever arguments for them. As a result, the mere fact that something is published in a philosophy journal should give one very little reason to believe it.
Still, analytic philosophy isn’t completely untrustworthy. Very often, analytic philosophy will convince people of conclusions that they didn’t previously believe because the arguments are genuinely compelling. I think that, while analytic philosophy papers are often off track, analytic philosophers tend to have mostly pretty okay views. Very often, when philosophers look hard for arguments on a particular topic, they discover very convincing ones, and those arguments then rationally persuade others. There are many surprising conclusions that analytic philosophers have discovered; for example, that a world with a vast number of barely happy people can be better than one with 10 billion people living great lives.
Continental philosophy, as well as other jargon-filled disciplines, is much more dubious. Because one doesn’t need to give concise arguments with premises and conclusions for one’s views, one can argue for nearly anything. The result is mere reinforcement of one’s preexisting beliefs: just as poems rarely rationally convince people, so too is it rare for continental philosophy to convince people through reason alone.
The grievance studies (queer studies, gender studies, critical theory, and the like) should not be taken seriously for this reason. Because they are not constrained by accordance with objective facts, often preferring to focus on narratives and personal stories, they merely serve to reinforce the biases of their authors. Imagine that there was equally strong evidence for two hypotheses about why overweight people earn less money: first, that they tend to be lazier, and second, that there are systems of oppression. Does anyone doubt that the second hypothesis would be the only one seriously entertained by fat studies departments? Of course not; the field just serves to reinforce social justice platitudes.
Whether history is trustworthy depends on what is being argued for. If one is making a narrow claim about Babylonian pottery, it is likely to be true, for people rarely have strong priors about such things. However, if one is making a sweeping claim about history as a whole, that claim is dubious; it’s easy to find half a dozen historical events that confirm one’s theory of history. The more sweeping the claim, the more degrees of freedom it has, and thus the less one should trust it. This is borne out by empirical evidence: Tetlock finds that forecasters who think in terms of sweeping systems (his “hedgehogs”) are less accurate than those who don’t.
One should also place little trust in political science. Part of this is based on empirical data—we have strong evidence from Tetlock that political pundits don’t do better than chance at predicting events. But part of this is that the political system is complicated and has dozens of different actors—thus, it’s not hard to find a ton of arguments for any specific claim about politics one wants to make. Political scientists seem much less likely to be convinced of surprising conclusions than, say, philosophers.
Similarly, one should place little trust in the things said by people debating. If, for example, someone on CNN or Fox News quotes a study, this tells us little beyond the fact that they googled “STUDY WHICH SAYS I’M RIGHT ABOUT THIS TOPIC,” and were able to find at least one study. Political pundits can generate arguments on any topic, for there are numerous think tanks that churn out propaganda on all sides.
Thus, one should trust fields that are more constrained by objective reality. The more a field relies on empirical data and clear evidence, and the more it makes testable predictions about the world, the more it should be trusted. One should place far more faith in the average physics paper than in the average queer studies paper or, much as it pains me to say it, the average analytic philosophy article.
One confession: you shouldn’t trust the things I say about utilitarianism. I’ve always had utilitarian intuitions; non-utilitarianism has never seemed a serious option to me. So I clearly have confirmation bias on the topic: even if utilitarianism were not best supported by the arguments, you should expect me to conclude that it is.
However, even if one shouldn’t be convinced by the mere fact that a source says something, they can still be convinced by the arguments. Suppose, for example, one thinks that Milton Friedman is almost always wrong. If Friedman gives a convincing argument for some conclusion, you may not be convinced by the fact that Friedman asserted it, but you may nonetheless be convinced by the argument. Hopefully, the arguments I give for utilitarianism, and on other topics, are convincing. But, of course, that’s what an unconvincing person would say :).
Thus, the next time someone quotes an article from a queer studies journal saying “social justice platitude about topic X is true,” the correct response is to ignore it. These journals do not track the truth, and thus the conclusions of their reports are not worth taking seriously.