One of my good friends is a top ~3000 chess player in the world. He often plays the sorts of chess games where both players have three hours on the clock. I recently remarked to him that I found it puzzling how he uses those three hours. My chess-playing ability wouldn’t be improved one jot by having three hours on the clock rather than one—all my chess-related thinking never takes more than around 30 minutes.
He told me that for unskilled chess players (like me) this is normal. It’s only when one really gets chess—sees the broad structure of the game—that extra time becomes valuable. When you see how chess works, you understand strategically the sorts of moves and responses that will be relevant, and can sit around for an hour thinking through all the details of just a single move.
I think philosophy is very similar.
When I posted my article about the importance of shrimp welfare, lots of people triumphantly declared that they simply didn’t care about shrimp. They seemed to think that simply reiterating that you don’t value X ends a moral debate about whether to value X. This is a bit like thinking that repeating “I like Trump” is a convincing response to an argument against Trump. They couldn’t seem to conceive of what it would be to argue that one should value X even if one doesn’t already.
I suspect one reason so many people think that ethics is subjective is that they think that it’s beyond rational thought in this important sense. While arguments and analysis can help elucidate politics, math, and other domains in which there are truths, people think that ethics is beyond persuasion. It involves both sides simply declaring their views, and that’s the end of it. I can recall making this error myself when in 7th grade I declared to my Bar Mitzvah tutor (who was also randomly quite libertarian, and with whom I often had extended political discussions rather than practicing for my Bar Mitzvah—ah, those were the days) that I found abortion debates tedious, because the debate simply came down to my thinking fetuses don’t matter and their thinking fetuses do.
Now, anyone with a basic familiarity with the contours of the abortion debate knows that this isn’t so. When philosophers argue about abortion, they give arguments for thinking that the fetus is or isn’t a person. Philosophers, by the way, easily run circles around everyone else when discussing abortion, though often non-philosophers are too confused to recognize that they’re getting trounced when they disagree with philosophers.
The overwhelming majority of people think that meat eating is permissible on the grounds that what one eats is a personal choice. Yet one can only sustain this assumption if one thinks that there can’t be arguments for the wrongness of eating certain things (namely, the corpses of animals who were tortured for months and then killed).
One source of this error is the longstanding failure among non-philosophers to recognize the role of thought experiments. Non-philosophers often object to thought experiments on the grounds that they’re unrealistic. For example, I recently objected to the notion that we should only care about the interests of biological humans by noting that this would imply that it would be okay to harm intelligent aliens or elves or orcs. In response, people noted that elves and orcs and aliens are not real—a fact which came as a real surprise to me.
But this is totally irrelevant. It’s a point of basic logic that if only humans matter, non-humans don’t—and this implies that were there to be elves, harming them for slight benefit would be permissible. The fact that they aren’t real is totally irrelevant to this matter of basic logic. And if you do think that it’s relevant, then you’d have to think that whether utilitarianism is right will depend on various bizarre extrinsic features of the world like whether there are actually people fat enough to stop trains, and that the search for extraterrestrial life is highly relevant to how we should treat animals. If there are smart aliens, factory farming is wrong (for it would be wrong to harm those smart aliens), but if not then it’s not.
Something has gone wrong if you need to know if there are aliens to figure out how we should treat cows and pigs.
It’s notable that this is an error made entirely by lay people. While philosophers might caution against unrealistic thought experiments because it’s easier to err when thinking about more remote scenarios, philosophers never reject a thought experiment merely because it’s unrealistic. It’s only normal people who make this error, ignorant of the ways of philosophy, unable to figure out the twists and turns of the dialectic, the way I am unable to figure out the twists and turns of a game of chess.
You can rarely convince normal people of this. This is in part because they distrust the very notion of philosophical expertise. As a result they often confidently adopt views—like that unrealistic thought experiments are de facto illegitimate—that are as fringe among philosophers as thinking the earth is flat is among astronomers. When people are bad at philosophy, they are generally ignorant of this fact, and they don’t treat the fact that all serious philosophers reject their view as substantial evidence against it. While one who gets the wrong answers to math problems can see that they’re bad at math, one who errs philosophically never comes to know it, unless they’re clever enough to figure out that they’ve gone wrong. To quote a rant that Richard gave around 18 years ago (when I was 2):
The Problem with Non-Philosophers
... is that they don't "get" reason. They don't know how to do it; they don't even realize why one should want to.* I generalize, of course, there are some exceptions. But for the most part, even intelligent non-philosophers** seem to lack the mental discipline required to follow a clear and logically rigorous argument. And that's a tragedy. It's something every kid should learn in school.
There are generally more ways of doing things wrong than doing them right. This is true in philosophy, and because most people’s errors are never corrected, confused thinking on philosophical topics is far more common than correct analysis. While in any position, there are only a few moves that good chess players might consider, the unskilled masses (of which I count myself as a member) might make any of a vast range of terrible moves. When presented with an intuitive counterexample, while one skilled at philosophy might try to undercut the intuitive basis of the counterexample or provide arguments for accepting the apparent counterexample, non-philosophers might say any of hundreds of confused things—and usually, you can’t talk them out of it unless you have hours.
When I speak of philosophers, I refer, of course, only to the analytic philosophers. The continentals are even more philosophically confused than the average person. They vomit up paragraphs of inane gibberish word-salad rather than argument. Next time you read Foucault, try to figure out how many premises his argument has; you won’t be able to do it. The judgment that the continentals suck, by the way, is held by a sizeable share of serious analytic philosophers—Chomsky, Williamson, and numerous others.
One hint that you’re confused on a philosophical subject is if you couldn’t write a long piece—20,000 words, say—that responds to many objections. When you fully understand a topic, you come to see how deep the rabbit hole goes, how complex the interlocking network of related ideas is. You come to see that a short article, or even a fairly long one, isn’t adequate for fully addressing the topic. If instead you come to think that the full analysis of some contentious philosophical topic—say, whether meat eating is wrong—can be settled in a few sentences, you have almost certainly gone wrong somewhere.
Another good indicator is if your ideas are fringe among philosophers. If when you present them to philosophers, the philosophers always find them clearly wrong, you should begin to grow suspicious. While you might be right and they wrong, it’s far likelier that they are right and you are wrong.
This holds true even if your ideas make a good deal of sense to you. Philosophical confusion is not like other types of confusion. When one is philosophically ignorant, one tends to find that everything makes sense. The followers of Ayn Rand, though perhaps the most confused bunch in world history, have a worldview that makes perfect sense to them internally. The philosophically confused do not know what they do not know. Like one waking from a dream, only in hindsight does one who was philosophically in error realize that the things they believed confidently were a hazy mess of ill-conceived bullshit.
Curious to connect this perspective with David Chapman's recent writings on the badness of philosophy, e.g. here (https://meaningness.substack.com/p/philosophy-doesnt-work).
More broadly I wonder what Bentham's Bulldog makes of a meta-observation, which is that super smart philosophers make extremely compelling arguments on both sides of every controversial issue. If I'm a not-so-good reasoner, I can look at that and throw my hands up and give up on the whole enterprise, instead opting for my basic intuitions, which I can be sure some philosopher will take up and defend more than adequately. I'm sure there is some brilliant philosopher somewhere who has a very compelling argument against shrimp welfare, even if most of the people replying to the recent essay on that topic didn't offer any such arguments.
1) As a decent chess player I like the chess analogy. I can only confirm what your friend said: when calculating/evaluating a position I am constantly thinking about things that sub-2000 players wouldn't even consider.
2) I know you are not a fan of history of philosophy, but this all shows how right Plato was when he came up with the allegory of the cave thousands of years ago