39 Comments
malloc

It’s a good heuristic to only be swayed a little by arguments, because rhetoric can easily outstrip our skepticism. It takes a LOT longer than the space of a conversation to evaluate the truthfulness, merits, and demerits of a belief… and there’s always the fact that the person persuading you has some agenda. Sometimes the agenda is simply to make other people more rational and the world better, but usually it isn’t.

Same reason when a salesperson puts a timer on a deal, the right move is almost always to walk away.

This does mean we tend to be wrong about anything abstract, including anything displaced in time. Since it’s necessary to have some good abstract beliefs, we acquire them anyway, but they end up being shaped by cultural evolution via memetic competition instead of by direct updating… which is slow.

Lance S. Bush

> It’s a good heuristic to only be swayed a little by arguments because rhetoric can easily outstrip our skepticism.

Simple but powerful point. I think it's actually wise of many undergraduates to be resistant to philosophers mostly rejecting this or that position. I don't consistently have more confidence in the conclusions of academic philosophers than in those of nonphilosophers.

Dominik

Nice article.

I guess the main reason it's so hard to talk people out of psychological egoism is not that they are intellectually confused, but that they have a clear prudential interest in believing it: I cannot count the times I put my job (which I love to death) on the line because I did what is right, instead of doing the thing that is most convenient for me. But IF I believed that the only difference between a "moral" and an "immoral" person is that they have different desires (as opposed to thinking, as I do, that behaving immorally is based not on desires but on irrationality), then I never would have done that; I would have done whatever gets me further in my job. In other words, psychological egoism justifies ethical egoism, and being an ethical egoist is prudentially beneficial. THAT'S why people believe it. (This doesn't necessarily contradict anything you say in your blog post; I just wanted to add it because it's important to understand the mindset of the psychological egoist.)

Philip

There's a similar sort of person who is a moral anti-realist, and tells you that their realization of moral anti-realism did nothing to change their behavior. I guess they're trying to tell you that moral anti-realism doesn't preclude them from having values or endorsing/disapproving of various acts, or something like that. But it occurs to me that if coming to disbelieve in the moral facts changed nothing about your behavior, you must have been an extremely immoral person during the time you did believe in them!

Dominik

That seems right to me. The fact that something is genuinely wrong, and not merely something I don't like, motivates me a lot in everyday life.

Lance S. Bush

Why would that mean that they were an extremely immoral person when they did believe in moral facts?

I'm an antirealist, but don't recall a time where I was a moral realist. I suspect I never was one. I don't think being a moral realist would change my behavior, since I don't care about whether something is stance-independently right or wrong.

Philip

If you became a moral realist, you would care about something being stance-independently right or wrong, this being the definition of moral realism.

Lance S. Bush

It is not part of any conventional definition of moral realism that you would care about it if it were true. I understand moral realism to be the position that there are stance-independent moral facts. The existence of such facts doesn't necessarily entail that one would care about them upon believing in them, unless one's position includes additional theses beyond "there are stance-independent moral facts." Sometimes realists drop the stance-independence clause, and they typically include a semantic thesis (cognitivism) and often an epistemic condition (that we can know some of the stance-independent moral facts), but I've rarely seen anyone define moral realism to require caring about the facts in question.

Philip

Sorry you’re right, I was smuggling in some assumptions about your carings. I will say that since you’ve never been a moral realist, I’m not sure if you should be so confident in advance about how you would react to the knowledge of the moral facts. But it’s true that coming to know of them does not entail that you would care about them.

Lance S. Bush

I am probably even more confident than you think I am, and I believe I am in a good place to judge, because I'm that confident that non-naturalist realism is an untenable position. I don't think having been a realist makes much of a difference. I think it's hard to overstate just how implausible I think moral realism is.

The Water Line

I think you miss another potential explanation. Philosophers follow fads too, and if almost all philosophers reject a view, someone who hopes to join their ranks will come to reject that view too just to fit in.

Lance S. Bush

Yes. This is something I proposed here: https://www.lanceindependent.com/p/the-philpapers-fallacy-part-7-of

>This category will be brief, but deserves greater development. Consider the results of the PhilPapers survey at any given time. To what extent do the proportions of philosophers endorsing a particular view reflect a field that is settling down and gradually converging on particular conclusions? What if, instead, what we observe at any given time reflects a host of social and cultural forces that have little to do with the field moving towards any particular “end of history,” and instead reflects current fashions or trends? Panpsychism is popular right now. Will it be popular in twenty years? Virtue ethics was largely ignored, only to enjoy a renaissance in the past few decades. Prominent figures and ideas serve as lightning rods around which discussion gravitates: Rawls in political philosophy, for instance.

>To what extent, at any given time, do the state of the field, and the views held by those within the field, reflect transient waves of popularity for particular topics and perspectives on those topics? I’m not sure, but it would not surprise me if the answer is “quite a lot.”

nelson

This is why I studied mathematics rather than areas more subject to discovering totally different takes in 20 years. Calc is still calc. Math will teach more rigorous thinking.

Richard Y Chappell

If the question is "why are philosophers so much better (on average) than non-philosophers at philosophical thinking?", just reiterating the correlation doesn't tell us whether selection or treatment effects are larger.

I definitely think selection effects are a large part of it. From the first day of class (and certainly upon reading the first assignments), it's often pretty clear which students are cut out for philosophy and which are going to struggle no matter how long they study.

That said, the basic "toolkit" of philosophical distinctions and moves (the metaphysics-epistemology distinction, de dicto / de re disambiguation, testing for self-defeat, etc.) is certainly helpful. But I suspect you need a base level of philosophical talent to understand how and why to apply these acquired tools. I'm sadly skeptical that it's something most people could feasibly learn.

Bentham's Bulldog

It's certainly right that a lot of it has to do with innate talent. For example, some particularly precocious undergraduates...no, modesty prevents me from continuing.

I'd imagine most people would become at least okay at philosophy if they studied it in depth. Not great, but much better than is typical. But I'd imagine that the reason philosophers are so much less confused than, say, the typical linguist is that philosophy improves thinking.

Richard Y Chappell

It would be an interesting test to take the 40% of your class (already an elite group, compared to the general population) who remained convinced of psychological egoism even after hearing Railton's lecture, and predict how many of them would become "good at thinking" if they completed a full degree as philosophy majors. (My guess: zero. Could be wrong, though!)

It'd also be interesting to test at the higher end, e.g. randomly assign prospective graduate students to either philosophy or linguistics, and see what difference it makes. It could be that the training is what makes the difference at the higher end.

Bentham's Bulldog

I'd guess more than zero.

Lance S. Bush

That would be interesting. I'd definitely be open to collaborating on longitudinal research into the long-term changes associated with studying academic philosophy. This would be a challenging and time-consuming project, but a very informative one. It's also in line with where my own research interests are going. It'd require collaboration, and I think it'd be best, given the stakes, if it were adversarial: i.e., if people with different biases and expectations were involved, especially those most relevant to whatever measures were employed.

It might prove very challenging to measure being good at thinking in a philosophical context.

Regarding those who remain convinced of psychological egoism: I doubt it'd be zero. A lot of very capable people are incorrigible about a variety of beliefs and attitudes. I'm not sure that incorrigibility would generalize to this case, and there are cases where I wouldn't expect it to (remaining convinced the Earth is flat, just to give an extreme example).

Not sure if random assignment for graduate study would work. There'd be an enormous number of challenges with that:

(1) Ethical issues. It's not clear it would be right to assign students randomly except under highly constrained circumstances. Graduate school is challenging, and the students' mental health would be an important consideration.

(2) Cost. This might be extremely expensive.

(3) Sample size. It might be very hard to get a large enough sample for solid conclusions.

There'd be other challenges as well, not the least of which being that I doubt you could convince many academic programs to go for something like this.

Lance S. Bush

I think selection effects are a big factor in determining who ends up going into philosophy, but I worry about making judgments about who is or isn't cut out for "philosophy." Here's one problem: the people making these judgments were themselves likely subject to considerable selection effects; so long as the field has a self-reinforcing conception of how philosophy should be done, this can create a kind of constant cycle of people only being cut out for philosophy if they do philosophy the way it's conventionally done by the people currently doing it.

I felt considerable discouragement from my skepticism of conventional philosophy. I stuck it out anyway, and I have retained a broadly critical perspective on most mainstream philosophy and on the metaphilosophy underpinning much of what's done in the field. There's a decent chance I'd have been flagged as someone who isn't cut out for philosophy. I disagree. I wasn't cut out for a particular style and approach to philosophy, but I don't think that's the only approach.

Academic philosophy may be self-limiting if it doesn't adopt a broader and more encouraging approach towards different metaphilosophies and approaches to the field.

Tam
Aug 7 (edited)

You have a belief in the "correctness" of philosophy and I can't quite get there. My answer to "why aren't philosophers confused about these various types of ideas" tends more towards "you all just agreed and are conforming to agreed upon beliefs / types of beliefs."

Now, listen, I think I'm probably wrong, and here's why. I'm a mathematician (by training, not trade). And there are certain things everyone who has studied math decently knows - for example that 0.99999... (repeating forever) = 1, or that there's not some good mathematical system that allows division by 0. But it's hard to convince a layperson of these ideas, and they might (and absolutely do) have the same responses to this as I have to some of your philosophy stuff. ("Math is just a club where you all agreed on this but if you weren't in a mental straitjacket you'd see otherwise.") Sometimes you can't even convince a layperson of something extremely basic like, "Just because A and Not A are both possible doesn't mean it's a 50/50 chance." More concerningly, I once could not convince a professor in another discipline to stop teaching her students that a p value is "the probability that the finding is wrong," even when I presented examples where this is not the case. (For example, if I ask you to guess the result of a coin flip and you get it right, I could find that you're psychic with p = 0.50, but there's certainly not only a 50% chance that you're not psychic.)
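The coin-flip point can be made concrete with a quick Bayesian sketch (the prior and the likelihoods below are made-up numbers purely for illustration):

```python
# Why p = 0.50 is not "a 50% chance you're not psychic".
# The p-value is P(data | null hypothesis): a non-psychic guesses a fair
# coin flip correctly half the time.
p_value = 0.5

# What the mistaken interpretation needs is P(not psychic | correct guess),
# which requires a prior. All numbers below are illustrative assumptions.
prior_psychic = 1e-6          # assumed base rate of psychics in the population
p_correct_if_psychic = 1.0    # assume a true psychic always guesses right
p_correct_if_not = 0.5        # a non-psychic matches a fair coin half the time

# Bayes' rule: P(psychic | correct) = P(correct | psychic) * P(psychic) / P(correct)
p_correct = (prior_psychic * p_correct_if_psychic
             + (1 - prior_psychic) * p_correct_if_not)
posterior_psychic = prior_psychic * p_correct_if_psychic / p_correct

print(p_value)            # 0.5
print(posterior_psychic)  # ~2e-06: the chance you ARE psychic stays tiny,
                          # so the chance you're NOT is nowhere near 50%
```

So the p-value and the probability that the null hypothesis is true can differ by orders of magnitude, which is exactly the error in "p is the probability the finding is wrong."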

Yet when you talk here, it's like, "Goodness is simple," and the arguments for that don't make sense to me (or at least, don't sound any more convincing than anything else) even if I don't know the best way to argue against them, and there's very much a vibe like, "Oh no, all of us philosophers agree that goodness is simple." Well OK then! I guess I'll take my 0.9999999... and go home.

So are these situations the same (I'm being the same kind of ignoramus I have encountered about math) or not? Damned if I can tell. (Hopefully not literally.)

Lance S. Bush

>But on the whole, they’re a pretty non-confused bunch.

What are you basing this on, aside from the example you gave? Given how much philosophers disagree with one another, and the state of the methods of the field, I think there's a good chance that confusion rates are exceptionally high at least about some things. For instance, I think many philosophers have misconceptions about language and psychology.

Your example highlights one way in which philosophers may not be confused. But even if there are broad categories of issues about which philosophers aren’t particularly confused, that isn’t a good indication that they’re not confused about other things. After all, if a person can demonstrate that they’re good at one thing, this isn’t good evidence they’re good at other things, unless there’s some reason to think their skill at one thing transfers over to or is an especially good indication they’d be good at the other thing.

Another concern with this example is that, while I’m sympathetic in this particular case, I may not be in others: there may be instances where I judge, having studied philosophy myself, that most philosophers rejecting this or that view are wrong to do so and are themselves misguided. It’s not at all clear to me that philosophers are consistently and reliably better at reaching reasonable conclusions than nonphilosophers. That most analytic philosophers agree on this or that position just does not strike me as a good indicator that they’re right, especially when my own assessment of the arguments in question has led me to endorse a minority position.

I’m curious about a few things:

(1) How high are endorsement rates of psychological egoism among nonphilosophers?

(2) How incorrigible are those beliefs?

(3) How incorrigible are their beliefs in general?

(4) How incorrigible are philosophers?

Convergence among philosophers alone is not necessarily a good indication that they are “correct.” There may also be systematic differences in the framing of certain philosophical issues, such that minimal induction into or education on those topics still results in disparities or resistance to change among nonphilosophers, not because they’re bad at reasoning, but because they are still conceiving of the philosophical issue differently than philosophers do.

Another issue may be that, insofar as philosophers converge on similar positions as one another, there may be reasons for this other than that philosophers are reasoning well. Social factors, selection effects, and systematic, shared bad reasoning may be at work. I do think this is the case on, e.g., the hard problem of consciousness, where I think popular belief in qualia/phenomenal states among philosophers is an indication of systematic failures in the field that are largely absent among nonphilosophers.

I think this generalizes: I think training in academic philosophy may cause systematic errors in reasoning that are not present among nonphilosophers, and that studying philosophy, while it may select for or improve some forms of reasoning (assessing validity, counterfactual reasoning), may make philosophers worse at reasoning in other ways.

In short: I think the answer to why philosophers aren’t ignorant in various common and important ways is that philosophy equips them to avoid those mistakes, but in practice it often does so by trading these common errors for a different set of errors.

You say:

>Second, philosophy improves thinking. When I was a wee lad—about four years ago—I was super confused about all sorts of things. I had ridiculous objections to ethical intuitionism and was unable to see that my moral views also rested on intuitions. Many of the things I thought even just two years ago—which you can see in my early articles—were crazy, false, and bizarre. They were ill-formed, muddled, and wrong.

With respect, I believe you’re still super confused about all sorts of things. If anything, if you endorse ethical intuitionism, I think you may even be more confused now, about this, than you were before. My moral values aren’t based on “ethical intuitions” and I doubt whether that even reflects a defensible psychological category. This is precisely one of those things I think many contemporary analytic philosophers are confused about, and perhaps more confused than nonphilosophers.

This intuition talk is a serious aberration and one of the worst aspects of contemporary philosophy. There is some defensible discussion of “intuition” if one is narrow and careful enough in using the term, but many philosophers aren’t careful at all, and its use is too sloppy and inconsistent for you or others to be so confident that you “have” intuitions and that they serve the roles you and others claim they serve.

I’ve seen little general indication that people who study philosophy become better at thinking in some generic or universal sense. They probably become better at certain kinds of thinking, but this may or may not come at the cost of becoming worse at thinking in other respects. Many philosophers strike me as objectionably formalistic, narrow in their thinking, insistent on dichotomies, dazzled by linguistic pseudoproblems, and focused on practically irrelevant esoterica; many also suffer from a style of thinking that makes communicating clearly with nonphilosophers more difficult.

Also, simply because people look back on their previous thoughts and think “How could I have been so confused? How could I have thought that?” doesn’t entail that they were so confused then, or that they think any better now. That sense that you were mistaken or confused could itself be the result of mistakes and confusion. I’m not convinced philosophers who think they’ve become enlightened actually have.

You say:

>A philosopher friend of mine recounted a funny story that took place when he was at a history lecture. One of the historians rather pompously said “see, as historians, we’re not so naive as to believe in eternal truths.” My friend pointed out that that belief itself is an eternal truth—did eternal truths previously exist but then stop existing?

They “pointed out” that it was an “eternal truth”? How did they point this out, if it’s not even clear it’s true? Such a statement may or may not have been intended to express an “eternal truth.” Philosophers often infer more than what is actually present in other people’s remarks, and then criticize them for it. That may have been going on here. I don’t see anything wrong with what that person said, yet you say:

>But non-philosophers constantly say stuff like this. They just don’t seem to get it.

I don’t think you’ve convincingly shown these historians were confused or didn’t “get it.” Instead, it looks like you’ve offered a dubious and uncharitable interpretation of the remark, projecting far too much philosophical presumption onto it as if they’d identified some kind of error, when I doubt that any such error occurred.

>But philosophy does seem to have a unique ability to improve the quality of one’s thinking.

That’s an interesting hypothesis, but I’d be more interested in clarification of what, specifically, these improvements are, and what arguments and evidence support this claim.

TheKoopaKing

>>My friend pointed out that that belief itself is an eternal truth—did eternal truths previously exist but then stop existing?

>They “pointed out” that it was an “eternal truth”? How did they point this out, if it’s not even clear it’s true? Such a statement may or may not have been intended to express an “eternal truth.” Philosophers often infer more than what is actually present in other people’s remarks, and then criticize them for it. That may have been going on here. I don’t see anything wrong with what that person said...

Yeah, that struck me as one of those "everybody on the bus stood up and clapped" moments - "You say we should be skeptical of everything, but should we even be skeptical of skepticism?!" - as if that retort engaged in good faith with and debunked all the skeptical arguments in history on behalf of the asker, rather than forcing an uncharitable interpretation of a position to make it seem self-undermining.

Ape in the coat

Philosophers are plenty confused. About their own subject, no less. Whole branches of philosophy consist of nothing but confusion.

But the kind of confusion philosophers usually have is different from the confusion of a layman. Laypeople are confused about philosophy because they do not think deeply about it and reason based on vibes about whether something seems weird or not. Philosophers are confused because they take ideas and their implications seriously but do not have a proper way to distinguish between valid and invalid reasoning, never mind sound and unsound reasoning.

Lance S. Bush

>Philosophers are plenty confused. About their own subject, no less. Whole branches of philosophy consist of nothing but confusion.

I agree. I'd also say that I don't think error is the default for ordinary people. I think Bentham wildly overestimates the benefits of studying philosophy and grossly underestimates ordinary competence. The whole thing seems like a bizarre, self-flattering characterization of the field.

Andries

I was very much trained in the analytic tradition of philosophy (especially Rylean and Wittgensteinian plain-language philosophy), and initially bought into the idea that what good philosophy does is dispel 'confusion'. Lay people and philosophers are 'confused', and the therapeutic task of the philosopher is to unmask and thus dispel this confusion, without thereby needing to posit or subscribe to an alternative theory, which sooner or later will appear confused from the point of view of a later, better theory. Though I still find the later Wittgenstein fascinating and valuable, I now reject this view of philosophy. Let me give an example of what I find to be a very good philosophical theory: Dennett's distinction between the physical, design, and intentional stances (a sort of replacement for, or update of, Aristotle's distinction between various 'causes'). This gives us a much improved framework for distinguishing between different sorts of explanatory tasks, I would say. (What Dennett says about the intentional stance also gives us a framework in which Wittgenstein's philosophical psychology can be re-read - with the necessary adjustments - in a systematic way.) However, the history of philosophy makes me very confident that later Dennettians, non- or anti-Dennettians, will find much that needs to be rejected or revised in how Dennett describes, and distinguishes between,

Lance S. Bush

There are continued efforts to refine Dennett's perspectives. Here's one good example: https://www.tandfonline.com/doi/full/10.1080/00048402.2021.1941153

Patrick D. Caton

I guess everyone is solipsistic at their core, especially philosophers

Martin Greenwald, M.D.

It's interesting to see how you are simultaneously quick to say how "crazy, false, and bizarre" some of your previous ideas were, yet you seem quite confident now that many of the ideas you hold are "obviously true" (as you like to say). I wonder how obviously true those same things will seem two years from now?

Do the things you see as obviously true right now somehow feel 'more true' than the things which once felt obviously true but which you now see as obviously false?

Bentham's Bulldog

I think for the last ~1.5 years I've been good at philosophy, so while I've believed some false things, I haven't believed things that are crazy and bizarre. Even my false views from a year ago, like atheism, weren't crazy the way my views in high school were.

Martin Greenwald, M.D.

I guess the meta point I'm getting at is that one of the things which makes for a good philosopher, and for better progress in philosophy, is being able to hold beliefs with some degree of lightness, skepticism, or humility. Maybe the way you describe things as "obviously right/wrong" etc., with that kind of forceful certainty, is more a matter of rhetoric/style or something. But the fact that you can see that at least some of your prior beliefs were wrong might give you pause re: the current beliefs you hold with such certainty.

Lance S. Bush

What do you think makes someone good at philosophy?

Lance S. Bush

>Daniel Dennett never seemed to realize that he had thoughts and feelings (note, Lance S. Bush, this is a joke—no need to write a 5,000-word article about this).

Remarks made as jokes can still serve, whether intended or not, to denigrate and mock people whose views you disagree with. Also, many times critics of Dennett who make these remarks don't appear to be joking. If you have to explicitly say it's a joke, it's probably not a very good one.

Bentham's Bulldog

I have to say it's a joke explicitly because otherwise you'd be likely to write a long essay about it :)

Lance S. Bush

I may write a long essay anyway out of spite.

Dylan Richardson

I don't think this is the kind of knockdown example that you think it is. Much of what is behind the agreement of philosophers on psychological egoism is simply terminological refinement. That's an important process, but it's largely a work of social consensus, not inherent expertise at philosophizing. More akin to knowledge than ability.

TheKoopaKing

I think some training in philosophy is better than none - particularly when it comes to shitting on people who graduated from the Matt Dillahunty school of "arguments aren't evidence" and "atheism is a lack of belief in God so the 'burden of proof' is on the theist." But academic philosophy rests entirely on nonsense like calling things "counterintuitive" and making implicit appeals to how "intuitive" most people would judge a view despite having 0 empirical evidence that that is the case.

normality

This is not super related to your topic, but the coincidence of us both using "invincible ignorance" around the same time made it irresistible for me to drop a recent post of mine. It's about how we give our inner critics an authority that they haven't earned, how we fail to review the soundness and track record of their arguments, and how we can be better off if we do such reviews. https://renormalize.substack.com/p/psychological-autoimmune-disorders

JustAnOgre

>The way that people form religious beliefs is so intellectually irresponsible that their conclusions are almost guaranteed to be false

Because these are not really beliefs; this is not really about beliefs. The simple fact that most religious people are in the religion they were born into alone suggests they simply want to be in the community they were born into. The community requires people to say some shibboleths. They are not required to believe them (no one can see inside people's heads); they are merely required to say that they believe them.

Beckett

But isn't it a kind of philosophical confusion to think that religious beliefs must be justified like scientific beliefs, that the arguments and counterarguments must be considered? It seems similarly confused to think that Christians believe in "the existence of god", and then behave in certain ways, as opposed to recognizing that Christian belief in part consists in certain kinds of behavior. When Caplan says that religious beliefs are intellectually irresponsible, I am inclined to exclaim that religious beliefs are not intellectual beliefs, are quite unlike the beliefs Caplan has about economics.

see http://www.marcmarenco.com/uploads/6/9/6/3/69632375/norman_malcolm_no_password.pdf
