0
I recently wrote an article arguing that the brain has a basic capacity for rationality that is not just mechanical. The basic argument is relatively simple: we can know facts that do not plausibly affect the behavior of the atoms in the brain. The fact that the world will remain inductive in the future can’t plausibly explain how we come to know that it will. But if the reason you believe something has nothing to do with its truth, that gives you a reason to give up the belief; if the reason you think Jim is hungry is that Fred, a notorious liar and fraud, told you so, then you should give up your belief, because Fred’s testimony bears no relation to the truth. Korman and Locke defend this argument in the moral case and express the sentiment well: if you have an intuition that God exists, but you know you only have this intuition because computer programmers directly imbued it into you, that seems to undermine your justification for inferring that God exists on its basis. In a similar way, if our beliefs about morality, modality, mathematics, metaphysics, logic, and epistemic normativity have nothing to do with the truth of the facts in those domains, we should abandon them.
This is not a standalone article. You should only read this article after reading my other article on the subject which presents the main argument. This article provides a few considerations that help to bolster the core case, but they are not where most of the evidence comes from. The core case was expressed in the last article; this is just a supplement. But I think this article does help to bolster the case; various plausible views provide explanations of our non-physical knowledge, reasons to think we have it, and reasons to think that absent it we cannot know many of the things that we think we know.
1
Of the considerations that I’ll raise in this article, this is the one I find by far the most convincing. This argument might be called the phenomenological argument: just by reflecting on the experience of gaining knowledge about the non-natural facts, you can see that NR is right. Think about what goes on when you conclude that pain is bad. You think about pain with your mind and, using your intellect, just see that it’s bad—that it’s the kind of thing that is not worth existing. Or consider how you come to know that there can’t be contradictions in any possible world. To conclude that, you think about what it would be for some contradictory state of affairs to obtain, and can just see that it can’t.
Obviously this is influenced by the facts themselves. You think pain is bad by reflecting on its actual properties and grasping its actual badness. You think contradictions are impossible by thinking about what it would be for a contradiction to obtain, and see that it can’t. But clearly merely possible worlds can’t change the atoms in our brains! Therefore, what we do when we think about possible worlds and grasp their properties must not be reducible to purely mechanistic facts about the atoms in our brain.
Just as, with consciousness, you can see that the reason you think you’re conscious is that you’re directly acquainted with your consciousness, so too you can see with your mind that certain basic mathematical facts are true.
This is the picture that common sense gives. When you ask why most people think that 1+1=2, if they haven’t been indoctrinated by philosophy, they’ll say something like “I can just grasp its truth by thinking.” But if that’s true of any non-natural fact, then NR must be true!
2
Philosophers are fond of appealing to evolutionary debunking arguments, which purport to show that people’s beliefs are mere byproducts of non-truth-tracking processes. A debunker might argue, for example, that our zealous opposition to incest, even in cases where it’s stipulated that it has no negative consequences, is best explained by evolutionary factors. This is generally taken to undermine the belief; if there is an explanation of why you would come to have some belief that is unrelated to its truth, then that seems to undermine your justification for the belief.
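This undermining principle can be given a rough Bayesian gloss (a sketch of my own, not a formal result drawn from the debunking literature). Let $E$ be the proposition that you intuit that $P$, and let $D$ be the claim that your intuiting $P$ is entirely explained by factors independent of whether $P$ is true. If $D$ holds, the intuition is no more likely given $P$ than given $\neg P$:

$$\Pr(E \mid P, D) = \Pr(E \mid \neg P, D),$$

so by the law of total probability $\Pr(E \mid D) = \Pr(E \mid P, D)$, and Bayes’ theorem gives

$$\Pr(P \mid E, D) = \frac{\Pr(E \mid P, D)\,\Pr(P \mid D)}{\Pr(E \mid D)} = \Pr(P \mid D).$$

That is, once you learn $D$, the intuition $E$ provides no confirmation for $P$; your credence falls back to whatever your prior was.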
But suppose that we deny NR. This should undermine almost every debunking argument. The denier of NR agrees that the fact that our beliefs are true doesn’t explain them and that there is a full evolutionary explanation of all of our moral beliefs. But if all of our moral, modal, mathematical, metaphysical, logical, and epistemic knowledge is unrelated to the truth of the facts, then debunking arguments mostly stop working.
Notably, there might still seem to be some debunking arguments that work. If the reasoning a person uses when they believe something is faulty—e.g. they believe there are infinitely many prime numbers because the idea of finitely many prime numbers makes them angry—then there might be a successful debunking. So if one can show that we form beliefs on the basis of emotions or other features unrelated to reasoning, that would still be a successful debunking. But, for example, the fact that we believe in special obligations on account of purely evolutionary facts would not be a successful debunking account.
With this in mind, we can provide a forceful argument for NR:
1. If NR is false, paradigmatic debunking accounts fail.
2. But paradigmatic debunking accounts don’t fail.
3. Therefore, NR is true.
By paradigmatic debunking arguments, I mean debunking arguments of the standard sort, akin to the one I gave before. It seems that explaining a person’s moral beliefs in terms of purely Darwinian mechanisms does serve to discredit their intuitions. This is the core assumption behind the very intuitive debunking argument. But if NR is false, then such a move is illegitimate.
The obvious premise for the opponent of NR to reject is 1. But 1 has considerable intuitive appeal. In addition, there are plausible abductive grounds for it. It tends to be non-consequentialist intuitions that we can give evolutionary explanations of, but these tend to be the least reliable. This is obviously extremely controversial, but if you accept it, it is best explained by 2 being true, which gives us good reason to adopt NR.
3
Tomas Bogardus is quite good at coming up with paper titles that explain succinctly what his paper is about. In the last article, I referenced a paper of his titled Only All Naturalists Should Worry About Only One Evolutionary Debunking Argument which argued for the position in the title, and this one is similarly well-titled, being called Knowledge is Believing Something Because It's True (this one is coauthored with Will Perrin). Bogardus argues that this view is the best account of knowledge; it avoids the otherwise general formula for generating Gettier cases and accurately explains our intuitions about cases. We can even modify the view to be justified, true belief that’s believed because it’s true.
Bogardus says that a proposition is believed because it’s true if and only if the fact that the belief is true features prominently in the explanation of why one believes it. For example, I know there’s a table on the basis of seeing it because the fact that there is a table explains why I see it.
I’m not sure if Bogardus is right—it’s a topic I need to do more research about, though I have weak leanings towards it. But it’s at least a reasonable view.
But if this is true, then denial of NR is incompatible with knowledge about any of the non-physical things I describe. Because deniers of NR hold that the reason you believe the things you do about such domains has nothing to do with their truth, it wouldn’t count as knowledge. But plausibly we do have knowledge of those domains, so either Bogardus’s view of knowledge or NR has to go. Which one goes will depend on various plausibility judgments.
Worst of all, most accounts of knowledge, even ones other than Bogardus’s, will plausibly undermine our ability to know things whose truth doesn’t bear on our beliefs. Many suppose, for example, that to generate knowledge, a belief-forming process has to be sensitive: had the proposition been false, one would not have believed it. But this is undermined by denial of NR. Thus, plausibly most views we could have about knowledge would procedurally bar our ability to know about any non-natural domains unless NR is true.
Edit: if you think that safety is the additional constraint required for knowledge, you get out of this puzzle. Richard pointed this out in a comment, and so I’ve now corrected the claims about safety.
4
This is probably the weakest consideration evidentially, because it probably doesn’t work, but on account of there being some debate over its success, it’s worth mentioning. In 1961, Lucas appealed to Gödel’s incompleteness theorems to argue that the brain must be doing things that a mere computer couldn’t do. Penrose has argued for a similar thesis in The Emperor’s New Mind and Shadows of the Mind. This argument has come under criticism from many, including Chalmers, Putnam, and Searle. The SEP page summarizes:
Various philosophers and logicians have answered the critique, arguing that existing formulations suffer from fallacies, question-begging assumptions, and even outright mathematical errors (Bowie 1982; Chalmers 1996b; Feferman 1996; Lewis 1969, 1979; Putnam 1975: 365–366, 1994; Shapiro 2003). There is a wide consensus that this criticism of CCTM lacks any force. It may turn out that certain human mental capacities outstrip Turing-computability, but Gödel’s incompleteness theorems provide no reason to anticipate that outcome.
Ouch!
Still though, it has some defenders. Penrose has written responses to most of the critics. And Penrose is no one to disregard. As his Wikipedia page notes:
He is regarded as one of the greatest living physicists, mathematicians and scientists, and is particularly noted for the breadth and depth of his work in both natural and formal sciences.
And there are many other versions of Penrose’s argument that rely on different assumptions. As the IEP page notes:
Finally, there are some alternative anti-mechanism arguments to Lucas-Penrose. Two are briefly mentioned. McCall (1999) has formulated an interesting argument. A Turing machine can only know what it can prove, and to a Turing machine, provability would be tantamount to truth. But Gödel’s theorem seems to imply that truth is not always provability. The human mind can handle cases in which truth and provability diverge. A Turing machine, however, cannot. But then we cannot be Turing machines. A second alternative anti-mechanism argument is formulated in Cogburn and Megill (2010). They argue that, given certain central tenets of Intuitionism, the human mind cannot be a Turing machine.
These other versions have tended to be subject to less scrutiny. So it’s not terribly implausible that one of these arguments succeeds. The fact that there’s some probability of this provides even more evidence for NR.
And Penrose has an entire theory of consciousness, called the Orch OR theory, that’s intended to explain how we come to know facts about mathematics that a mere computer could never know. Penrose argues that this is supported by physical and biological evidence. I haven’t investigated it thoroughly, but if true, it would mean the correct theory of the neural correlates of consciousness is one that potentially enables the brain to do non-mechanical things. Such a result would be significant.
There are even more reasons to think that computationalism is false. These provide some support for NR, by undermining the main competitor, but aren’t sufficient to prove it, for there are views that reject both NR and computationalism about the mind. Objections to computationalism are summarized well here. The biggest worry is the following:
A recurring worry is that CTM is trivial, because we can describe almost any physical system as executing computations. Searle (1990) claims that a wall implements any computer program, since we can discern some pattern of molecular movements in the wall that is isomorphic to the formal structure of the program. Putnam (1988: 121–125) defends a less extreme but still very strong triviality thesis along the same lines. Triviality arguments play a large role in the philosophical literature. Anti-computationalists deploy triviality arguments against computationalism, while computationalists seek to avoid triviality.
The various arguments against computationalism are very controversial! In this article, I cannot hope to get to the bottom of them, but how one evaluates those could provide even more support for NR.
So to recap the case, to avoid NR without embracing global skepticism one would have to:
Dispute the principle that if you know your intuition that P is unconnected to the truth of P, you should give up your belief in P based on that intuition, which seems like a straightforward application of Bayes’ theorem. To do this, one would have to respond to the ingenious arguments of Korman and Locke.
Provide some plausible mechanistic account of how we happen to have so many non-natural intuitions that are accurate. This seems like a bizarre coincidence.
Address Bogardus’s argument that the philosophy of peer disagreement gives us no reason to trust our moral intuitions over those of people relevantly like us in nearby possible worlds who intuit the opposite of what we do. If natural selection had turned out differently, we would have believed the opposite, so on this argument we should give up our non-natural beliefs.
Refute the argument in section 1—that we can just phenomenologically observe the truth of NR by seeing how we come to have our non-natural beliefs.
Give up on most evolutionary debunking arguments and reply to the abductive argument for successful evolutionary debunking arguments.
Provide some account of knowledge that enables cases where one’s belief that P isn’t dependent on the fact that P to still count as knowledge. This account should avoid Gettier cases and be intuitive.
Address the various other anti-mechanism arguments that have cropped up over the years—for example, from Penrose, Lucas, Cogburn and Megill, and McCall.
Until someone does this, the prospects for deniers of NR seem rather dim.
> "“I can just grasp its truth by thinking.” But if that’s true of any non-natural fact, then NR must be true!"
How does that follow? I think we can grasp truths just by thinking. But I don't think that our grasping truths involves non-natural causes shifting atoms around in our brains. Rather, I think that grasping truths is an epiphenomenal process: there are neural underpinnings that (together with the psychophysical bridging laws) give rise to our conscious understanding or "grasp" of various abstract or otherwise non-physical truths. To count as knowledge, the connection has to be non-chancy in the right kind of way. But (as I argue in Knowing What Matters) beliefs can be reliable/non-chancy in this way without needing to be literally caused by their truth-makers. A kind of structural isomorphism to mathematical facts can reliably yield mathematical knowledge, for example, without needing the numbers themselves to do the causal work. It's neither magical nor mysterious that computers can reliably do arithmetic, after all. We're different in that when our brains do arithmetic, it produces in us some *conscious understanding* of the mathematics that is (presumably) missing in computers. But I don't see any basis for thinking that introspection on this process reveals non-mechanistic causes operating on our brains.
Surely it's not the merely possible world itself that changes atoms, but rather the psychological projection of that possible world (which is itself a movement of physical particles) that changes atoms. In the same way that my thinking about Gandalf can influence my behavior, although the non-existent Gandalf himself can't influence my behavior.