Contra Chappell On Knowing What Matters
Our moral beliefs have to be in some way tied to their truth
A brief note: I find Richard’s view here a bit confusing, so it’s possible that I’m misrepresenting him slightly. I think I got the broad strokes right, but I may have botched some of the minutiae.
Richard Chappell has a nice paper called Knowing What Matters, in which he talks about how we know what matters. He disagrees with a lot of earlier realists and argues that we don’t need an independent reason to think that our moral beliefs are truth-tracking. Just as we don’t need some independent reason to think that our beliefs about the external world are reliable—we should trust that things are as they appear, absent countervailing reasons—we don’t need some external reason to trust our moral intuitions. We should instead have a default trust that things are as they appear. Once we have this default trust, responding to the evolutionary skeptic is easy—it doesn’t matter how our belief that kindness is good developed, if it’s justified by virtue of seeming true, even when subjected to rational scrutiny.
The evolutionary debunking argument against moral realism says that our moral beliefs are unreliable because they developed as a result of evolutionary pressures, meaning their development is unrelated to their truth. If you discovered that some moral belief was the result of hypnosis, for example, then you should give it up—your intuition that it’s correct is unrelated to its truth, so it would be pure coincidence if your intuition turned out to be correct. Chappell, in response, denies this; he thinks that even if some beliefs developed as a result of hypnosis, this would give you good reason to reflect carefully on them, but not to declare them debunked outright. If you have a good reason for them—which you do if you have an intuition that they’re right that survives critical reflection—then this discovery wouldn’t undercut your justification for believing in their reliability.
There’s a lot to like about the paper. I think that my response to the evolutionary debunker is similar to Chappell’s—and potentially less contingent on the details of evolutionary theory than the solutions of most realists. Before we proceed, let me describe what I think is the best reply to the evolutionary skeptic. I think that humans—especially philosophically informed ones who think hard about issues—can be good at thinking. Part of being good at thinking involves being accurate at ascertaining the relations between one’s various beliefs—a good thinker will recognize when their beliefs conflict and will update accordingly, and will recognize when some beliefs can be derived from others. But part of good thinking is just having the right intuitions—the physicist Ed Witten, for example, is especially good at thinking because of his fantastic ability to have correct mathematical intuitions, grasping the truth of some equation decades before it’s proved. But if you think that we are good at thinking, and that part of being good at thinking involves having the right intuitions, then we should think our intuitions are right, since they were formed by beings who can think well and reason rationally.
If this is true, then I think we can set most evolutionary debunking arguments aside. The evolutionary debunker claims that our belief that pain is bad developed from evolutionary pressures, and that this serves to debunk it. But if you think we’re smart, rational, and able to think well, then by introspection, we can figure out whether our belief that pain is bad is true or false. It turns out that when I reflect, it strikes me as obviously true. Thus, we don’t need to investigate the causal history of our beliefs when we have a mechanism for correcting the false ones.
I think that this account of good thinking is right. One who can grasp deep arithmetical truths just by thinking is a better thinker than one who cannot. I am a better thinker than my four-year-old self and, as such, I now recognize that there are things in life more valuable than dark chocolate. One who can grasp with their intellect that the shortest distance between two points is a straight line is more intellectually impressive than one who cannot.
This account of good thinking allows us to explain how our moral beliefs being right is not mere coincidence. The explanation of why I have accurate moral beliefs is no more extravagant than the explanation of why I have accurate modal beliefs; I can think clearly and grasp what’s true given that I have a functioning brain.
Crucially, we can check our beliefs. When a belief is false, this can usually be shown in multiple ways. So the fact that various beliefs—like the belief that pain is bad—withstand scrutiny gives us reason to trust them. There’s a reason that we can use rational argument to disprove many of our moral beliefs, but not the belief that pain is bad; that reason is that pain is actually bad, and many of our other moral beliefs are wrong. It would be miraculous if pain weren’t actually bad yet there were no good reasons to think so beyond the debunking account. It would be a shock if this falsehood did not lead to other implausible and surprising entailments.
Because of this account, I think that Chappell gets a lot right. He’s right that it doesn’t much matter how we came to have our moral beliefs in the first place—because we can reason and think carefully, we can grasp the correct moral beliefs. He also correctly identifies the irrelevance of evolution—it doesn’t matter whether our moral beliefs come from evolution or some other process. Thus, the fixation on evolutionary debunking, in particular, is odd.
But I think Chappell is wrong that, on his view, we can have default trust in our intuitions. On my account, we can have direct acquaintance with the prima facie plausibility of various moral principles. When I conclude that pain is bad, the evidence for this is not just that I have an intuition that pain is bad, but that I can grasp what pain is, and I can see its badness with my intellect, the same way that I can see that the shortest distance between two points is a straight line (assuming space is non-curved, you pedants!). The evidence is not that I have an intuition that P, but my ability to grasp the contents of P. I have more evidence for the badness of pain than I would if some third party informed me that if I thought about it, I’d intuit that pain was bad.
The problem is that Chappell’s account undermines our ability to have default trust in our intuitions. Suppose one claims that we should have default trust in our visual faculties, but they also prove that evolution wouldn’t have made our visual faculties reliable. Well then, we shouldn’t have default trust—they’ve given us a reason to withhold it by proving that our visual faculties are unreliable.
In a similar way, suppose one comes to believe that 435 x 43 = 18,705. They could say that one should have default trust in one’s intuitions, but if they also claim that they came to believe this by guessing the answer, that undermines the case for default trust. You can have default trust in some process if you have a reason to think it will get the right answer, but if your theory entails that its getting the right answer is pure coincidence, then that undermines default trust.
Chappell seems to suggest that because the moral truths are necessary, talk of getting the right answer being a coincidence is misplaced. But this isn’t true—it’s necessary that 435 x 43 = 18,705, but if I arrived at that answer by guessing, my belief would still be unreliably formed.
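(As a quick sanity check on the example—this is just arithmetic, not anything from Chappell’s paper—the product does come out to 18,705:

$$435 \times 43 = 435 \times 40 + 435 \times 3 = 17{,}400 + 1{,}305 = 18{,}705.)$$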
Chappell also seems to suggest that a solution like mine is impossible—that you can’t have a reason to think that some moral intuitions are reliable without making substantive claims about which moral claims are right; that you can’t have reason to think your belief that pain is bad is reliable unless you have a good antecedent reason to think that pain is bad:
Parfit thus faces a dilemma. What he really needs is a purely formal reason for thinking that (individual-level) evolution, more than other natural causes, would likely have led us morally astray. But absent our substantive normative assumptions — from the perspective of a moral ‘blank slate’ — there would be no basis for any such judgment. On the other hand, given our substantive normative assumptions (and the psychological fact that those are our normative assumptions), such a claim is trivially unfounded.
But this isn’t true. Let me illustrate with two examples:
Suppose you know some entity has good eyesight because evolution favored creatures like it who saw things as they are. You might have a reason to trust its visual appearances because you have some external reason to think its vision is reliable, even without making any substantive claims about what exists. You might be justified in thinking there’s a bear if it sees one, even if you had no preexisting reason to think that there was a bear.
Suppose you know that Richard Chappell is a smart guy who is usually right. Because he’s a careful reasoner, you know that most of his philosophical beliefs are correct. In this case, even if you haven’t explored whether some view is correct, you might have a reason to think Chappell is right about it. You can have an internal reason to trust the process without making any substantive assumptions about reality.
Thus, if evolution favored beings who were smart, rational, and good at thinking—and we have ample evidence that it did—we can be justified in believing various moral claims. Those beliefs were produced by our ability to think well, which serves to vindicate, not undermine, their reliability.
Richard tries to defend his view as relatively natural. He suggests that, though it may strike people as circular, using our first-order normative views to vindicate their reliability is no different from using induction to verify induction. We can imagine a counterinductionist who thinks that the fact that induction has always held in the past gives him positive reason to think it won’t hold in the future. But I think this person isn’t thinking well; it’s obvious that counterinduction is irrational. The person who thinks you should do things that have always been wrong before is foolish, and so you can discard their views. But if you think we’re not foolish—which I do—then you can have a purely internal reason to trust your own views. This approach has all the advantages of Richard’s view, allowing us to vindicate our moral beliefs, without any of the epistemic costs.