1 Introduction
Over the years, I’ve given quite a few arguments for the existence of God. The three that I find most formidable are the arguments from psychophysical harmony, fine-tuning, and anthropics. Slightly behind these three in convincingness are a few other arguments that have significant force but aren’t quite as decisive: the arguments from skepticism, nomological harmony, consciousness, and moral knowledge. It shall be the last of these that I’ll discuss in this essay.
A brief disclaimer before I dive into the argument: it’s not really distinctly about moral knowledge. It’s more broadly about all kinds of knowledge of things not directly experienced. So even if you deny that we know moral truths, the argument may still succeed. Moral knowledge is helpful for illustrating how naturalism might strip away our justification for believing various truths, but if the argument works, it’s not just moral skepticism that the naturalist is committed to. They’re committed to a rather more pernicious form of skepticism about nearly all the inferences that they would ordinarily wish to make.
One other brief disclaimer: when presented with this argument, a common reply is the following: “you are saying that we need God for moral knowledge. But many people are quite moral without believing in God. My friend Steve, for instance, helps out at the local soup kitchen every week, and yet he doesn’t believe in God. In fact, I know a great many theists who aren’t very nice, and seem not to have much moral knowledge. So how can this argument be right?”
This reply is profoundly confused. Theists do not claim that one must know God exists to have moral knowledge. Rather, they claim that God best explains why humans have the ability to know about morality in the first place. As an analogy, one might argue, based on fine-tuning, that God is needed for life to have come to exist. This does not, of course, mean that life needs to believe in God to exist—paramecia are surely atheists!
So then how does the argument go? Well, it starts from the assumption that we have moral knowledge (again, this is just illustrative; we can drop this assumption and make the argument about other things). We know, for instance, that you shouldn’t take little babies and hit them with hammers, at least barring exceptional circumstances. We know that you should subscribe to this blog. We know that it’s wrong to set people on fire.
But here is a curious fact: when explaining the evolution of morality, evolutionary biologists make no reference to moral facts. When explaining why we believe that it’s wrong to torture babies, evolutionary biologists’ explanations never mention that it really is wrong to torture babies. This is for a rather simple reason: on the standard naturalist picture (which evolutionary biologists tacitly assume while in the lab), moral facts are not the sorts of things that can move around atoms in our brains. The fact that slavery is wrong does not directly shape selection pressures in ways that cause people to believe it.
This leaves us with a rather curious situation. The moral beliefs that we hold seem to have been formed wholly without reference to their truth. Yet despite this, we came to have moral knowledge—our moral beliefs are accurate. This would be a great coincidence, rather on the order of us believing true and highly specific details about a distant star, even though we can’t observe the star with a telescope or detect it in any other way.
How do we come to know moral facts? Some, of course, we infer from other moral facts. You know it’s wrong to murder Alice because you know the more general fact that it’s wrong to murder people. But for us to know those more general facts, we need prior justification for them. The most basic moral facts are known through intuition. We know that, say, excruciating agony is bad by thinking about what it’s like to experience excruciating agony. Through this process, we come to intuit that it’s not good at all!
However, as we’ve seen, on the standard evolutionary story, it seems that our moral intuitions aren’t explained by the facts they’re about. The reason we think that slavery is wrong has nothing to do with the fact that it really is wrong. This leads to two distinct problems:
It seems an awfully huge coincidence that our moral beliefs ended up so accurate. We could have possessed a wide range of moral beliefs very different from the ones we ended up with. It’s quite a big coincidence that the ones we happened to have turned out mostly right. This would be rather like if a person drew a random image, only to find that it precisely corresponded with the figure of Abraham Lincoln, though they had never seen Lincoln and hadn’t the faintest clue what he looked like.
Worse, it seems this decimates our justification for trusting our moral intuitions in the first place! If you believe X based on some direct experience Y, it should be that the truth of X explains why you experience Y. You’re justified in thinking there’s a table based on seeing it—but if you knew that you were hallucinating, so that you’d see the table whether or not it was really there, then you’d no longer have justification for thinking there was a table. But if our belief that, say, slavery is wrong isn’t explained by the fact that slavery is wrong, then by the above reasoning, we’d lose our justification for thinking slavery is wrong.
Thus, the naturalist faces two challenges. First, they must explain how we come to have moral knowledge, even though the moral facts don’t explain our beliefs about them. Second, they must explain why it is that—wonder of wonders, miracle of miracles—our moral beliefs ended up mostly right.
One might wonder: how does the theist avoid this? Certainly they should not deny evolution. I can see two basic ways for a theist to go.
First, they can simply hold that God sets up the evolutionary process so that we have mostly correct moral beliefs. God wants us to know the truth about morality, so he makes sure to design the initial conditions of the universe so that when we evolve, we’ll be capable of understanding morality. In this case, our moral beliefs are explained by the facts they’re about. They are reliable just as a calculator is reliable—because an agent made it so.
Second, they can hold that God gives us some direct faculty of intuition that allows us to grasp moral truths—as well as other non-natural truths. This faculty allows our moral beliefs to be influenced by the moral facts! On this picture, we are agents, not mere mechanisms. We have an ability to reason that does not reduce to mere computation.
What I’ve presented so far was a pretty informal overview of the argument. In the next section, I’ll be a bit more precise.
2 Being more precise
Sometimes, a person can be justified in believing some fact X, even if their beliefs aren’t caused by the truth of fact X. For instance, I think the sun will rise tomorrow. I’m presumably justified in holding that belief! But my belief is not caused by the fact that the sun will rise tomorrow. If, unbeknownst to me, God had scheduled the second coming for later this afternoon (for is it not written that we know not the day nor the hour?), I would still have believed that the sun will rise tomorrow.
To handle cases like this, let’s differentiate between basic justificatory experiences and derivative justificatory experiences. A basic justificatory experience for a belief is one where the experience would be part of the ultimate justification for the belief. So, for instance, if I see a table, and think there’s a table, my seeing the table is a basic justificatory experience. If someone asked why I thought there was a table, and I was reasoning correctly, I’d say “because I see it!”
In contrast, a derivative justificatory experience for a belief is one where the experience does not feature directly in the explanation of the belief. If someone asks why you believe the sun will rise tomorrow, the answer given by someone reasoning correctly will not be “I directly intuit that it will.” After all, that can’t give you extra evidence, because you’d intuit that it will rise tomorrow whether or not it will. Instead, your explanation would be “it’s risen every time in the past, and the most likely future pattern is for things to continue as they always have in the past.” You intuit the sun will rise tomorrow, but this is effectively shorthand for intuiting that things are likely to continue as before. Your intuition that the sun will rise tomorrow should not give you extra justification for thinking it will beyond the uniformity intuition that it’s based on.
With all that wind-up out of the way, the principle, put in pointlessly precise philosophese, is as follows:
An experience E with contents C does not provide basic justification for believing C to some subject S if S knows C doesn’t explain E.
Slightly more simply: if you have an experience with X as the contents, and you know that the reason you have the experience has nothing to do with the existence of X, then you’re not justified in believing in X on the basis of it. If you see a table, but know you’re hallucinating, so you’d see it even if it wasn’t there, you wouldn’t get any basic justification for believing that there’s a table. If you intuited that some gumball machine had 1042 gumdrops, but you knew that some aliens hypnotized you to intuit that whether or not it was true, you wouldn’t have justification for thinking there were 1042 gumdrops.
Now, getting the exact details about a principle like this right is tricky. But something like it has to be right. Otherwise, we’re unable to explain why the person who knows she’s hallucinating a table does not have justification for thinking there really is a table.
So now let’s see what this principle implies about moral knowledge under naturalism. Well, our moral beliefs aren’t explained by the facts they’re about. Thus, by the above principle, our moral intuitions can’t provide basic justification for holding any moral beliefs. But all of our most foundational moral beliefs are based on intuition—thus, this principle means we can’t have any justified moral beliefs under standard naturalism.
3 Why this obliterates almost all knowledge, not just moral knowledge
In fact, the situation is quite a bit worse. Not only do we have no justification for moral beliefs, we’ll lose justification for nearly all of our beliefs. But in order to show this, I will first have to broaden the principle to:
An experience E with contents C does not provide basic justification for believing C over some other proposition B to some subject S if S knows C doesn’t explain E any better than B.
In short, if there are two beliefs that equally well explain some experience, that experience can’t give basic justification for the first over the second. For instance, suppose that aliens flipped a coin. If it came up heads, they put a table in the center of the room. If it came up tails, they rewired my brain so that I would hallucinate the existence of the table whether or not there was one. In this case, the fact that I see a table would give no justification for thinking it came up heads—even though if it came up heads, my belief would be explained by the table.
Or to give an example involving intuitions, suppose that aliens flipped a coin. If it came up heads, those dastardly aliens picked a random number, and hypnotized me to intuit that that number of gumdrops was in the jar. If it came up tails, they figured out how many gumdrops were in the jar, and then hypnotized me to believe that particular number of gumdrops were in the jar. If I intuit that there were 1120 gumdrops in the jar, this does not give me justification for thinking it came up tails. This is so even though if it had come up tails my belief that there were 1120 gumdrops would be explained by the fact that there were 1120 gumdrops.
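This verdict can be checked with a toy Bayesian calculation. The sketch below is purely illustrative (the jar size N and the uniform prior over gumdrop counts are my assumptions, not part of the thought experiment): because the intuited number is exactly as likely under heads as under tails, conditioning on it leaves the coin at even odds.

```python
from fractions import Fraction

# Assumed setup: the jar holds between 1 and N gumdrops, with a uniform
# prior over counts (illustrative assumption).
N = 2000
prior_heads = Fraction(1, 2)  # fair coin

# Probability of intuiting the specific number 1120 under each branch:
# heads -> the aliens hypnotize me with a uniformly random number: 1/N
# tails -> the aliens report the true count, itself uniform over 1..N: 1/N
like_heads = Fraction(1, N)
like_tails = Fraction(1, N)

# Bayes' theorem: the equal likelihoods cancel, so the posterior equals the prior.
posterior_heads = (prior_heads * like_heads) / (
    prior_heads * like_heads + (1 - prior_heads) * like_tails
)
print(posterior_heads)  # 1/2 -- the intuition tells me nothing about the coin
```

The same cancellation happens no matter what N is, which is the point: an intuition equally probable under both hypotheses cannot favor either.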
But if we accept this then the situation is quite bad for the naturalist. Sound reasoning is inevitably based on priors—how likely you think hypotheses are before you’ve considered the world. But our beliefs about priors aren’t explained by the facts they’re about. Your belief that, say, there’s a low chance the sun will explode tomorrow is not actually explained by there being a low chance that the sun will explode tomorrow. On naturalism, therefore, we lose justification for such beliefs.
Consider two hypotheses:
1. Tomorrow physics will continue as before.
2. Tomorrow physics will break down wildly and become completely different. However, every day before tomorrow, physics will work consistently in the ways described by the laws of physics.
Both hypotheses predict we’d have all the same intuitions. Both predict identical worlds prior to tomorrow. Thus, on naturalism, it seems we have no justification for believing 1 over 2 based on intuitions. Given that intuitions are the things that shape our priors, we cannot suppose that 1 has a higher prior than 2. However, if we can’t rule out 2 based on priors, because 2 explains all the available data strictly as well as 1, we have no basis for preferring 1. Naturalism, therefore, by default leads to insane skepticism.
The same points apply, by the way, to:
Modal beliefs: these are beliefs about which things are possible and necessary. Examples of modal facts include: unicorns are possible, married bachelors are impossible, and things must be themselves. But one’s modal beliefs are all based on intuitions that we’d have even if they were false, on the naturalist story. The naturalist’s explanation of why we think there can’t be married bachelors makes no reference to either the fact that there can’t be married bachelors or any other fact from which we could infer the impossibility of married bachelors.
Logical beliefs. The validity of modus ponens isn’t the sort of thing that can move around atoms.
Metaphysical beliefs. We’re presumably justified in thinking that a thing can’t cause itself and that something can’t have a color without a shape. But how, on the naturalist picture, are we justified in holding those beliefs? The fact that things can’t have colors without shapes doesn’t explain why we believe it to be true.
Epistemic beliefs. The fact that it’s reasonable to believe what’s supported by the evidence can’t move around atoms in our brains.
How does theism avoid this? The same way it accounts for moral knowledge! It holds that there is a God who molds our beliefs so that they are accurate—explained by the facts they’re about. Perhaps we’re even given the capacity for directly intuiting the aforementioned truths.
5 Loose ends
So far I’ve argued for a principle which, if correct, would eliminate moral and inductive knowledge if naturalism is true. The naturalist can, of course, get around this argument by simply positing that we have some faculty of intuition—a faculty which allows us to directly grasp reasonable priors and moral facts. This is, in my judgment, the best way for the naturalist to go. Unfortunately, such a faculty is quite unexpected on naturalism while much more expected on theism. The existence of such a faculty would, therefore, be strong evidence for theism.
There have been other ways that people have pressed the moral knowledge argument. One way I’ve presented it in the past is as follows (NR is the notion that one has a direct faculty of intuition that allows one to “see” the facts about morality and reasonable priors):
1. For A to give you a reason to believe B, A must be more likely if B is the case than if B is not the case.
2. If NR is false, then the non-natural facts being the case does not make it any more likely that it would appear that there are non-natural facts than if they weren’t the case.
3. Therefore, if NR is false, then the fact that there appear to be non-natural facts gives us no reason to believe that there are non-natural facts.
4. If the fact that there appear to be non-natural facts gives us no reason to believe that there are non-natural facts, then we have no reason to believe that there are non-natural facts.
5. But we do have reason to believe that there are non-natural facts.
6. So NR is true.
In short, if atheism is true, then the odds that we’d intuit that torture is wrong are no higher if it really is wrong than if it’s not. But if this is so, then our intuition that torture is wrong doesn’t give us evidence that torture is wrong.
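Premise 1 here is essentially the Bayesian likelihood principle: evidence A supports hypothesis B exactly when A is likelier given B than given not-B. A minimal sketch of the arithmetic (the function and the sample numbers are mine, purely for illustration):

```python
def posterior_odds(prior_odds: float, p_a_given_b: float, p_a_given_not_b: float) -> float:
    """Posterior odds of B after learning A: prior odds times the likelihood ratio."""
    return prior_odds * (p_a_given_b / p_a_given_not_b)

# If the wrongness of torture makes our intuition no likelier (likelihood ratio 1),
# the intuition moves the odds nowhere:
print(posterior_odds(1.0, 0.9, 0.9))  # 1.0 -- no evidence either way

# Only if the intuition is likelier given the moral fact does it count as evidence:
print(posterior_odds(1.0, 0.9, 0.3))  # > 1 -- the odds go up
```

A likelihood ratio of exactly 1 is what the argument claims naturalism yields for moral intuitions: the posterior just is the prior, so the intuition is evidentially inert.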
A second way naturalism might vitiate moral knowledge—pointed out by Tomas Bogardus—relates to peer disagreement. If naturalism is true, then there could have been creatures with equally rational faculties who had different moral beliefs from our own. Perhaps if snakes had evolved powerful intellects, they would have thought harming others for trivial reasons was right. But it seems we have no basis, on naturalism, for thinking we’re more likely to be right than the intelligent snakes! Thus, naturalism once again threatens our moral knowledge.
A final argument is one from analogy. Moral knowledge, on naturalism, seems analogous to the following case, presented by Crummett and Swenson, inspired by Korman and Locke:
On the basis of clear and distinct intuitions, Neora believes in an all powerful deity. Later, Agent Smith convinces her that she is part of a computer simulation. He tells her that the designers had a terrible time building a simulation inhabited by conscious cognizers but that—through a great deal of trial and error—they found that they could achieve this result only by rendering the inhabitants strongly disposed to believe in an all-powerful deity. Without such beliefs, the simulations would break down before they even got going. Neora believes everything he tells her. And she believes that the deity (if it does exist) had nothing to do with her religious intuitions and associated beliefs. Despite believing all this, she doesn’t abandon her belief in an all-powerful deity.
On naturalism, it seems the situation concerning moral knowledge is pretty similar to the above case! Because Neora’s beliefs are based on an intuition that she’d have had even if there were not an all powerful deity, she loses justification for thinking there is such a deity.
Note additionally that even if the naturalist can explain how we come to have moral knowledge in this dismal epistemological state, they are still left without an adequate explanation of why, by chance, so many of our moral beliefs are accurate.
Several objections can be given against the moral knowledge argument. None, I shall argue, is successful.
A first objection, raised by my friend Silas: if the argument is right, then we don’t have moral knowledge. But you can’t start by assuming from the outset that you have moral knowledge. Rather, you see if there’s good reason to think there is moral knowledge. If the argument succeeds, then it simply shows that what we atheists previously believed to be the basis for moral knowledge is not a sound basis for it. Silas gives the following analogy:
I know that the time is *checks watch* 12:58, and I’m very confident about this. But a clock-skeptic now comes up to me and tells me that my watch actually doesn’t have any batteries and is stopped. He knows this because the store I bought it from sells watches without batteries already in them (for some reason), and I never put one in myself. I know this fact about the store, and I also remember pretty distinctly never having put batteries in.
But luckily I see the skeptic’s challenge for what it is: a great argument for watch-God. You see, it’s super obvious that the time is… well, by now it’s probably 12:59. Anyways, it’s super obvious that this is the time—in fact I know this is the time. And you see, if the clock-skeptic is right, that would mean that my temporal beliefs were not properly tracking the time, and I would not be justified in thinking it’s 12:59. If watch-theism is true, on the other hand, watch-God would want me to have correct temporal beliefs, meaning he would have ordained it that my watch somehow did have a battery, or that I had correct temporal intuitions, or whatever. This means that my temporal knowledge gives me strong evidence in favor of watch-theism.
Moral intuitions, Silas claims, are like gauging time from the watch. If you lose justification for thinking the watch is accurate, then you should no longer treat the time as a decisively established fact.
I have two worries about this argument. First, I think just by reflecting, we can directly know that our beliefs are based on truth. When I reflect on excruciating agony, for instance, I can see that I come to believe it is bad because it is bad.
Second, we have direct justification for believing in the falsity of skepticism, as Silas admits. But the moral knowledge argument undermines inductive knowledge, not just moral knowledge. Thus, simply biting the bullet is not acceptable.
As an analogy, if a theory of physics told us that nearly everyone is a Boltzmann brain—thus removing our justification for thinking that we’re even justified in trusting the theory of physics—that would be a major defect in the theory.
A second objection: perhaps the naturalist can still hold that our moral beliefs are explained by the moral facts. This is especially true if they’re a moral naturalist.
Reply: no, on naturalism the complete causal story of how we came to have our moral beliefs is given to us by evolution. There is simply no need for invoking the moral facts themselves. And it’s even harder to see how our justifiable beliefs in priors are explained, on naturalism, by the justifiability of the priors themselves.
Here’s one way to see what’s needed for explanation of the relevant sort. For A to explain B, in the sense required, someone aware of A would have to regard B as likelier than they would if they weren’t aware of A. There being a table explains why you see it, because the odds someone would assign to your seeing a table go up after learning there was a table. But on naturalism, one wouldn’t think we were any likelier to believe torture is wrong after learning that it really is wrong!
A third objection: perhaps all that’s needed for knowledge is a generally accurate faculty. If some process usually gives you accurate beliefs, you should trust it when it tells you P, even if P doesn’t explain why it tells you that.
Reply: no, this is false. If your eyesight is generally good, but you know you’re hallucinating a table, then that no longer gives you justification for thinking there’s a table. The situation is similar on moral naturalism.
6 Another way of precisifying the intuition (warning: horrendously confusing)
There are lots of cases where it’s obvious that an experience must be explained by what it’s about for it to justify belief in what it’s about. It’s extremely clear that if you knew that you were hallucinating a table, your seeing a table wouldn’t give you justification for thinking there was a table. But making this intuition precise—and avoiding counterexamples—is tricky. For this reason, I’ll present a second formulation of the core intuition. Strap in, this one is complicated! You can skip this section if you want!
Let a component of reasoning be an experience providing non-inferential justification, a pattern non-inferentially connecting experiences to beliefs, or a belief.
Define the content of a component as follows: in the case of a belief, the content of the belief; in the case of a pattern connecting experiences non-inferentially to beliefs, the legitimacy of the connective pattern; and in the case of an experience that justifies a belief, the belief justified by the experience.
The principle: every component of reasoning must either be explained by the truth of its content or follow from a chain of components of reasoning that ultimately bottoms out in components that are explained by their contents.
To simplify, when one reasons, there are a variety of different components of their reasoning. There are experiences they have—intuitions, things they’ve seen, things they remember. Additionally, there are patterns they use to connect their experiences to beliefs. For instance, a person might see a pear, and this might generate in them a belief that there is a pear. These are each referred to as components of reasoning.
These components of reasoning have what might be referred to as contents. These are what, in some sense, they are about. The content of a belief is simply what the belief is about. The content of my belief that Paris is the capital of France is the fact that Paris is the capital of France.
The content of an experience is what it’s an experience of. If I see a pear, the contents of that experience is the pear itself.
Lastly, the content of a pattern connecting experiences to beliefs is simply the legitimacy of that pattern. For instance, if I infer that there’s a pear on the basis of seeing it, there is a connection between my seeing the pear and my belief that there is a pear. The content of that connection is the legitimacy of thinking there’s a pear on the basis of seeing it.
This principle claims that when one reasons correctly, each component of one’s reasoning ultimately bottoms out in that which is explained by its contents. This sounds rather abstract, so perhaps an example would be helpful.
Suppose, to return to the earlier example, I believe that there’s a pear on the basis of seeing it. This belief follows from a combination of an experience (seeing the pear) and a pattern connecting experiences to beliefs (that upon seeing a pear, one should think there is a pear). Let us see, therefore, whether each of these components bottoms out in explanation in terms of its contents.
With the pear itself, the explanation is quite straightforward. I see a pear because there is one! The appearance has the pear as its content, and it is explained by the pear. So far so good!
The remaining question is, therefore, whether the pattern connecting the appearance of the pear to belief in the pear bottoms out in things explained by their truth. Well, it seems that the pattern can be explained by a deeper fact—that vision is generally reliable. This belief is held because it’s true. Thus, the pattern of reasoning is legitimate—inferring that there’s a pear on the basis of seeing it can be justified by appeal to something one believes because it is real: the accuracy of vision.
Consider a more tricky example: that of my belief that the sun will rise tomorrow. Clearly this is something I know. Yet how is it to be justified?
Well, this belief bottoms out in facts explained by their contents. Assume that I believe induction is reliable because it really is. If not—if my belief in the reliability of induction isn’t explained by the truth of induction, and neither does it follow from other beliefs that I believe because they’re true—then it would seem that I should not trust induction.
Assume additionally that I have accurate memories of the past. In this case, if we combine my accurate memories of the sun rising in previous days with the fact that the future is likely to resemble the past, then we have a justification for thinking the sun will rise tomorrow.
However, this makes it quite tricky to see how one is justified in holding moral beliefs. The reason we have our moral beliefs, on the naturalist picture, is not because our moral beliefs are true. Neither can our moral beliefs be inferred from other things believed because of their truth. Thus, the above principle rules out moral knowledge.
Now, in order to see how naturalism vitiates trust in induction, we will have to expand the principle similarly to how we expanded the other principle.
The principle: every component of reasoning must either be better explained by the truth of its content than by alternative contents, or follow from a chain of components of reasoning that ultimately bottoms out in components that are explained by their contents and are not equally well explained by alternative contents.
In short, to have accurate reasoning, it’s not enough for each of the contents to be explained by the facts they’re about. It must be that there’s not some equally viable content that might explain it just as well. In the case of induction, because the theory that induction only worked in the past but won’t work in the future explains our inductive intuitions just as well as the theory that induction works universally, if the above principle is true, standard naturalism undermines induction.
One final way of adding to the principle: one might think that you could be justified in thinking there’s a table based on seeing it, even if you’re hallucinating, so long as you don’t know you’re hallucinating. To avoid this thorny complication, we can revise the principle to be:
One isn’t justified in trusting some pattern of reasoning if one knows that some components of the reasoning are neither better explained by the truth of their contents than by other potential contents, nor follow from a chain of components of reasoning that ultimately bottoms out in components that are explained by their contents and are not equally well explained by alternative contents.
7 Conclusion
In this essay, I’ve defended the moral knowledge argument. I regard this argument as having considerable force. Theists, in my judgment, have a far better time explaining how we know moral facts—as well as various other facts about the world, like that induction is reliable—than naturalists do. While naturalists can explain our possession of the relevant kind of knowledge by invoking a faculty that allows us to directly grasp non-natural facts, such a faculty is quite mysterious on naturalism. Moral knowledge is one of many features of the world that fits better on a theistic worldview than a naturalistic worldview.