0 An Introduction to Moral Realism
There are vast numbers of superficially clever arguments one can generate for crazy, skeptical conclusions: that the external world doesn’t exist, that we can’t know anything, that memory isn’t reliable, and so on. These arguments, while interesting and no doubt useful if one ever comes across a real honest-to-god skeptic — a rather rare breed — don’t have much significance; skepticism exists as little more than a curiosity in the mind of the modern philosopher, something which takes real thought to refute, yet does not merit being taken seriously as a set of views.
Yet there’s one form of extreme skepticism with actually existing trenchant advocates — real advocates who fill philosophy departments, rather than, like the external world or memory skeptic, merely being hypothetical advocates for the devil in philosophy papers. This skeptic is one who doubts that there are objective moral truths — moral facts made true not by the beliefs of any person.
Moral realism is the claim that there are true moral facts — ones that are not made true by anyone’s attitudes towards them. So if you think that the sentence that will follow this one is true and would be so even if no one else thought it was, you’re a moral realist. It’s typically wrong to torture infants for fun!
Now, no doubt, to the moral anti-realist, my remarks sound harsh. How dare I compare them to the person who doubts anything can really be known?
“I refute the skeptic about the external world thus — ‘here’s one hand; here’s another hand,’ now show me the moral facts.”
— A hypothetical moral anti-realist
Well, in this article, I’ll explain why moral anti-realism is so implausible. One can, of course, always accept the anti-realist conclusion — it’s always possible to bite the bullet on crazy conclusions. Yet moral anti-realism, much like anti-realism about the external world, is wildly implausible in what it says about the world.
We do not live in a bleak world, devoid of meaning and value. Our world is packed with value, positively buzzing with it — at least, if you know where to look, and don’t fall prey to crazy skepticism. Unfortunately, the flip side is that the world is also packed full of disvalue — horrific, agonizing, pointless, meaningless suffering, suffering that flips the otherwise positive value of the hedonic register. That suffering must be eliminated as soon as possible; it is a moral emergency every second that it goes on.
In this article, I will defend moral realism. I will defend the claim that it is, in fact, wrong to torture infants for fun — even if everyone disagreed. It’s no surprise that moral realism is accepted by a majority of philosophers, though it’s certainly far from a universal view.
1 A Point About Methodology
Seeming is believing — as I hope to argue. Or, more specifically: if X seems to be the case to you, that, in general, gives you some reason to think X is, in fact, the case. I’ve already addressed this in a previous article, so I’ll quote that.
Absent relying on what seems to be the case after careful reflection, we could know nothing, as (Huemer, 2007) has argued persuasively. Several cases show that intuitions are indispensable towards having any knowledge and doing any productive moral reasoning.
Any argument against intuitions is one that we’d only accept if it seems true after reflection, which once again relies on seemings. Thus, rejection of intuitions is self-defeating, because we wouldn’t accept it if its premises didn’t seem true.
Any time we consider any view which has some arguments both for and against it, we can only rely on our seemings to conclude which argument is stronger. For example, when deciding whether or not god exists, most would be willing to grant that there is some evidence on both sides. The probability of existence on theism is higher than on atheism, for example, because theism entails that something exists, while the probability of god being hidden is higher on atheism, because the probability of god revealing himself on atheism is zero. Thus, there are arguments on both sides, so any time we evaluate whether theism is true, we must compare the strength of the evidence on both sides. This will require reliance on seemings. The same broad principle is true for any issue we evaluate, be it religious, philosophical, or political.
Consider a series of things we take to be true but cannot verify: that the laws of logic would hold in a parallel universe; that things can’t have a color without a shape; that the laws of physics could have been different; that implicit in any moral claim that x is bad is the counterfactual claim that had x not occurred, things would have been better; and that, assuming space is not curved, the shortest distance between any two points is a straight line. We can’t verify those claims directly, but we’re justified in believing them because they seem true — we can intuitively grasp that they are justified.
The basic axioms of reasoning also offer an illustrative example. We are justified in accepting induction, the reliability of the external world, the universality of the laws of logic, the axioms of mathematics, and the basic reliability of our memory, even if we haven’t worked out rigorous philosophical justifications for those things. This is because they seem true.
Our starting intuitions are not always perfect, and they can be overcome by other things that seem true.
Maybe you’re not a phenomenal conservative. Perhaps you think that in some cases, intuitions don’t serve as justification. However, we should all accept the following more modest principle.
Wise Phenomenal Conservatism: If P seems true upon careful reflection from competent observers, that gives us some prima facie reason to believe P.
This allows us to sidestep the main objections to phenomenal conservatism listed here.
Responding to the crazy appearances objection
Some critics have worried that phenomenal conservatism commits us to saying that all sorts of crazy propositions could be non-inferentially justified. Suppose that when I see a certain walnut tree, it just seems to me that the tree was planted on April 24, 1914 (this example is from Markie 2005, p. 357). This seeming comes completely out of the blue, unrelated to anything else about my experience – there is no date-of-planting sign on the tree, for example; I am just suffering from a brain malfunction. If PC is true, then as long as I have no reason to doubt my experience, I have some justification for believing that the tree was planted on that date.
More ominously, suppose that it just seems to me that a certain religion is true, and that I should kill anyone who does not subscribe to the one true religion. I have no evidence either for or against these propositions other than that they just seem true to me (this example is from Tooley 2013, section 5.1.2). If PC is true, then I would be justified (to some degree) in thinking that I should kill everyone who fails to subscribe to the “true” religion. And perhaps I would then be morally justified in actually trying to kill these “infidels” (as Littlejohn [2011] worries).
But in the case of a person to whom a certain religion seems true, this is no doubt not after careful, prolonged rational reflection in which they consider all of the facts. If a very rational person considered all the facts and the religion still seemed true, it seems they would have prima facie justification for thinking it is true. This objection is also defused by Huemer’s responses to it.
Phenomenal conservatives are likely to bravely embrace the possibility of justified beliefs in “crazy” (to us) propositions, while adding a few comments to reduce the shock of doing so. To begin with, any actual person with anything like normal background knowledge and experience would in fact have defeaters for the beliefs mentioned in these examples (people can’t normally tell when a tree was planted by looking at it; there are many conflicting religions; religious beliefs tend to be determined by one’s upbringing; and so on).
We could try to imagine cases in which the subjects had no such background information. This, however, would render the scenarios even more strange than they already are. And this is a problem for two reasons. First, it is very difficult to vividly imagine these scenarios. Markie’s walnut tree scenario is particularly hard to imagine – what is it like to have an experience of a tree’s seeming to have been planted on April 24, 1914? Is it even possible for a human being to have such an experience? The difficulty of vividly imagining a scenario should undermine our confidence in any reported intuitions about that scenario.
The second problem is that our intuitions about strange scenarios may be influenced by what we reasonably believe about superficially similar but more realistic scenarios. We are particularly unlikely to have reliable intuitions about a scenario S when (i) we never encounter or think about S in normal life, (ii) S is superficially similar to another scenario, S’, which we encounter or think about quite a bit, and (iii) the correct judgment about S’ is different from the correct judgment about S. For instance, in the actual world, people who think they should kill infidels are highly irrational in general and extremely unjustified in that belief in particular. It is not hard to see how this would incline us to say that the characters in Tooley’s and Littlejohn’s examples are also irrational. That is, even if PC were true, it seems likely that a fair number of people would report the intuition that the hypothetical religious fanatics are unjustified.
A further observation relevant to the religious example is that the practical consequences of a belief may impact the degree of epistemic justification that one needs in order to be justified in acting on the belief, such that a belief with extremely serious practical consequences may call for a higher degree of justification and a stronger effort at investigation than would be the case for a belief with less serious consequences. PC only speaks of one’s having some justification for believing P; it does not entail that this is a sufficient degree of justification for taking action based on P.
There’s certainly much more to be said on this topic, only a minuscule portion of which I can discuss in this article. However, in philosophy, it’s pretty widely accepted that what seems to be the case probably is the case, all else equal, in at least most cases. One can accept epistemic particularism, for example, and still accept this modest requirement.
Responding to the alleged defeaters in the moral domain
Walter Sinnott-Armstrong argues that we need extra justification in some sorts of cases. If a person held a belief in a proposition purely as a result of self-interested motivated reasoning, their seeming wouldn’t justify it. Thus, he argues for the following constraint on when a belief garners prima facie justification.
Principle 1: confirmation is needed for a believer to be justified when the believer is partial.
However, as Ballantyne and Thurow note, this isn’t a blanket defeater for our moral beliefs; rather, it defeats only the subset of our moral beliefs that are likely to be caused in some way by partial considerations.
So the question is whether the specific thought experiments I’ll appeal to in defending moral realism are plausibly influenced by partiality. We’ll investigate this below, case by case.
However, one thing is worth noting. Utilitarianism has a plausible route to avoiding these objections. Utilitarianism is frequently chided for being too demanding, for being too impartial — so partiality is an unlikely source of utilitarian intuitions. If anything, partiality gives us reason to revise the intuitions supporting utilitarianism’s rivals, not utilitarianism itself.
This principle is also too broad. Imagine that all people had self-interested reasons to believe in core logical or mathematical facts. This wouldn’t mean we should reject modus ponens or the core mathematical axioms. Perhaps the self-interest would somewhat undercut the intuition, but it wouldn’t be enough to eliminate its force entirely.
This is one worry I have with Sinnott-Armstrong’s approach. He seems much too willing to divide intuitions into two distinct classes: justified and unjustified. However, justification comes in degrees. Declaring an intuition flat out justified or flat out unjustified seems to be a mistake — just like declaring a food hot or cold would be unwise, if one were attempting to make precise judgments about the average temperature of a room.
Sinnott-Armstrong’s next constraint is the following.
Principle 2: confirmation is needed for a believer to be justified when people disagree with no independent reason to prefer one belief or believer to the other.
Several points are worth making. First, the intuitions I’m appealing to are very widespread — not many people lack the intuitions to which I’ll appeal. Perhaps some people end up reflectively rejecting those intuitions, but people tend to have the intuitions. Thus, we need not revise these intuitions in light of those who disagree. I’ll defend this more later.
Second, given that most philosophers are moral realists, it seems that most relevant domain experts find the intuitions appealing. If they didn’t, they almost surely wouldn’t be moral realists.
Principle 3: confirmation is needed for a believer to be justified when the believer is emotional in a way that clouds judgment.
All of our decisions are clouded by emotion to some degree. That does not mean that we should abandon all of our judgments. Again, rather than seeing things as a yes/no question of whether or not our intuitions are justified, it makes far more sense to see justification as coming in degrees. The more emotional we are, the less we should trust our intuitions. However, we shouldn’t throw out all of our intuitions based merely on our omnipresent emotions.
Principle 4: confirmation is needed for a believer to be justified when the circumstances are conducive to illusion.
With my recurring caveat that justification comes in degrees, this seems mostly correct.
Principle 5: confirmation is needed for a believer to be justified when the belief arises from an unreliable or disreputable source
The same caveat applies here.
2 Some Intuitions That Support Moral Realism
The most commonly cited objection to moral anti-realism in the literature is that it’s unintuitive. There is a vast wealth of scenarios in which anti-realism ends up being very counterintuitive. We’ll divide things up more specifically later; each particular version of anti-realism has special cases in which it delivers exceptionally unintuitive results. Here are two such cases.
This first case is the thing that convinced me of moral realism originally. Consider the world as it was at the time of the dinosaurs before anyone had any moral beliefs. Think about scenarios in which dinosaurs experienced immense agony, having their throats ripped out by other dinosaurs. It seems really, really obvious that that was bad.
The thing that’s bad about having one’s throat ripped out has nothing to do with the opinions of moral observers. Rather, it has to do with the actual badness of having one’s throat ripped out by a T-Rex. When we think about what’s bad about pain, anti-realists get the order of explanation wrong. We think that pain is bad because it is — it’s not bad merely because we think it is.
The second broad, general case is of the following variety. Take any action — torturing infants for fun is a good example because pretty much everyone agrees that it’s the type of thing you generally shouldn’t do. It really seems like the following sentence is true:
“It’s wrong to torture infants for fun, and it would be wrong to do so even if everyone thought it wasn’t wrong.”
Similarly, if there were a society that thought that they were religiously commanded to peck out the eyes of infants, they would be doing something really wrong. This would be so even if every single person in that society thought it wasn’t wrong.
Everyone could think it’s okay to torture animals in factory farms, and it would still be horrifically immoral.
This becomes especially clear when we consider moral questions that we’re not sure about. When we try to decide whether abortion is wrong, or eating meat, we’re trying to discover, not invent, the answer. If the answer were just whatever we or someone else said it was — or if there were no answer — then it would make no sense to deliberate about whether or not it was wrong.
Whenever you argue about morality, it seems you are assuming that there is some right answer — and that answer isn’t made true by anyone’s attitude towards it.
Let’s see whether these results can be debunked as a result of biasing factors.
Principle 1: confirmation is needed for a believer to be justified when the believer is partial.
I’m not particularly partial about whether the dinosaurs’ suffering was bad. It has little emotional impact on me, and I am not a dinosaur. Additionally, I’m not very partial on the question of whether torturing infants would be wrong even if everyone thought it wasn’t wrong — this will never affect me, and the moral facts themselves are causally inert. Thus, this judgment can’t be debunked by partiality considerations.
Principle 2: confirmation is needed for a believer to be justified when people disagree with no independent reason to prefer one belief or believer to the other.
Very few people disagree, at least based on initial intuitions, with the judgments I’ve laid out. I did a small poll of people on Twitter, asking the question of whether it would be wrong to torture infants for fun, and would be so even if no one thought it was. So far, 82.6% of people have been in agreement.
There are some people who disagree. However, some disagreement is almost inevitable. If disagreement made us abandon our beliefs, we’d abandon our political beliefs, because there’s far more disagreement about political claims than about the claim that it’s typically wrong to torture infants for fun.
Also, those who disagree tend to have views that I think are factually mistaken on independent grounds. Anti-realists seem more likely to adopt other claims that I find implausible. Additionally, they tend to make the error of not placing significant weight on moral intuitions. Thus, I think we have independent reasons to prefer the belief in realism.
It also seems like a lot of the anti-realists who don’t find the sentence “it’s typically wrong to torture infants for fun and would be so even if everyone disagreed” intuitive tend to be confused about what moral statements mean — about what it means to say that things are wrong. I, on the other hand, like most moral realists, and indeed many anti-realists, understand what the sentence means. Thus, I have direct acquaintance with the coherence of moral sentences — I directly understand what it means to say that things are bad or wrong.
If it turned out that a lot of the skeptics of quantum mechanics just turned out to not understand the theory, that would give us good reason to discount their views. This seems to be pretty much the situation in the moral domain.
Additionally, given that most philosophers are moral realists, we have good reason to find it the more intuitively plausible view. If the consensus of people who have carefully studied an issue tends to support moral realism, this gives us good reason to think that moral realism is true. The wisdom of the crowds tends to be greater than that of any individual.
Principle 3: confirmation is needed for a believer to be justified when the believer is emotional in a way that clouds judgment.
I’m really not particularly emotional about the notion that dinosaur suffering was bad. Nor do I have a particularly strong emotional reaction to some types of wrong actions, say tax fraud. If there were a type of tax fraud that decreased aggregate utility, I’d think it was wrong, even if everyone thought it wasn’t. I have no emotional attachment to that belief.
Additionally, we have good evidence from the dual process literature that careful, prolonged reflection tends to be what causes utilitarian beliefs — it’s the unreliable emotional reactions that causes our non-utilitarian beliefs. Thus, at best, this would give a reason to revise our non-utilitarian beliefs. I’ll quote an article I wrote on the subject.
One 2012 study finds that asking people to think more makes them more utilitarian. When they have less time to think, they conversely become less utilitarian. If reasoning led to utilitarianism, this is what we’d expect: more time to reason would make people proportionately more utilitarian.
A 2021 study, compiling the largest available dataset, concluded across 8 different studies that greater reasoning ability is correlated with being more utilitarian. The length of the dorsolateral prefrontal cortex correlates with general reasoning ability. Its length also correlates with being more utilitarian. Coincidence? I think not.
Yet another study finds that being under greater cognitive pressure makes people less utilitarian. This is exactly what we’d predict. Much like being under cognitive strain makes people less likely to solve math problems correctly, it also makes them less likely to solve moral questions correctly, where “correctly” means in the utilitarian way.
Yet the data doesn’t stop there. A 2014 study found a few interesting things. It looked at patients with damaged VMPCs — the ventromedial prefrontal cortex, a brain region responsible for lots of emotional judgments. It concluded that they were far more utilitarian than the general population. This is exactly what we’d predict if utilitarianism were caused by good reasoning and careful reflection, and alternative theories were caused by emotions. Inducing positive emotions in people conversely makes them more utilitarian — which is what we’d expect if negative emotions were driving people not to accept utilitarian results.
Additionally, there are lots of moral judgments that seem to be backed by no emotional reaction. For example, I accept the repugnant conclusion, though I have no emotional attachment to doing so.
Principle 4: confirmation is needed for a believer to be justified when the circumstances are conducive to illusion.
We have no reason to think that beliefs in the moral domain — particularly ones that reach reflective equilibrium — are particularly susceptible to illusion. This is especially true of the consequentialist ones.
Principle 5: confirmation is needed for a believer to be justified when the belief arises from an unreliable or disreputable source
This isn’t true of moral belief. The belief that dinosaur suffering was bad, even before any person had ever formed that thought, was formed through careful reflection on the nature of their suffering — it wasn’t on the basis of anything else.
What if the folk think differently
I’m supremely confident that if you asked the folk whether it would be typically wrong to torture infants for fun, even if no one thought it was, they’d tend to say yes. Additionally, it turns out that The Folk Probably do Think What you Think They Think.
Also, I trust the reflective judgment of myself and qualified philosophers significantly more than I trust the folk. Sorry folk!
Classifying anti-realists
Given that, as previously discussed, moral realism is the view that there are true moral statements that are true independently of people’s beliefs about them, there are three ways to deny it.
Non-cognitivism — this says that moral statements are not truth-apt; they’re neither true nor false, because they’re not in the business of being true or false. There are lots of sentences that are not truth-apt — “shut the door,” for example, is neither true nor false.
Error theory — this says that moral statements, much like statements about witches, try to state facts, but they are systematically false. For example, if a person says ‘witches can fly and cast spells’ they think they’re saying something true, but they falsely believe in a vast category of things that aren’t real, namely, witches. Thus, all positive statements about morality, much like all positive statements about witches according to error theory, turn out to be false.
Subjectivism — this says that the truth of moral statements hinges on people’s attitudes towards them. There are different versions of subjectivism — they’re all implausible.
It turns out that each of these views has especially implausible results, ones not shared by the other two.
Non-cognitivism
Non-cognitivists think that moral statements are not truth apt. A non-cognitivist might think that saying murder is wrong really means boo! murder, or don’t murder! I’ve already explained why I think non-cognitivism is super implausible, which I’ll quote here.
On non-cognitivism, the statement

“It’s wrong to torture infants for fun, most of the time”

is neither true nor false.
And the argument

“If it’s wrong to torture infants, then I shouldn’t torture infants.

It’s wrong to torture infants.

Therefore, I shouldn’t torture infants.”

is incoherent. It’s like saying: if shut the door, then open the window; shut the door; therefore, open the window.
Additionally, as Huemer says on pages 20-21, describing the reasons to think moral statements are propositional:
(a) Evaluative statements take the form of declarative sentences, rather than, say, imperatives, questions, or interjections. 'Pleasure is good' has the same grammatical form as 'Weasels are mammals'. Sentences of this form are normally used to make factual assertions. In contrast, the paradigms of non-cognitive utterances, such as 'Hurray for x' and 'Pursue x', are not declarative sentences.
(b) Moral predicates can be transformed into abstract nouns, suggesting that they are intended to refer to properties; we talk about 'goodness', 'rightness', and so on, as in 'I am not questioning the act's prudence, but its rightness'.
(c) We ascribe to evaluations the same sort of properties as other propositions. You can say, 'It is true that I have done some wrong things in the past', 'It is false that contraception is murder', and 'It is possible that abortion is wrong'. 'True', 'false', and 'possible' are predicates that we apply only to propositions. No one would say, 'It is true that ouch', 'It is false that shut the door', or 'It is possible that hurray'.
(d) All the propositional attitude verbs can be prefixed to evaluative statements. We can say, 'Jon believes that the war was just', 'I hope I did the right thing', 'I wish we had a better President', and 'I wonder whether I did the right thing'. In contrast, no one would say, 'Jon believes that ouch', 'I hope that hurray for the Broncos', 'I wish that shut the door', or 'I wonder whether please pass the salt'. The obvious explanation is that such mental states as believing, hoping, wishing, and wondering are by their nature propositional: To hope is to hope that something is the case, to wonder is to wonder whether something is the case, and so on. That is why one cannot hope that one did the right thing unless there is a proposition (something that might be the case) corresponding to the expression 'one did the right thing'.
(e) Evaluative statements can be transformed into yes/no questions: One can assert 'Cinnamon ice cream is good', but one can also ask, 'Is cinnamon ice cream good?' No analogous questions can be formed from imperatives or emotional expressions: 'Shut the door?' and 'Hurray for the Broncos?' lack clear meaning. The obvious explanation is that a yes/no question requires a proposition; it asks whether something is the case. A prescriptivist non-cognitivist might interpret some evaluative yes/no questions as requests for instruction, as in 'Should I shut off the oven now?' But other questions would defy interpretation along these lines, including evaluative questions about other people's behavior or about the past: 'Was it wrong for Emperor Nero to kill Agrippina?' is not a request for instruction.
(f) One can issue imperatives and emotional expressions directed at things that are characterized morally. If non-cognitivism is true, what do these mean: 'Do the right thing.' 'Hurray for virtue!' Even more puzzlingly for the non-cognitivist, you can imagine appropriate contexts for such remarks as, 'We shouldn't be doing this, but I don't care; let's do it anyway'. This is perfectly intelligible, but it would be unintelligible if 'We shouldn't be doing this' either expressed an aversive emotion towards the proposed action or issued an imperative not to do it.
(g) In some sentences, evaluative terms appear without the speaker's either endorsing or impugning anything, yet the terms are used in their normal senses. This is known as the Frege-Geach problem and forms the basis for perhaps the best-known objection to noncognitivism.
Error Theory
Error theory says that all positive moral statements are false. Error theory is itself best described as in error, because of how sharply it diverges from the truth. It runs into a problem: there are obviously some true moral statements. Consider the following six examples.
What the icebox killers did was wrong.
The holocaust was immoral.
Torturing infants for fun is typically wrong.
Burning people at the stake is wrong.
It is immoral to cause innocent people to experience infinite torture.
Pleasure is better than pain.
The error theorist has to say that the meaning of those terms is exactly the same as what the realist thinks. The error theorist has to think that when people say the holocaust is bad, they’re actually making a mistake. However, this is terribly implausible. It really, really doesn’t seem like the claim ‘the holocaust is bad’ is mistaken.
Any argument for error theory will be way less intuitive than the notion that the Holocaust was, in fact, bad.
Let’s test these intuitions.
Principle 1: confirmation is needed for a believer to be justified when the believer is partial.
I’m not really that partial about many things I take to be bad. I think malaria is bad, despite not being personally affected by malaria. Similarly, I am in no way harmed by most of history’s evils — including hypothetical evils that have never been experienced, but that I recognize would be bad if experienced.
On top of this, partiality may be a reason to rethink an intuition somewhat, but it’s certainly not a reason to simply throw out any intuition stemming from a partial source.
Principle 2: confirmation is needed for a believer to be justified when people disagree with no independent reason to prefer one belief or believer to the other.
Very few people deny that the claim that it’s wrong to cause infinite torture is intuitive.
Principle 3: confirmation is needed for a believer to be justified when the believer is emotional in a way that clouds judgment.
Being emotional does reduce the probative force of intuitions. However, it does not suffice to debunk an intuition — we cannot merely disregard intuitions because there’s some emotional impact. But also, I’m not particularly emotional when I consider suffering in the abstract. It still seems clearly bad.
The responses to four and five from above still apply.
Subjectivism
Subjectivism holds that moral facts depend on some people’s beliefs or desires. This could be the desires of a culture — if so, it’s called cultural relativism.
Cultural Relativism: Crazy, Illogical, and Accepted by no One Except Philosophically Illiterate Gender Studies Majors
Cultural relativism is — as the sub-header suggested — something that I find rather implausible. There are no serious philosophers that I know of who defend cultural relativism. One is a cultural relativist if one thinks that something is right just in case one’s society thinks it is right.
Problem: it’s obviously false. Consider a few examples.
Imagine the Nazis had convinced everyone that the Holocaust was good. This clearly would not make it good.
Imagine there was a society that was in universal agreement that all babies should be tortured to death in a maximally horrible and brutal way. That wouldn’t make it good.
People often accept cultural relativism because they’re vaguely confused and want to be tolerant. But if cultural relativism is true, then tolerance is only good if supported by the broader culture. On cultural relativism, disagreeing with the norms of one’s broader culture is incoherent. Saying “my culture is acting wrongly” is just a contradiction in terms. Yet that’s clearly absurd.
This also means that if two different cultures argue about which norm is correct, they’re arguing about nothing. If norms are relative to a culture then there’s no fact of the matter about which culture is correct. But that’s absurd; the Nazis were worse than non-Nazis.
To quote my previous article on the subject:
If morality is determined by society, the following statements are false:
“My society is immoral when it tortures infants for fun.”
“Nazi Germany acted immorally.”
“Some societal practices are immoral.”
“When society chops off the fingers and toes of small children based on their skin color, that’s immoral.”
“It’s immoral for society to boil children in pots.”
Individual Subjectivism
Individual subjectivism says that morality is determined by the attitudes of the speaker. The statement “murder is wrong” means “I disapprove of murder.” There are, of course, more subtle versions, but this is the core idea.
I’ve already given objections in my previous article on the subject.
If morality is determined by the moral system of the speaker, the following claims are true:
“When the Nazi whose ethical system held that the primary ethical obligation was killing Jews said ‘It is moral to kill Jews,’ they were right.”
“When slave owners said ‘the interests of slaves don’t matter,’ they were right.”
“When Caligula said ‘It is good to torture people,’ and did so, he was right.”
“The person who thinks that it’s good to maximize suffering is right when he says ‘it’s moral to set little kids on fire.’”
Additionally, when I say “we should be utilitarians,” and Kant says “we shouldn’t be utilitarians,” we’re not actually disagreeing.
Conclusion of this section
So, I think that the moral conclusions of moral anti-realism are absurd. It holds that wrongness either isn’t real or depends on our desires in some way. But that’s just wrong! It is well and truly wrong to torture infants to death, and it would be so even if no one agreed.
3 Irrational Desires
The fool says in his heart ‘I have future Tuesday indifference.’
The argument I intend to lay out is relatively simple in its essence, relatively drab, and yet quite forceful.
1 If moral realism is not true, then we don’t have irrational desires
2 We do have irrational desires
Therefore, moral realism is true
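For readers who like seeing the logical skeleton laid bare: the argument is a straightforward modus tollens, which we can sketch in Lean (the proposition names `Realism` and `IrrationalDesires` are just illustrative labels for the two claims above):

```lean
-- Sketch of the argument's form (modus tollens).
-- `Realism` stands for "moral realism is true";
-- `IrrationalDesires` for "we have irrational desires".
variable (Realism IrrationalDesires : Prop)

-- Premise 1: if realism is false, there are no irrational desires.
-- Premise 2: there are irrational desires.
-- Conclusion: realism is true.
example (p1 : ¬Realism → ¬IrrationalDesires)
    (p2 : IrrationalDesires) : Realism :=
  Classical.byContradiction fun h => p1 h p2
```

So the argument is formally valid; everything turns on the truth of the two premises, which the next sections defend.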
Defending premise 1
Premise 1 seems the most controversial to laypersons, but it is premise 2 that is disputed by philosophical anti-realists. Morality is about what we have reason to do — impartial reason, to be specific. These reasons are not dependent on our desires.
Morality thus describes what reasons we have to do things, unmoored from our desires. When one claims it’s wrong to murder, one means that, even were one to desire to murder another, one shouldn’t do it — one has a reason not to do it, independent of desires.
Thus, the argument for premise one is as follows.
1 If there are desire independent reasons, there are impartial desire independent reasons
2 If there are impartial desire independent reasons, morality is objective
Therefore, morality is objective.
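This nested argument is just a chain of conditionals (a hypothetical syllogism), which is likewise valid; a minimal Lean sketch, with illustrative proposition names of my choosing:

```lean
-- Sketch of the nested argument (hypothetical syllogism).
-- `DIReasons`: there are desire-independent reasons;
-- `ImpartialDIReasons`: there are impartial desire-independent reasons;
-- `ObjectiveMorality`: morality is objective.
variable (DIReasons ImpartialDIReasons ObjectiveMorality : Prop)

example (p1 : DIReasons → ImpartialDIReasons)
    (p2 : ImpartialDIReasons → ObjectiveMorality) :
    DIReasons → ObjectiveMorality :=
  fun h => p2 (p1 h)
```

Again, validity is not in question — what matters is whether the premises hold, which is what the surrounding text argues.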
Premise 2 is true by definition. Premise 1 is trivial — impartial desire-independent reasons are just desire-independent reasons with a requirement of impartiality added in. This impartiality can be achieved by, for example, making decisions from behind the veil of ignorance — or some other similar device.
Thus, if you actually have reasons to have particular desires — to aim at particular things — then morality is objective. Let’s now investigate that assumption.
Defending Premise 2
Premise 2 states that there are, in fact, irrational desires. This premise is obvious enough.
Note here that I use desire in a broad sense. By desire I do not mean what one merely enjoys; that obviously can’t be irrational. My preference for chocolate ice cream over vanilla ice cream clearly cannot be in error. Rather, I use desire broadly to indicate one’s ultimate aims, in light of the things one enjoys. I’ll use desire, broad aims, goals, and ultimate goals interchangeably.
Thus, the question is not whether one who prefers chocolate to vanilla is a fool. Instead, it’s whether someone who prefers chocolate to vanilla but gets vanilla for no reason is acting foolishly.
The anti-realist is in the difficult position of denying one of the most evident facts of the human condition — that we can be fools not merely in how we get what we want but in what we want in the first place.
Consider the following cases.
1 Future Tuesday Indifference2: A person doesn’t care what happens to them on a future Tuesday. When Tuesday rolls around, they care a great deal about what happens to them; they’re just indifferent to happenings on a future Tuesday. This person is given the following gamble — they can either get a pinprick on Monday or endure the fires of hell on Tuesday. If they endure the fires of hell on Tuesday, this will not merely affect what happens this Tuesday — every Tuesday until the sun burns out shall be accompanied by unfathomable misery — the likes of which can’t be imagined, next to which the collective misery of history’s worst atrocities is but a paltry, vanishing scintilla.
They know that when Tuesday rolls around, they will shriek till their vocal cords are destroyed, for the agony is unendurable (their vocal cords will be healed before Wednesday, so they shall only suffer on Tuesday). They shall cry out for death, yet none shall be afforded to them.
Yet they already know this. However, they simply do not care what happens to them on Tuesday. They do not dissociate from their Tuesday self — they think they’re the same person as their Tuesday self. However, they just don’t care what happens to themself on Tuesday.
Now you might be tempted to imagine that they don’t actually mind what happens on Tuesday — after all, they’re indifferent to what happens on Tuesday. This misses the case; they are only indifferent to what happens on future Tuesdays. When Tuesday rolls around, they will fiercely regret their decision. Yet after Tuesday is done, they will be glad that they made the decision — after all, they don’t care what happens on a future Tuesday. We can even stipulate that when it’s Tuesday, they’re hypnotized to believe it’s a Monday, so their suffering feels from the inside exactly and precisely as it would were it experienced on Monday.
This person with indifference to future Tuesdays is clearly making an error. This is not a minor, trivial error. In fact, this is certainly the gravest error in human history — one which inflicts more misery than any other. However, the anti-realist must insist that, not only is it not the greatest error in human history, it isn’t an error at all.
After all, the person is making no factual error — they are perfectly aware that they will suffer on a future Tuesday. On the anti-realist account, where lies their error? They know they will suffer, yet they do not care — the suffering will be on a Tuesday.
Only the moral realist can account for their error — for their irrationality and great foolishness in aiming at unfathomable misery on Tuesday, rather than a pinprick on Monday. On the realist account — or at least the sensible realist account, no doubt some crazy natural law theorists would deny this — we all have reason to avoid future agony. This explains why it would be an error to subject oneself to infinite torture on a Tuesday. The fact that it’s a Tuesday gives one no reason to discount their suffering.
Now the anti-realist could try to avoid this by claiming that a decision is irrational if one will regret it. However, this runs into three problems.
First, if anti-realism is true, then we have no desire-independent reason to do things. It doesn’t matter if we’ll regret them. Thus, regrettably, this criterion fails. Second, by this standard, both getting the pinprick on a single Monday and the hellish torture on Tuesday would be irrational, because the person who experiences them will regret each of them at various points. After all, on all days of the week except Tuesday, they’d regret making the decision to endure a Monday pinprick. Third, even if by stubbornness they never swayed in their verdict, that would in no way change whether they chose rightly.
2 Picking Grass: Suppose a person hates picking grass — they derive no enjoyment from it and it causes them a good deal of suffering. There is no upside to picking grass; they don’t find it meaningful or virtue-building. This person simply has a desire to pick grass. Suppose on top of this that they are terribly allergic to grass — picking it causes them to develop painful ulcers that itch and hurt. However, despite this, and despite never enjoying it, they spend hours a day picking grass.
Is the miserable grass picker really making no error? Could there be a conclusion more obvious than that the person who picks grass all day is acting the fool — that their life is really worse than one whose life is brimming with meaning, happiness, and love?
3 Left Side Indifference: A person is indifferent to suffering that’s on the left side of their body. They still feel suffering on the left side of their body just as vividly and intensely as it would be on the right side of their body. Indeed, we can even imagine that they feel it a hundred times more vividly and intensely — it wouldn’t matter. However, they do not care about the left side suffering.
It induces them to cry out in pain; it is agony, after all. But much like agony that one endures for a greater purpose — the agony one endures on a run, say — they do not think it is actually bad. Thus, this person has a blazing iron burn the left side of their body from head to toe, inflicting profound agony. They cry out in pain as it happens. On the anti-realist account, they’re acting totally rationally. Yet that’s clearly crazy!
4 Four-Year-Old Children: Suppose that — and this is not an implausible assumption — there’s a four-year-old child who doesn’t want to go into a doctor’s office. After all, they really don’t like shots. This child is informed of the relevant facts — if they don’t go into the doctor’s office, they will die a horribly painful death of cancer. You clearly explain this to them so that they’re aware of all the relevant facts. However, the four-year-old still digs in their heels (I hear they tend to do that) and refuses categorically to go into the doctor’s office.
It’s incredibly obvious that the four-year-old is being irrational. Yet they’ve been informed of the relevant facts and are acting in accordance with their desires. So on anti-realism, they’re being totally rational.
5 Cutting: Consider a person who is depressed and cuts themself. When they do it, they desire to cut themself. It’s quite plausible that being informed of all the relevant facts wouldn’t make that desire go away. In this case, it still seems they’re being irrational.
6 Consistent Anorexia: A person desires to be thin even if it brings about their starvation. This brings them no joy. They starve themself to death. It really seems that they’re being irrational.
7 A person has consensual homosexual sex. They then become part of a religious cult. This cult makes no factual mistakes — it doesn’t even believe in a god. However, its members think that homosexual sex is horrifically immoral and that those who have it deserve to suffer, just as a basic moral principle. On the anti-realist account, not only is the person not mistaken, they would be fully rational to endure infinite suffering because they think they deserve it.
8 A person wants to commit suicide and knows all the relevant facts. Their future would be very positive in terms of expected well-being. On anti-realism, it would be rational for them to commit suicide.
9 A person is currently enduring more suffering than anyone ever has in all of human history. However, while this person doesn’t enjoy suffering — they experience it the same way the rest of us do — they have a higher-order indifference to it. While they hate their experience and cry out in agony, they don’t actually want their agony to end; they don’t care on a higher level. On this account, they have no reason to end their agony. But that’s clearly implausible.
10 A person doesn’t care about suffering if it comes from their pancreas. Thus, they’re in horrific misery, but it comes from their pancreas, so they do nothing to prevent it, instead preventing a minuscule amount of non-pancreas agony. On anti-realism, they’ve made no error. But that’s crazy!
4 The Discovery Argument
One of the arguments made for mathematical platonism is the argument from mathematical discovery. The basic claim is as follows: we cannot make discoveries in purely fictional domains. If mathematics were invented rather than discovered, how in the world would we make mathematical discoveries? How would we learn new things about mathematics — things that we didn’t already know?
Well, when it comes to normative ethics, the same broad principle is true. If morality really were something that we made up rather than discovered, then it would be very unlikely that we’d be able to reach reflective equilibrium with our beliefs — wrap them up into some neat little web.
But as I’ve argued at great length, we can reach reflective equilibrium with our moral beliefs — they do converge. We can make significant moral discoveries. The repugnant conclusion is a prime example of a significant moral discovery that we have made.
Thus, there are two facts about moral discovery that favor moral realism.
First, the fact that we can make significant numbers of non-trivial moral discoveries in the first place favors it — for it’s much more strongly predicted on the realist hypothesis than the anti-realist hypothesis.
Second, the fact that there’s a clear pattern to the moral convergence also favors it. Again, this is a hugely controversial thesis — and if you don’t think the arguments I’ve made in my 36-part series are at least mostly right, you won’t find this persuasive. However, if it turns out that every time we carefully reflect on a case it ends up being consistent with some simple pattern of decision-making, that really favors moral realism.
Consider every other domain in which the following features are true.
1 There is divergence prior to careful reflection.
2 There are persuasive arguments that would lead to convergence after adequate ideal reflection.
3 Many people think it’s a realist domain.
All other cases which have those features end up being realist. This thus provides a potent inductive case that the same is true of moral realism.
5 The argument from phenomenal introspection
Credit to Neil Sinhababu for this argument.
If we have an accurate way of gaining knowledge and this method informs us of moral realism, then this gives us a good reason to be a moral realist, in much the same way that, if a magic 8 ball was always right, and it informed us of some fact, that would give us good reason to believe the fact.
Neil Sinhababu argues that we have a reliable way to gain access to a moral truth — this way is phenomenal introspection. Phenomenal introspection involves reflecting on a mental state and forming beliefs about what it’s like. Here are examples of several beliefs formed through phenomenal introspection.
My experience of the lemon is brighter than my experience of the endless void that I saw recently.
My experience of the car is louder than my experience of the crickets.
My experience of having my hand set on fire was painful.
We have solid evolutionary reason to expect phenomenal introspection to be reliable — after all, beings who are able to form reliable beliefs about their mental states are much more likely to survive and reproduce than ones that are not. We generally trust phenomenal introspection and have significant evidence for its reliability.
Thus, if we arrive at a belief through phenomenal introspection, we should trust it. Well, it turns out that through phenomenal introspection, we arrive at the belief that pleasure is good. When we reflect on what it’s like to, for example, eat tasty food, we conclude that it’s good. Thus, we are reliably informed of a moral fact.
Lance Bush has written a response to an article I wrote about this argument; I’ll address his response here.
I summarize Sinhababu’s argument as follows.
Premise 1: Phenomenal introspection is the only reliable way of forming moral beliefs.
Premise 2: Phenomenal introspection informs us of only hedonism
Conclusion: Hedonism is true…and pleasure is the only good.
However, we can ignore premise one, because it serves as a reason other methods are unreliable — not as a reason phenomenal introspection is reliable. Lance says
I have a lot of concerns with (1), given that I don’t know what is meant by a “moral belief”
I take a moral belief to be a belief about what is right and wrong, or what one should or shouldn’t do, or about what is good and bad. Morality is fundamentally about what we have impartial reason to do, independent of our desires. For more on this definition, I’d recommend reading Parfit’s On What Matters.
I’d also note that it’s strange to frame P1 as a claim about a reliable way to form moral beliefs, since “reliable” doesn’t seem connected to whether the beliefs in question are true or not. After all, one can have a system that “reliably” (in some sense) produces false beliefs. This premise might be rephrased as something like “Phenomenal introspection is the only way to reliably form true moral beliefs” or something like that. I’m not sure; perhaps Bentham’s bulldog could update or refine the premises in a future post or in a response to this post.
By reliable, I meant reliably true.
However, my initial reaction is to reject (2) because it seems like Sinhababu overestimates what kinds of information is available via introspection on one’s phenomenology, at least not without bringing in substantial background assumptions that aren't themselves part of the experience or that might have a causal influence on the nature of the experience. It’s possible, for instance, that a commitment to or sympathy towards moral realism can influence one’s experiences in such a way that those experiences seem to confirm or support one’s realist views, when in fact it’s one’s realist views causing the experience. Since people lack adequate introspective access to their unconscious psychological processes, introspection may be an extraordinarily unreliable tool for doing philosophy.
Lance here criticizes some types of introspection — however, none of this is phenomenal introspection. People are good at forming reliable beliefs about their experiences, less good at forming reliable beliefs about, for example, their emotions. Not all introspection is alike.
Philosophers may think that they can appeal to theoretically neutral “seemings” to build philosophical theories, but not appreciate that the causal linkages cut both ways, and that their philosophical inclinations, built up over years of studying academic philosophy, can influence how they interpret their experiences, and do so in a way that isn’t introspectively accessible. If this does occur (and I suspect it not only does, but is ubiquitous), philosophers who appeal to how things seem to support their philosophical views are, effectively, appealing to their commitment to their philosophical positions as evidence in support of their commitment to their philosophical positions. Without a better understanding of the psychological processes at play in philosophical account-building, philosophers strike me as being in an epistemically questionable situation when they so confidently appeal to their philosophical intuitions and seemings.
I think this objection to phenomenal conservatism is wrong. One can reject a seeming. For example, to me, the conclusion I describe here seems wrong; however, I end up accepting it upon reflection, because the balance of seemings supports it.
But we can table this discussion because Sinhababu doesn’t rely on seemings — he relies on phenomenal introspection.
Phenomenology involves access to what your experiences are like, but it is not constituted by any substantive philosophical inferences about those experiences. That is, if I have, say, an experience of something seeming red, it isn’t (and I think it couldn’t) be a feature of that experience that the redness of the red is, e.g., of such a kind so as to be directly (perhaps “non-inferentially”) inconsistent with a particular model of perception or consciousness. For instance, I don’t think substance dualism could be something one has phenomenal access to, but rather it would be an inference, or position one takes, that explains one’s experiences or may be inferred from one’s experiences.
No disagreement so far.
When I have good or enjoyable experiences, my phenomenology involves what I’d call positive affective states. I don’t think anything about these states includes, as a feature of the experience itself, that the experience itself involves stance-independence or stance-independence about the goodness of the experience. That doesn’t seem like the sort of thing that could be a feature of one’s phenomenology. The notion that phenomenal introspection informs us of hedonism thus strikes me almost as a kind of category error. Substantive metaphysical theses don’t seem like the sorts of things one can experience. And thus the notion that hedonism is true in a stance-independent way just isn’t the kind of thing that I think one could experience, since it’s a metaphysical thesis, not e.g., a phenomenal property (though as an aside I don’t even think there are phenomenal properties, but that’s a separate issue).
I agree that generally introspecting on experiences doesn’t inform us of their mind-independent goodness. But if we introspect on experiences that we don’t want but are pleasurable, they still feel good, showing that their goodness doesn’t depend on our desires.
Second, nothing about the phenomenology of my positive affective states is distinctively moral. If I eat my favorite food or listen to music I like, I enjoy these experiences, but they aren’t moral experiences. As such, I see no reason to think that my good and bad experiences reflect any kind of distinctively moral reality. It’s not a feature of my positive experiences that they are morally good. I don’t even know what that means, and I am confident no compelling account from any philosopher will be forthcoming.
But when you reflect on pleasure it feels good in a way that seems to give one a reason to promote it — to produce more of it. This is a distinctly moral notion. Sinhababu has a longer section on this in his paper — his account is somewhat different from mine.
Even if pleasure were “good,” and I do think positive experiences are good (in an antirealist sense), nothing about these experiences strikes me as morally good. I don’t think there is any principled distinction between moral and nonmoral norms. I think the very notion of morality is a culturally constructed pseudocategory, not a legitimate category in which normative and evaluative concepts could subsist independent of the idiosyncratic tendency for certain linguistic communities to refer to them as “moral.” So it’s not clear to me how my positive experiences relate in any meaningful way to the culturally constructed notion of moral good that persists in contemporary analytic philosophy.
Pleasure feels good in the sense that it’s desirable, worth aiming at, worth promoting. If this argument successfully establishes that pleasure is worth promoting, then it has done all that it needs to do. I don’t think morality is anything over and above a description of the things that are well and truly worth promoting.
I don’t think any of my experiences involve any distinctively moral phenomenology, and such experiences are better explained in nonmoral terms. I’d note, however, that the notion that “hedonism is true” doesn’t make clear that hedonism is the true moral theory which isn’t explicitly stated here. I don’t know if Sinhababu (or BB, or anyone else) claims to have distinctively moral phenomenology, but I don’t think that I do, and I’m skeptical that anyone else does.
This question is ambiguous, but I think the answer would be no.
In any case, if this remark: “Therefore, hedonism is true — pleasure is the only good,” … is meant to convey the notion that hedonism is true in a way indicative of moral realism, I still I am very confident that it doesn’t mean anything; that is, I think this is literally unintelligible. I find my experiences to be good, in that I consider them good, but I don’t think this in any way indicates that they are good independent of me considering them as such, nor do I think this even makes any sense.
I’d have a few things to say here.
1 It seems that most people have an intuitive sense of what it means to say something is wrong. This normal usage acquaintance is going to be more helpful than some formulaic definition that appears in a dictionary.
2 This seems rather like denying that there’s knowledge on the grounds that we don’t have a good definition of it. Things are very difficult to define — but that doesn’t mean we can’t be confident in our concepts of them. Nothing is ever satisfactorily defined.
3 I take morality to be about what we have impartial reason to aim at. In other words, what we’d aim at if we were fully rational and impartial.
Bush quotes me saying the following.
“Phenomenal introspection involves reflecting on experiences and forming beliefs about what they’re like (e.g. I conclude that my yellow wall is bright and that itching is uncomfortable).”
He responds.
But the latter isn’t part of phenomenal introspection. Only the former is. Phenomenal introspection involves reflecting on your experiences such that you have the appearance of a bright yellow wall and the sense of an itch; the beliefs you form about these experiences aren’t part of the phenomenal introspection; they’re just standard philosophical reflection, or theory-building, that seeks to account for those experiences. And while we’re all welcome to engage in such theorizing, it’s a mistake to say that those beliefs are part of phenomenal introspection itself, or that you form beliefs about what those experiences are like; what you describe instead seem like inferences about what’s true given those experiences. And such inferences aren’t part of the phenomenology.
The beliefs about what they’re like are beliefs about the experience. So, for example, the belief that hunger is uncomfortable is reliably formed through phenomenal introspection.
There are other difficulties with BB’s framing here:
Premise 2 is true — when we reflect on pleasure we conclude that it’s good and that pain is bad.
This is ambiguous. What does BB mean by ‘good’ and ‘bad’? Since I understand these in antirealist terms, if Premise 2 is taken to imply that they’re true in a realist sense, then I simply deny the premise. I find it odd and disappointing that BB would echo the common tendency for philosophers to engage in such ambiguous claims. BB knows as well as I do that one of the central disputes in metaethics is between realism and antirealism. So why would BB present a premise that only includes, on the surface, normative claims, without making the metaethical presuppositions in the claim explicit?
This was responded to above — when we reflect on pain we conclude that it’s the type of thing that’s worth avoiding, that there should be less of. We conclude this even in cases when we want pain. To give an example, I recall when I was very young wanting to be cold for some reason. I found that it still felt unpleasant, despite my desire to brave the cold.
This particular ambiguity is especially common in metaethics, and its proliferation has a clear and perfidious rhetorical value: moral realists often present normative claims, e.g., “x is good” or “it’s wrong to torture babies for fun,” without making their metaethical presuppositions explicit, e.g., “x is stance-independently good” or “it’s objectively wrong to torture babies for fun.” Yet these normative claims serve as the premises to arguments that presuppose realism, or that are intended as arguments for realism, or are intended to prompt intuitions against antirealism and in favor of realism. All of these uses are illegitimate, because they rely on the inappropriate pragmatic implicature that to reject the premise or the claim isn’t merely to reject its metaethical component (which has been concealed), but the normative claim itself.
Earlier in this article I was more precise and clarified the things that the anti-realist is committed to.
The other problem with this remark is the claim that when “we” reflect on pleasure we conclude that it’s good and that pain is bad. Who’s “we”? Not me, certainly. I don’t reach the same conclusions as BB does via introspection. BB echoes yet another bad habit of contemporary analytic philosophers: making empirical claims about how other people think without doing the requisite empirical work. BB does not have any direct access to what other people’s phenomenology is like, so there’s little justification in making claims about what things are like for other people in the absence of evidence. And there’s little empirical evidence most people claim to have phenomenology that lends itself to moral realism.
I think Lance does — he’s just terminologically confused. When he reflects on his pain, he concludes it’s worth avoiding — that’s why he avoids it! I think if he reflected on being in pain even in cases when he wanted to be in pain, he’d similarly conclude that it was undesirable.
6 Responding to Objections
A Disagreement
One common objection to moral realism is the argument from disagreement. The basic version is as follows.
Premise 1: If some domain has disagreement, then it only establishes subjective truths
Premise 2: The moral domain has disagreement
Therefore, it only establishes subjective truths
Problem: Premise 1 is obviously false. The domains of physics, mathematics, and numerous others garner lots of disagreement, yet they are objective.
There are lots of more robust arguments from disagreement — however, I think the best paper on this subject by Enoch decisively refutes them.
B Access
Some worry about how we have access to the moral facts. Enoch puts these worries to rest decisively.
I think we can rather safely postpone discussion of these worries to the following subsections, without saying much more on epistemic access. This is not just because one way of understanding talk of epistemic access is as an unofficial introduction to one of the other ways of stating the challenge, or because as they stand, worries about epistemic access are too metaphorical to be theoretically helpful (it isn’t clear, after all, what ‘‘access’’ exactly means here). The more important reason why we can safely avoid further discussion of the worry put in terms of epistemic access is the following. In the following subsections, I discuss versions of the epistemological worry put in terms of justification, reliability, and knowledge. It is possible, of course, that my arguments there fail. But if they do not, what remaining epistemological worry could talk of epistemic access introduce? If in the next subsections I manage to convince you that there are no special problems with the justification of normative beliefs, with the reliability of normative beliefs, or with normative knowledge, it seems to me you should be epistemologically satisfied. I do not see how talk of epistemic access should make you worried again
Enoch similarly describes why epistemic challenges for moral realism shouldn’t be thought of in terms of justification, reliability, or knowledge. I’d recommend the full paper for an explanation of this.
C Correlation
Enoch thinks the most puzzling versions of the epistemological objection don’t focus on any of the things above — instead, they focus on a puzzling correlation: the correlation between the correct moral views and the moral beliefs we happen to hold. Enoch says
Suppose that Josh has many beliefs about a distant village in Nepal. And suppose that very often his beliefs about the village are true. Indeed, a very high proportion of his beliefs about this village are true, and he believes many of the truths about this village. In other words, there is a striking correlation between Josh’s beliefs about that village and the truths about that village. Such a striking correlation calls for explanation. And in such a case there is no mystery about how such an explanation would go—we would probably look for a causal route from the Nepalese village to Josh (he was there, saw all there is to see and remembers all there is to remember, he read texts that were written by people who were there, etc.). The reason we are so confident that there is such an explanation is precisely that the striking correlation is so striking—absent some such explanation, the correlation would be just too miraculous to believe. Utilizing such an example, Field (1989, pp. 25–30) suggests the following problem for mathematical Platonism: Mathematicians are remarkably good when it comes to their mathematical beliefs. Almost always, when mathematicians believe a mathematical proposition p, it is indeed true that p, and when they disbelieve p (or at least when they believe not-p) it is indeed false that p. There is, in other words, a striking correlation between mathematicians’ mathematical beliefs (at least up to a certain level of complexity) and the mathematical truths. Such a striking correlation calls for explanation. But it doesn’t seem that mathematical Platonists are in a position to offer any such explanation. 
The mathematical objects they believe in are abstract, and so causally inert, and so they cannot be causally responsible for mathematicians’ beliefs; the mathematical truths Platonists believe in are supposed to be independent of mathematicians and their beliefs, and so mathematicians’ beliefs aren’t causally (or constitutively) responsible for the mathematical truths. Nor does there seem to be some third factor that is causally responsible for both. What we have here, then, is a striking correlation between two factors that Platonists cannot explain in any of the standard ways of explaining such a correlation—by invoking a causal (or constitutive) connection from the first factor to the second, or from the second to the first, or form some third factor to both. But without such an explanation, the striking correlation may just be too implausible to believe, and, Field concludes, so is mathematical Platonism. Notice how elegant this way of stating the challenge is: There is no hidden assumption about the nature of knowledge, or of epistemic justification, or anything of the sort. There is just a striking correlation, the need to explain it, and the apparent unavailability of any explanation to the challenged view in the philosophy of mathematics.
On this, several points are worth making.
1 As Enoch points out, this is an explanatory game, so it makes sense to compare the explanatory adequacy of the theories holistically, and see if the best ones favor realism.
2 As Enoch also points out, many people are in moral error, so the correlation isn’t that striking — it’s not as though there’s a perfect correlation.
3 Our reasoning can weed out lots of views that are inconsistent — so that narrows the pool even more.
I’d also note:
4 The correlation is not that striking — the correct moral view, which seems to be hedonistic act utilitarianism, is often wildly unintuitive.
5 Most of our beliefs tend to be right. Thus, based purely on priors, we’d expect the same broad pattern to be true when it comes to our moral beliefs.
6 The same broad arguments can be made against epistemic realism — a striking correlation exists in that case too — yet this doesn’t debunk our epistemic beliefs.
D Evolutionary Debunking
Street famously argued that our moral beliefs are evolutionarily debunkable — we formed them for evolutionary reasons, independent of their truth, so we shouldn’t believe them.
First, as Sinhababu points out, we’d expect evolution to make us reliable judges of our conscious experience. Belief in the badness of pain resists debunking because it’s formed through a mechanism that would evolve to be reliable. Much like beliefs about vision aren’t debunkable, neither are beliefs about our mental states, given that beings who can form accurate beliefs about their mental states are more likely to survive.
Second, as Bramble (2017) points out, evolution just requires that pain isn’t desired, it doesn’t require the moral belief that the world would be better if you didn’t suffer. Given this, there is no way to debunk normative beliefs about the badness of pain.
Third, there’s a problem of inverted qualia. As Hewitt (2008) notes, it seems eminently possible to imagine a being who sees red as blue and blue as red, without having much of a functional change. However, it seems like undesirability rigidly designates pain, such that you couldn’t have a being with an identical qualitative experience of pain, who seeks out and desires pain. This means that the badness and correlated undesiredness of pain is a necessary feature, not subject to evolutionary change.
One could object that there are many people like sadists who do, in fact, desire pain. However, when sadists are in pain, the experience they gain is one they find pleasurable. This is not a counterexample to the rule, so much as one that shows that experiences can have many features in common with pain, while lacking its intrinsic badness. A decent analogy here would be food--eating the same food at different times will produce different results, even with the same general taste. If one finds a food disgusting, their experience of eating it will be bad. Traditionally painful experiences are similar in this regard--closely related experiences can actually be desirable.
Fourth, evolution can’t debunk the direct acquaintance we have with the badness of pain, any more than it could debunk the belief that we’re conscious. Much like I have direct access to the fact that I’m conscious, I similarly have direct access to the badness of pain. After I stub my toe, my conviction that the pain was bad is greater than my conviction that the external world exists.
Fifth, it’s plausible that beings couldn’t be radically deluded about the quality of their hedonic experiences, in much the same way they can’t be deluded about whether or not they’re conscious. It seems hard to imagine an entity could have an experience of suffering but want more of it.
Sixth, there’s a problem of irreducible complexity. Pain only serves an evolutionary advantage if it’s not desired when experienced. Thus, the experience evolving by itself would do no good. Similarly, a mutation that makes a being not want to be in pain would do no good, unless it already feels pain. Both of those require the other one to be useful, so neither would be likely to emerge by themselves. However, only the intrinsic badness of pain which beings have direct acquaintance with can explain these two emerging together.
Seventh, evolution gave us the ability to do abstract, careful reasoning. This reasoning leads us to form beliefs about moral facts, in much the same way it does for mathematical facts.
E Explanatorily Unnecessary
People often object to moral realism on the grounds that the moral facts are explanatorily unnecessary. The earlier comments apply — positing real moral facts explains the convergence, for example, in our moral views. It also explains our moral seemings — seemings that inform us that, for example, it’s wrong to torture infants for fun and would be so even if nobody thought that it was.
F Objectionably Queer
Ever since the time of Mackie, it’s been objected that moral realism is objectionably queer — that something about it is strange. However, it’s pretty unclear what exactly is supposed to be so strange about it. As Taylor says
Firstly, there is ‘the metaphysical peculiarity of the supposed objective values, in that they would have to be intrinsically action-guiding and motivating’; related to this is ‘the problem how such values could be consequential or supervenient upon natural features’ of the world (p. 49)
However, it’s not clear why exactly this is so queer. As Huemer notes, many things are very different from everything else. Time is very different from other things, as is space, as are laws of physics — but we shouldn’t give up our belief in those things.
On top of this, it’s not clear why normativity is queer. There seem to be other things that are irreducibly normative — epistemic normativity seems on firm ground. One who believes the earth is flat on the basis of the available evidence is objectively making an epistemic error and, in an epistemic sense, they ought to change their views. None of this seems too queer.
Mackie just describes what morality is, before declaring that it’s too queer.
If you look at the attitudes of most everyday people towards the notion that it’s really wrong to torture infants for fun — it doesn’t seem strange at all to them.
Additionally, if one is too concerned about queerness, I think hedonism offers a particularly promising route for avoiding such worries. To quote my book:
There are several ways the hedonic facts resist the charge of being objectionably queer. The first is that our mental states are already very queer. If one assessed the odds that a universe made up of particles and waves, matter and energy, could sustain the smorgasbord of truly bizarre mental states that exist, the fact that some mental states are normative would be among the least surprising results. Start with the fundamental strangeness that there’s any consciousness at all -- somehow generated by neurons -- and then combine that with the bizarreness of the following mental states: color qualia -- particularly when we consider that there are color qualia that no human will ever see but that non-human animals have seen -- psychedelic experiences, the intrinsic motivation that comes with the experience of desire, the strangeness of taste qualia, and the fact that there are literal entire dimensions of experience that we will never access.
Once we become accustomed to these mental states, it’s very easy to no longer appreciate just how strange they are. Yet if we imagine what the mental states that we haven’t experienced must be like -- for example the experience of a bat using nociception or of experiencing four dimensional objects, it becomes clear just how miraculous and bizarre our conscious experiences are. Thus, if something as strange as value were to lurk anywhere in the universe, the obvious place for it to be would be part of experience, alongside its equally strange brethren.
Yet there’s another account of why normative qualia wouldn’t be objectionably strange -- namely, that the supposedly strange feature of qualia, their normativity, is something that we commonly accept. Every time a person makes a decision on account of something they know, they are treating their mental states as normative -- they take particular facts or experiences of which they’re aware to count either for or against an act.
Take one simple example -- when one puts their hand on a hot stove, they pull away rapidly. Something about the feeling of the stove seems to urge that one remove their hand from the stove -- immediately!
Indeed, anti-realists commonly accept that desires have reason-giving force. However, if desires -- a type of mental state -- can have reason-giving force, there seems no reason in principle that valenced qualia can’t have reason-giving force.
Street (2008) provides a constructivist account of reasons -- arguing we evolved to have a feeling of ‘to be doneness’. When one’s hand is on a hot stove, however, not only do they have a feeling of ‘to be avoidedness’ but that feeling seems to be fitting. Were they fully rational, that feeling wouldn’t go away. That’s because it’s a substantive property of some mental states -- including the one experienced when one’s hand is on a hot stove -- that they are simply worth avoiding.
Conclusion
Given the immense debate about moral realism, in this article, I have not been able to cover all of the relevant articles and arguments. However, I think I’ve summarized many of the main reasons to be a moral realist — some of which have, to the best of my knowledge, yet to be explored in the literature.
These arguments have been unapologetically pro-hedonist. This is because I think the anti-realist challenges are far weaker against hedonism than against other moral realist views.
Actually there are two — the other one is a skeptic about consciousness
Credit to Derek Parfit in “On What Matters” for this brilliant thought experiment