Erik Wielenberg and How Richard Carrier Keeps Missing the Point of Grounding Morality
Carrier is wrong here.
Richard Carrier has recently produced an article titled “Erik Wielenberg and How Atheists Keep Missing the Point of Grounding Morality.” His article has many errors that I thought were worth objecting to.
Sidenote: Carrier is one of my favorite bloggers and I think he is insanely well informed about history and pretty good on philosophy. I have the utmost respect for Carrier. But even smart people get things wrong sometimes. Additionally, Carrier is often fond of calling out fallacies by their full names (e.g. "this is a red herring logical fallacy"), so I shall do the same here for comedic effect.
Carrier starts:
In 2009 philosopher Erik Wielenberg published “In Defense of Non-Natural, Non-Theistic Moral Realism” in the journal Faith and Philosophy. The abstract claims:
Many believe that objective morality requires a theistic foundation. I maintain that there are sui generis objective ethical facts that do not reduce to natural or supernatural facts. On my view, objective morality does not require an external foundation of any kind. After explaining my view, I defend it against a variety of objections posed by William Wainwright, William Lane Craig, and J. P. Moreland.
But after reading his article, I can only conclude that Wielenberg does not know what the word “foundation” means as used by any of these authors when they argue for a “theistic foundation” of objective morality. Because nowhere in his article does he ever present what they mean by a foundation at all. Not even a bad one. Just none at all. It literally never comes up. And this is a problem I am finding with atheists in general; especially everyday atheists, who can perhaps be excused by not being all that well educated in philosophy, but I find even professional philosophers doing this, and they should know better by now. Thus requiring me to explain this point—yet again (I’ve gone over this many times before, e.g. from Epilogue to the Sam Harris Moral Facts Contest to Shermer vs. Pigliucci on Moral Science); but this time I will use Wielenberg’s paper as a foil for illustrating where atheists are going wrong and talking right past theists, and with catastrophic effects vis-a-vis moving anyone away from superstitions about gods and towards a reality-based worldview (an example of that point, Justin Brierley’s detailed account, I will write on in a coming month).
The Irrelevance of Wielenberg’s Actual Thesis
Ignoring the abstract, if an alien from Planet X read Wielenberg’s paper and had to report what it was about, I think they’d have to say something like: “Certain superstitious Earthlings believe meeting moral obligations cannot be rationally justified without the existence of some sort of conveniently-propertied ghost, and this Earthling shows that moral obligations have intelligible meanings without a conveniently-propertied ghost, which no one disputes and in no way replies to the superstitious Earthlings. It is therefore not possible to ascertain the point of this report.”
Wielenberg provided an account of how atheists could explain what morality is and why it exists. This does not require believing that there’s some extra thing that grounds morality—indeed, Wielenberg’s supervenience thesis specifically requires that it’s a necessary relationship. If X necessarily supervenes on Y then worlds with Y will have to have X. Brightness supervenes on color—something can’t be black and very bright.
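(To make the modal claim explicit: the following formalization is mine, not Wielenberg's or Carrier's, and it is only a rough rendering of the necessitation thesis being relied on here.)

\[
\Box\,\forall x\,\big(Y(x)\rightarrow X(x)\big)
\]

Read: necessarily, whatever has the base property Y also has the supervening property X. So once a world contains the Y-facts, the X-facts come along automatically; nothing extra needs to be added to the world to supply them.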
A good analogy here would be epistemic facts. Facts about what we should believe in an epistemic sense supervene on natural facts: it is necessarily true that one who has precisely our evidence would be rationally required to believe the earth is round. This doesn’t require believing epistemic facts are ghosts that happen to pop up whenever there’s evidence—they’re just necessary properties of natural facts. Moral facts can be the same.
For example, Wielenberg quotes Craig and Moreland as saying “What does it mean to say, for example, that the moral value justice just exists? … It is clear what is meant when it is said that a person is just; but it is bewildering when it is said that in the absence of any people, justice itself exists” (p. 33). To this Wielenberg responds, “With respect to justice, my view is that there are various obtaining states of affairs concerning justice, and that when individual people have the property of being just, it is (in part) in virtue of the obtaining of some of these states of affairs” (p. 34). But that’s not what Craig and Moreland are asking.
I am certain they’d both agree that no God is required for me to say, and be stating an objective fact even, that my girlfriend’s bedroom’s decoration is “Star-Wars-y,” in that it resembles the canonical aesthetic of the Star Wars franchise. Because that isn’t saying anything about how people should or ought to “Star-Wars-ify” their bedrooms. It’s just a neutral statement of fact that the decor meets certain defining criteria. Everyone agrees justice exists in that sense, the only sense Wielenberg ever articulates. What Moreland and Craig are asking is how it can be the case that justice is moral, as in is “good,” and “good” not trivially, but in a way that motivates our caring about it, and indeed not just caring about it, but wanting our actions to conform to it—and indeed, wanting that more than we want anything else, otherwise we’d just laugh “justice” off as a curious aesthetic and continue preferring other styles of being. Wielenberg never answers this question. It does not even appear anywhere in his article as if anyone has ever asked this question, least of all the superstitious Earthlings he thinks he is answering but isn’t. Yet that is most definitely exactly what they are asking. So his paper is a non-response to their point.
This is just a total red herring (or "red herring logical fallacy," as Carrier is fond of saying). Wielenberg is giving an account of what morality is; he is not explaining why we should always be moral. Pointing out that Wielenberg isn't answering a different question that happens to be on Carrier's mind is no objection to Wielenberg's account.
My answer to Craig's question would be like Wielenberg's: of course it makes sense to talk about justice even if no one were around, much as it makes sense to say that black is not bright even if no black things existed.
Carrier asserts this isn't an answer to the question, which is clearly false. Craig asks whether justice would exist; Wielenberg answers yes, much as the statement "unicorns have horns" is true despite there being no unicorns. I'm not sure why Carrier thinks this isn't an answer.
Carrier says that everyone agrees justice exists in Wielenberg's sense. This is clearly false. While everyone agrees that if you gave a purely naturalistic description of justice, it would exist, the claim "helping kind people is just" seems to be a substantive claim, not one true merely by definition. Many would say there is no objective justice, any more than there is an objective present on the B-theory of time; on such accounts, justice is just a ham-fisted description of natural phenomena that we have no reason to care about. Wielenberg is arguing against these accounts.
Almost the only useful thing Wielenberg does say is that, if there is no God, then “it is in some sense an accident that we have the moral properties that we do,” but “that they are accidental in origin does not make these moral properties unreal or unimportant.” That is entirely true, and theists do need to hear it. It would not matter why you ought to behave a certain way if it is nevertheless true that you ought to behave that way—because it’s still the case that you ought to behave that way; so “accidental” moral facts would not be any less obligating. Yet even here Wielenberg fails to take the actual step the theists are looking for: going from the vacuous and unmotivating observation that moral standards “exist” (they are “real”) and somehow “important” (whatever that means), to actually giving an actual reason anyone should actually adhere to those standards in their behavioral choices. That is what the theists mean by the grounding of morality, by a “ground for morality,” by a “foundation” of moral obligation, and all the various other turns of phrase they like.
What these theists mean is a reason to be moral. Not the mere existence of morality as a concept. Moreover, they do not mean just “any” reason to be moral, like someone presenting a list of reasons to Star-Wars-ify their bedroom; reasons that might be entirely rational (they are not random gibberish, but are actual reasons people do or could actually have to do it) and even mildly enticing (like, enough that you even do think about it for a minute or two), but ultimately insufficient to motivate us to actually do it (because, say, we like a 60s Mod look more, or can’t be bothered because a redec would be more work than we think it’s worth). Theists mean by a “moral ground” a fully motivating reason to actually be moral—and one that would be true for everyone, and not just random people who perchance like the aesthetic they are marketing.
This is obviously false. One could think there's a fact of the matter about what we should do, but not think that we're rationally obligated to do it; Sidgwick held this view. Obviously there are naturalistic properties that can be called moral, but if there's no morality beyond trivial naturalistic descriptions, then there's no fact of the matter about which moral system is correct, which would seem to rob morality of its force. It seems plausible, for example, that if one is a deontologist, they are wrong; and that presupposes such a fact of the matter.
Theists are not just asking for a motivating reason. Morality could be objective even if there is no motivating reason; indeed, this used to be my view. If morality is about what would be done by one who was totally rational and impartial, then someone who thinks there's a fact of the matter about morality needn't think we're all rationally obligated to be moral, because they might not think we're rationally obligated to do what we'd do if we were impartial.
Wielenberg never provides any such ground for morality. He thus never responds to the theists he claims to be answering. When it comes time in his article to even get anywhere near doing that, all he says is, “Necessarily, any being that can reason, suffer, experience happiness, tell the difference between right and wrong, choose between right and wrong, and set goals for itself” thereby simply “has certain rights, including the rights to life, liberty, and the pursuit of happiness, and certain obligations,” like for example “the duty to refrain from rape (in typical circumstances),” an example he mentions only because they brought it up. That’s a non answer. Nowhere does he connect “having” rights in this sense, with any reason anyone should care about that. For example, I too could elaborately argue that “Necessarily, any being that can reason and set goals for itself” and so on thereby simply “has the property of enjoying Star Wars.” That would be merely an unjustified assertion (it is, after all, false; we can empirically adduce ample evidence of people who don’t enjoy Star Wars). Which is also all Wielenberg gives us (we can empirically adduce ample evidence of people who don’t care about the mere “existence” of human rights). But it’s also not relevant even were it a justified true belief. That everyone enjoyed Star Wars wouldn’t be sufficient to motivate everyone to redec their bedrooms accordingly. I enjoy Star Wars. And I like my bedroom the way it is.
Here Carrier commits an equivocation fallacy, confusing rights and duties with motivating reasons. It might be the case that one is motivated to take some action, like torturing infants, but they still shouldn't do it. When I say they shouldn't do it, I mean that if they were totally rational and impartial they wouldn't do it. Why? Well, that's just true in virtue of natural facts. There's something that it's like to be in pain which would make a rational, impartial observer not want to be in pain.
One who thinks that all reasons are motivating has to accept that if a person desires to do X, they would be rational to do X. This requires biting lots of bullets, including the rationality of eating a car if one desired to do so, setting oneself on fire, refusing to go into the doctor's office at age four because one doesn't like shots despite the choice causing immense long-term harm, cutting oneself, not eating enough food because of an eating disorder, suffering on a future Tuesday for no good reason, and enduring immense agony despite it giving one no joy. This is implausible. Indeed, on Carrier's account, if one is not motivated to care about their desires as a whole, rather than just their current desires, they have no reason to take their future self into account. Thus procrastination would be rational, as would enduring infinite agony on a Tuesday rather than a pinprick on a Monday, merely because one has an arbitrary preference for avoiding suffering on Mondays, despite their qualitative experience not changing based on the day of the week.
If someone asked why 1+1 equals 2, the answer would be that it just does. Similarly if one asks why there's something rather than nothing. Morality is the same: it bottoms out at things that just are, with no deeper story. This is true of all lines of reasoning, as per Agrippa's trilemma.
So it’s not simply enough to somehow show that “rights” are just a thing that follows from being people. “Violence” is also just a thing that follows from being people. So is selfishness, dishonesty, lust, gender, singing badly in the shower, garden tending, a frequent affectation for cheese. That tells us nothing about how we should behave in respect to these things. Is violence just like an affectation for cheese? Is selfishness good because it’s inherent to being a singular mind in need of self-preservation? Do gender norms follow necessarily from anything, and if so, why care? Tattoos and eyeglasses defy what nature bestowed on us; that does not make them immoral. We defy our evolved and biological nature all the time. And indeed, ample rational reasons can be given that we even ought to. But you won’t find any in Wielenberg’s paper. How do we get to the ought in morality? That’s what we’re supposed to be on about here.
To be clear, I am not here saying Wielenberg thinks “biology dictates morality.” I am only using biology as an example of “a” way rights can be properties of people in Wielenberg’s sense. But it does not matter what that way is, whether structural (all social systems of conscious beings, whatever their biology, will possess the configurable property of rights), or magical (a mystical realm of Platonic Ideals just mindlessly imbues people with the configurable property of rights), or anything else whatever (up to and including “God did it,” which you might start to realize now is a bigger problem for theists than even Wielenberg realizes). Regardless of how rights are properties of people, we need more than just some way “human rights” are an inevitable configurable property of people. We need grounds to give a shit that it is. Otherwise, we have no grounds to give a shit that it is. And that’s what perplexes Moreland and Craig. That’s what they are asking the likes of Wielenberg to produce.
Wielenberg gives a Platonist account: he thinks there are abstract objects, like mathematical facts and moral facts, that account for morality and math. Carrier calling this "magical" is just a strawman fallacy. Platonists don't think there's some magical sky realm; they just think there's a non-physical part of reality that relates to and grounds the mathematical and moral facts.
The question of why we should care about morality is not one that Wielenberg tackles, to the best of my knowledge. My account would be like Parfit's in Reasons and Persons: we would be just as rationally foolish to discount other people as we would be to discount our future selves. Parfit lays out his case in excruciating detail; I don't have time to summarize much of it here.
It’s not clear that there’s a deeper reason that one shouldn’t set themself on fire other than that it causes suffering and suffering is bad.
And that’s why that has been what I produce: an actual grounding of morality in natural facts (under peer review in The End of Christianity; extensively in response to Moreland in Sense and Goodness without God; and in numerous articles on my blog).
I’ll get to Carrier’s account later.
But that leaves those of us who do think morality has a ground, a foundation, in the actually relevant sense: an actual justification for being moral. I’ve long struggled to understand what Wielenberg thinks that is. I haven’t found it in any of his articles or books; and this, by its title, I thought surely should. Which is strange, as he is confident he has one, and that it requires no deity (as I’m sure no genuine moral facts do). So I’d really like to know what it is, and thereby whether it corroborates or challenges what I have so far found it to be, a fundamental requirement of scientific progress on any problem. To get at what I mean by this requires going through some stages of thinking in Wielenberg’s article which may at first seem a digression, but trust me, they do connect back the central point.
This is not what foundation means; Carrier is committing a strawman fallacy. If someone says they have a foundation for consciousness, that doesn't mean they have a reason to care about consciousness. If someone has a foundation for the laws of physics in superstring theory, that wouldn't mean they have a reason to care about it; it would mean they have an account of how the laws exist. Wielenberg's account is that morality necessarily supervenes on natural facts. Once again, there's something it's like to be set on fire which would make a rational person not want it (assuming they have the normal human reaction to being set on fire; there are of course exceptions).
“Consider,” Wielenberg proposes, “the state of affairs in which it is morally wrong to torture the innocent just for fun and the state of affairs in which pain is intrinsically bad (that is, bad in its own nature, or in and of itself).” He maintains that “these states of affairs obtain not just in the actual world but in all metaphysically possible worlds” (p. 26). This is actually false as stated. It is false in two different but connected respects: first, semantically; second, physically.
Semantically, I have visited many a dungeon in which it was perfectly moral “to torture the innocent just for fun.” Now, what I mean by this is, in some respect, a triviality; I think Wielenberg could fix this problem with suitable rewording, such as he gets to later when he starts incorporating “consent” as a key component of moral propositions. In those dungeons (and many a private bedroom), it is only moral “to torture the innocent just for fun” if they informedly and competently consent to it. So one might say Wielenberg is hanging a lot on key words like “just” for fun or “innocent.” But these are distinctions that should not be left unstated. They matter. Pain is simply not intrinsically bad. It is only contextually bad. And that begs explanation. Why does context matter? Which means, not merely why might we care about context, but why should we care about context? Think about it. Put a pin in that.
I agree with Carrier here that there are some possible worlds in which it's okay to torture the innocent just for fun. To be more precise, Wielenberg should have said "it's bad to torture innocent people just for fun in ways that cause more suffering than happiness." However, this is just a technicality; Wielenberg's point was clear. Thus, this is a red herring fallacy.
There is only one moral law which always applies, namely, that if one is choosing between some number of actions, they should choose the one that would produce the most joy for sentient creatures.
Pain may not be intrinsically bad, but suffering is, where suffering just is the type of mental state that is undesirable. Some people may enjoy pain, but suffering is by definition the type of thing that should be avoided.
One might ask whether it is moral for a sociopath who does not at all care about others “to torture the innocent just for fun” so long as they are always appropriately consenting adults. Yes, that sounds like some sort of moral Gettier Problem. But think about it. Do we mean to classify mere mental stances as moral or immoral? Or is that sociopath still “behaving morally”? The fact that you are asking that question would mean the question itself has quite a lot to do with what you care about. What is more important, that a sociopath think correctly, or that they always behave in ways you will not find alarming and a social problem to deal with? It’s difficult to intuitively answer that question because it is nigh impossible to decouple “thinking correctly” from “always behaving in correct ways.” Because the very reason you might give to be concerned about “thinking incorrectly” is simply that an incorrect mindset risks causing incorrect behavior; and we can’t really conceive of an incorrect mindset perfectly reliably producing nothing but correct behavior. That would require such an extraordinary set of coincidences as to not even contemplate as a possibility worth considering. Bad minds simply are dangerous because they cause bad behavior. That’s really the only rational reason to care about them. But that would leave bad behavior as the actual thing we have any ground to care about. And even when they are logically inseparable (e.g. you will adjudge pretending to love you as bad, therefore the goods of love can only exist for you with a good mindset in the one who loves you; they are effectively synonymous), we’re still talking about which natural facts we care about.
Here Carrier just shows that moral questions exist and there are disagreements, which proves nothing normatively significant. One can care about things and be irrational to do so. A fundamentalist Christian cares about decreasing the amount of homosexual activity in the world, yet they are irrational to do so.
Which gets us to the physical sense in which Wielenberg’s statement is false. Imagine a world (and indeed, someday someone may even be able to produce and live in it, whether that’s a good idea or not) where “torturing the innocent just for fun” cures all diseases and disorders (mental and physical), up to and including restoring youth and fitness to the elderly, and where nothing else effects any such cure, and where anyone who isn’t ever tortured, rapidly ages and accumulates diseases and disorders endlessly until they become a gibbering, incompetent lunatic—who can be at once fully restored if someone tortures them just for fun. It’s hard to argue that in that universe it is “morally wrong to torture the innocent just for fun.” In that universe, to the contrary, it is arguably morally right to do so. All because we simply changed the physical facts. Which seems to indicate that moral facts are grounded in natural facts.
Wielenberg used the word "just" before "for fun." If torture cures all disease, then it wouldn't be just for fun; if it served other purposes, it wouldn't be only for fun, by definition.
Okay. How might we push back on that? You could say that, well, the competent should still have to consent. But that won’t apply to those who have become so ailed they lack competence to consent. At that point, is it really more moral to let them die in gibbering madness than to torture them for fun and thereby cure them? We do, after all, deem it moral to perform painful and invasive procedures on children and the insane, when there is sufficient need to, such as to preserve their own life or limb. And in this bizarre alternative world, that’s basically what “torturing the innocent just for fun” simply does. So it seems evident that changing the natural facts, changes the moral facts. Or you might try to argue the world proposed is impossible, but I doubt it (once we have virtual worlds to play in, the “impossible” will have a lot less meaning), and in any case, all you are then arguing is still that the moral fact you insist upon derives from some physical fact (like, the intentions of the “torturer,” or the physical impossibility of “selfish intentions” ever being consistently aligned with “unselfish outcomes”). You thus have just grounded moral facts in natural facts again. You can’t escape this. No matter how you try to maneuver, all you end up doing is defending the same conclusion: moral facts are grounded in physical, hence natural facts.
Now, this hasn’t gotten us to a conclusion yet. Because we haven’t gotten to why we should care about these outcomes. And morality must ultimately be grounded in some such thing; or else it has no ground at all, as in, we will have no grounds to obey it. All I am showing so far is that it looks like moral facts are grounded in physical facts. So if moral motivation is also grounded (the thing Moreland and Craig are worried about), then we have good reason to suspect that that motivation will be found somewhere in the natural facts of ourselves and our world as well. Because everything else appears to be (and indeed, I mean everything else). So we should look there first, before trying to find some other presumed source (as Wielenberg does, in some kind of vague conceptology; and Moreland and Craig do, in God). Okay. Now put a pin in that.
I disagree with Wielenberg if he stands by the sentiment: I think the utility monster, whose enjoyment outweighs its victims' suffering, should torture people just for fun. But as I pointed out before, there is a contradiction between torture being just for fun and it serving other purposes that are not just fun. The reasons stem from natural facts: much as we would be foolish to conclude the earth is flat based on the available evidence, we'd be foolish to set ourselves on fire if we have the normal human responses to being set on fire.
The Problem in a Nutshell
Wielenberg says “my view does violate the principles that (i) all values are properties of persons and (ii) all values have external foundations” but “I suggest that the lesson to be drawn from this is that (i) and (ii) are false; certainly Craig and Moreland provide no arguments for such principles.” But this is only true if we categorize, for example, “justice” and “injustice” (or other “system describers”; for example, “democracy” vs. “monarchy”) as mere descriptors for possible social-causal systems. As such, “justice” (like “injustice,” “democracy,” “monarchy”) exists as a universal potential: anywhere a system is organized a certain way, it will be correctly described by that label. Such a fact requires neither (i) nor (ii) because it is entirely conditional. “If it is possible for a system to be organized at location A so as to manifest the properties defined by justice, then justice as a thing always and necessarily potentially exists at A.” This is true even if no A exists. But that does not address the actual question, which is whether justice is good, which would normally mean “preferable.” So we aren’t actually defending moral facts here. Just amoral possibilities. There is no reason to prefer a system organized as “justice” over a system organized as “injustice.” And thus no reason to call the one moral and the other not.
I think the lesson to be drawn from this is that (i) and (ii) are both true, but it is by finding how they can both be true, which still aligns with the actual empirical facts of our actual existence, that we will discover the actual grounding of moral facts. And it’s the theists who are screwed here. Because they actually never do what they are asking Wielenberg to do either. If I told you that all moral facts are grounded in the tomato on my desk, such that they can only be true—in the only relevant sense, that you really should obey them over all other alternative directives—as long as that tomato exists, have I actually grounded moral facts in anything? Switch out “your god” for “my tomato” and you have gained nothing here. God actually doesn’t ground anything even if he exists. You still have to care what God thinks (or what he is, or whatever thing theists want to ground moral facts in). If you don’t, then how does what he thinks (or is, or whatever) ground anything? Crickets.
Justice is good because if we were fully rational and impartial we’d prefer for things to be just.
Values aren’t properties of persons, non persons can suffer.
I don’t think external foundation is well defined. Values are not merely descriptions of physical states of affairs, but they do supervene on them. If one thinks that morality is not objective and that nothing is desirable they can’t account lots of intuitions
It’s possible to be mistaken about morality without being mistaken about non moral facts.
It’s possible to have irrational desires.
It’s objectively mind indepedently wrong to torture infants for fun in ways that cause more suffering than joy.
We have made moral progress over time objectively, not just relative to our current view.
Some things like agony truly matter.
Moral disagreement is possible even when there’s agreement about natural facts.
Carrier’s tomato point is ill conceived. Facts about what we should do are like epistemic facts about what we should (rationally) believe. Much like one who sets themself on fire despite not enjoying it would be making an er, one who believes the earth is flat is irrational. Saying that epistemic facts are supervenient on natural facts would be categorically different from saying they’re grounded in a tomato. Moral facts are the same in this regard.
“What may be true,” Wielenberg says, “is that nihilism is false only if there are basic ethical facts” (p. 39). But that’s probably false. Since a god actually can’t ground moral facts (without appeal to the natural facts of what physically is the case and what things people actually already care about), neither can any “basic moral facts” of Wielenberg’s construction. There is simply no reason to care about them—without appeal to the natural facts of (a) what physically is the case and (b) what things people actually already care about. So always, every single time, that’s where things always land. That’s the only foundation there can ever be or will ever be for being a moral person—and hence, in turn, for ascertaining what a moral person is.
Consider the following hypothetical person. Every morning they pick out of a hat either left or right. If they pick out left, then for that entire day they are indifferent to suffering that occurs on the left side of their body. If they pick out right, they’re indifferent to suffering that occurs on the right side of their body. The direction pulled out of the hat doesn’t affect their experiences at all—they still are in just as much pain regardless of which side it was.
Now, they’re given a choice to endure infinite suffering on the left side of their body or a pinprick on the right side of their body. When they endure their suffering they’ll be hypnotized to believe that they’d pulled out from the hat the opposite direction to the one on the side of their body on which they’re suffering, so their qualitative experience will not be affected by which direction they pulled out of the hat. However, in reality they pulled out left in the morning, and they attach metaphysical significance to this, even if it doesn’t affect their experiences.
If they choose to endure infinite suffering on the left side of their body, it seems they’d be irrational. Carrier would have to deny this. I take this to be a decisive counterexample.
The theist (and maybe Wielenberg too) is worried that if morality simply derives from what we want, then it is capricious and arbitrary, just a matter of shifting opinion. But that’s a false fear. It’s also moot. Because there has never been any argument for being moral that didn’t appeal to what people wanted. So you had better get with the program and admit we’re already where I am, and have been there for thousands of years. You have to want to be moral (or else want something that entails you will want to be moral once you realize this) for it ever to be true that you ought to be moral. Full stop. They also worry, of course, that the correlation between benevolence and good outcomes and malevolence and bad outcomes, though reliable enough to make rationally clear which is the smarter bet, is nowhere near perfectly reliable (the odds are “good,” not one hundred percent), and that’s annoying. But merely inventing a God who will fix that won’t fix it. Empirically, we already know only one entity exists who actually ever does anything to fix that: us. So you might want to get on that.
Obviously people will only be motivated by morality if they want to be moral. But people will also only be convinced of beliefs if they think they’re true. This doesn’t mean that the only rational beliefs are the ones that actually convince people.
Theists don’t like this, because it means they will pretty quickly have to start admitting they are wrong about what is and isn’t moral. They can’t have that, so they delusionally go on about our having to obey the God they invented in their own image or else we’ll have no reason to obey any moral code at all. They are lying. And we should stop taking advice from liars. We don’t need God to justify being moral. Because the way the world really works (and doesn’t), and the things we’ll want most out of life once we know what’s really available (and isn’t), fully suffice to justify and motivate a benevolent disposition in all rational persons. Moral facts then become fully discoverable natural facts—not random ideas we stumble across in our heads (as Wielenberg’s system seems to entail), and definitely not whatever our ignorant, delusional peers or ancestors have fantasized (as all modern theism entails).
But Carrier can’t give an account of why we should care about other people’s desires or even why we should change our desires. It seems obvious that as I got older and realized there were better things in life than chocolate icecream, I was actually learning something. On Carrier’s account, however, my desires were just randomly shifting.
In another article Carrier writes:
What “Is” Morality?
In all cultures, today and throughout history, “morals” have always meant “what we ought to do above all else.” In other words imperative statements that supersede all other imperatives. To date, despite much assertion to the contrary, we have only discovered one kind of imperative statement capable of having relevant truth conditions, and hence the only kind of “ought” statement that can be reduced to a relevant “is” statement: the hypothetical imperative. “If you want X, you ought to Y.” These statements are routinely empirically confirmed by science, e.g. we can demonstrate as empirically true what you ought to do to save a patient, build a bridge, grow an edible crop, etc.
This is false. Science can confirm that the best way of getting X is doing Y, but it can’t confirm that if you want X you ought to do Y. After all, when we say ought, we’re talking about what you’d be rationally required to do. Science can’t prove that you’d be foolish not to do the things that best achieve your goals.
Carrier is also wrong that ought statements other than hypothetical imperatives can't be reduced to descriptive statements. Saying "X is wrong" is equivalent to saying "If you were totally rational and impartial you wouldn't do X." There, I did it!
The “is” form of these statements is something to the effect of “when you want X, know Y best achieves it, and seek the best means to achieve your goals, you will do Y.” That is basically what the information is that we are claiming to convey to you when we tell you you ought to do something. Even if our only implied motive is “we’ll beat you if you don’t comply,” we are still just stating a fact, a simple “is”: that if you don’t do Y we’ll beat you; and if you reason soundly about it, you will not want to get beaten.
But usually moral propositions are not meant to be appeals to oppressive force anymore. Because we know that doesn’t work; it always leads to everyone’s misery, as I just noted at the start of this article. Though Christians often do end up defaulting to that mode (“Do X or burn in hell; and if you reason soundly about it, you will not want to burn in hell”), the smarter ones do quickly become ashamed of that, realizing how bankrupt and repugnant it is. So they try to deny that’s what they mean, attempting to come up with something else. But no matter what they come up with, it’s always the same basic thing: “Doing Y gets you X; and if you reason soundly about it, you will realize that you really do want X.”
This isn’t very different from my formulation of “if you were rational and impartial you’d do x.” However, it’s also false. The statement
“If you really want me to be happy you should leave me be.”
is different from
“The best way of making me happy is leaving me be.”
Only one has imperative force.
Even Kant’s attempt to dodge this consequence by inventing what he called “categorical imperatives,” imperatives that are somehow always true “categorically,” regardless of human desires or outcomes, failed. Because he could produce no reason to believe any of his categorical imperatives were true—as in, what anyone actually ought to do (rather than what they mistakenly think they should do or what he merely wanted them to do). Except a hypothetical imperative he snuck in, about what will make people feel better about themselves, about what sort of person they become by so behaving.
I agree with Carrier that Kant failed. However, I think my account succeeds for the reasons given above. Additionally, there being moral facts is a good explanation for why reflective equilibrium converges to utilitarianism over time, as I've argued elsewhere, and why many things seem good or bad. If we think that X seeming true gives us prima facie justification for believing X (which is required to escape skepticism), then the many bizarre results of anti-realism count against it.
Carrier goes on to talk about the science of our moral beliefs, which doesn't show what we should do. However, it is another piece of evidence for utilitarian moral realism. As the dual-process studies have shown, more careful reasoning makes people more consequentialist. This is much more expected if there are moral facts which we discover than if there are not; just as more careful reasoning leads to better scientific knowledge, it would be a strange coincidence if reasoning more carefully led to consequentialism and yet there were no desire-independent morality.
Carrier’s account makes moral facts leave their force—they’re just trivial natural facts that we can learn things about. One final case against this would be the following.
Suppose you accept my thesis that careful reflection across dozens of cases leads to utilitarianism. However, initially people are not utilitarian. Thus morality has several features:
Many people think it’s objective.
There is not convergence prior to reflection.
There would be convergence after sufficiently diligent reflection.
Are there any examples of other things with those features that are not objective?
I know Carrier says that morality is objective, but his account deflates the force of morality. It would be like saying there are objectively justified beliefs because there are beliefs that people think are true. His account cannot explain why there would be this type of convergence if morality is just about our desires.
While Carrier may be right about there having been no real Jesus, he is wrong if he thinks there is no real (robustly objective) morality.