Reasons and Moral Anti-Realism
I don't think there are any reasons to do anything if anti-realism is true!
A brief note: I recently debated Liron Shapira about AI doom. I thought it was a very good discussion. Video of the debate is linked below.
Okay, back to your regularly scheduled program.
Here is something that seems obvious to me: sometimes, I have a reason to perform an act. I have a reason not to stab myself in the eye for no reason. I have a reason to eat when I am hungry. I have a reason to eat healthy foods. Can anti-realism accommodate this datum?
I believe that the answer is no. As Parfit suggested, if moral anti-realism is true, all our reasons to act are built on sand. No action is more worth taking than any other. There might be actions that we are, in fact, psychologically disposed to take. But there are none that we have genuine reason to take.
The anti-realist presumption is generally that one's reasons to behave in some way are given by one's desires. This is supposed to be the default view. Yet I do not see why the mere fact that one wishes to perform some act gives one a reason to do it. Why do my reasons come from my desires and not from, say, my neighbor's desires? What makes it the case that the wise and sensible action to perform is whichever one accords with my aims?
This becomes clearer when one considers more vividly cases where a person has a desire to perform an act but no reason to perform it otherwise. Suppose that a person has a strong desire to throw their mug across the room or smash their hand against the table. Do they really have any reason to do so? Or suppose a person has a strong desire to consume a drug, even though doing so would give them no pleasure. Do they have any reason to consume it? I believe the answer is no.
Now, my sense is that anti-realists often think it is an analytic truth that your reasons are given by your desires. It is, they claim, true by definition. But this is hard to believe. Suppose my friend does not want to get life-saving surgery that would benefit him in the long run. Or suppose my friend has an unfortunate predilection for recreational homicide. I say to him, “Come on, you have reason to stop murdering,” or “You have a reason to get the surgery.”
I think what I am saying is true. But even if you deny that it's true, it seems, at the very least, that my position is substantive. I am not simply speaking nonsense or misusing language. Yet if it were an analytic truth that you have reason to do what you most want to do, then my sentence would be equivalent to saying “you want to stop murdering,” which would be trivially false. So long as the realist position isn't a misuse of language, it cannot be an analytic truth that reasons are given by desires. And so long as you can coherently ask whether one has genuine reason to do what they want, it can't be an analytic truth that one does.
Anti-realists generally claim that we have a reason to pursue our ends, but no reason to have the ends in the first place. Our reasons, it is claimed, are just given by our ends. But this seems to make real reasons illusory! How can you have a reason to take an action in furtherance of an end if you have no reason to have that end?
I only have a reason to buy a plane ticket to Paris if I have a reason to go to Paris. And yet if going to Paris is simply something I choose, how does that give me a reason? The fact that I decided to aim at something doesn’t seem to make aiming at it the wise or prudent thing to do. And what explains why we have any reason to pursue our aims? As we’ve seen, it isn’t an analytic truth. So why is it true?
The division between what you have a reason to do and what you want to do becomes clearer in cases where you want to do things that are unreasonable. Suppose you want to eat a car, for example. Or set yourself on fire, not because you’d enjoy being set on fire, but just because of a brute desire. Or perhaps you want to stay up late, even though you know it will make tomorrow much worse. It seems clear that you have a reason to behave otherwise—that you will be behaving irrationally if you behave that way.
Or, to take a more peculiar example, imagine that you are indifferent to pain in your colon. Introspectively, you cannot tell whether a given pain comes from your colon or your pancreas; the pains feel just the same. But at a higher level, you simply don't care at all about pain from your colon. Currently, you are writhing around and screaming in agony. You instruct the doctor: check whether the pain originates in my colon or my pancreas. If it is from my pancreas, then of course treat it. But if it's from my colon, leave it as it is.
This just seems so clearly irrational! Surely, even if this is my genuine desire, I’m behaving unreasonably! The fact that I care more about colon pains than pancreas pains gives me no genuine reason to prioritize colon pains over pancreas pains. In fact, cases where you have a reason to do other than what you in fact do seem like paradigm examples of irrationality.
When my brother was younger, he had a strong preference for sandwiches to be cut into triangles instead of squares. Yet as he aged, he grew out of this. My verdict: he came to see that there wasn’t really any reason to care about the shape of the sandwich. He came to see that he was aiming at some things that he didn’t really have any reason to aim at.
I have a final gripe with the claim that it is an analytic truth that one has reason to pursue one's ends: it fails to make doing so the genuinely wise, rational, or sensible thing to do. Suppose someone claims that it is true by definition that the morally right action is the one that maximizes pleasure minus pain. I am skeptical of this semantic account. But even if it were correct, it would seem to give us no genuine reason to maximize pleasure.
If this is correct, it would tell us only that people use the word moral to refer to maximizing pleasure and minimizing pain. But why does that give us any genuine reasons? How people speak tells us nothing about the sensible way to behave. “Oh no, if I don’t act as a utilitarian, my actions will no longer merit certain folk-theoretic names!”
In similar fashion, suppose it is an analytic truth that you have most reason to do whatever it is you desire. All that would mean is that English speakers typically use the word “reason” to refer to people getting what they desire. But how does that make doing so the sensible or wise way to behave? If I'm deciding how to behave, why should I care whether I can aptly be described as rational, if “rational” is just a veiled term for a person who does what they aim at?
It might be claimed that one has no choice but to do what they want. Every time you perform an action, you desired to perform that action. Yet this account seems at risk of saying that people never act irrationally, for they always act in ways they want. To see where this goes wrong, we should distinguish between three things:
Inclinations: psychological states inclining one in some direction. These aren't voluntary, and they aren't subject to rational evaluation. An example would be having a desire to eat a bagel.
Choices: what one ultimately decides to do.
Aims: what things, in the world, determine what actions one takes. What, at a high-level, people are attempting to achieve.
Often these are conflated under the umbrella term “desire.” But once we distinguish them, we see that one's choices needn't accord with one's own aims. One necessarily acts on whatever one chooses, but one could, in principle, aim at something other than what one deeply cares about (say, at what one's neighbor cares about). Similarly, this distinction shows what goes wrong when people say there can't be stance-independent reasons because desires cannot be rationally evaluated. That is true in the above sense, if “desires” is used synonymously with “inclinations,” but it is not true of ultimate aims. One can reflect on one's aims and have reason to change them. You could, tomorrow, simply decide to ditch your ultimate aims and maximize the number of bullfrogs in the world instead.
Now, anti-realists often say that the irrationality, in the above cases, comes from the fact that you’re not acting in accordance with your long-term desires. But I don’t think this is adequate.
First of all, this doesn’t explain why my brother went wrong in aiming for triangle sandwiches. If that aim was no less legitimate than his aim to avoid agony, then it’s hard to see why that was irrational but avoiding agony is rational. Similarly, it does not explain cases like the colon case, where one simply doesn’t care at all about the thing in question—no matter how much agony it causes them. They may dislike the state that they’re in, in the sense that they find it unpleasant, but they have no genuine desire to avoid it, provided the pain comes from their colon. It also seems incompatible with the fact—which strikes me as obvious—that even if I had a strong desire to slam my hand against the desk, I would have no genuine reason to.
Second, it isn’t clear why one would be behaving irrationally if they did what was bad from the perspective of their long-term desires. Anti-realists hold that rationality does not mandate acting in others’ interests. But why would it mandate acting in my own interests long-term? If I simply do not care what happens to me in five years, then what error could I possibly be making? Unless, of course, I have irreducible reason to care about my future welfare.
It is similarly often claimed by anti-realists that your reasons come from your reflective desires. You have a reason to perform some act if you would want to perform the act after ideal reflection. My brother, it might be claimed, had no reason to cut the sandwiches into triangles, because if he reflected more, he wouldn’t want to.
This account, in my view, has a number of problems.
First of all, it cannot account for the intuition that you might, on reflection, have no desire to perform some act and yet still have a reason to perform it. If a person, after perfect reflection, had no desire to avoid future agony, it still seems they'd have a reason to avoid it. If a smoker were aware of all the pertinent facts and still wanted to smoke, despite smoking vastly lowering his welfare, his smoking would still seem irrational.
Second, it isn’t clear, given anti-realism, why I should care about my reflective desires. Other preferences do not, in general, work this way. The foods I should eat are the ones I like, not the ones that I’d like if I reflected ideally. What if I just don’t care about my reflective desires? Why should I pursue them?
Perhaps the answer is that I should not. But then the position seems even less plausible. If I desire to smoke, eat a car, or set myself on fire, but would stop desiring to smoke if I reflected more, then, on this account, it is rational for me to smoke or eat a car. What?!
Third, even if this account correctly specifies which things you have reason to do, it doesn't explain why you have such reasons! Why is the sensible thing for me to do whatever my idealized self would want to do? Whatever explanation is given will be analogous to explanations of the alleged fact that you have reason to do what you want to do—and will fall prey to the same objections.
Fourth, I don’t think ideal reflection is some stable, precise procedure. As Joe Carlsmith points out, the version of me upon ideal reflection is very unlike me. This version of me knows all the facts (and the world has a great many facts). He’s some practically omniscient, Godlike being, with a brain the size of a galaxy. It isn’t clear why I should care about whatever weird alien things this guy cares about. There’s no important sense in which he remains me.
But it also isn’t clear that there’s a narrow fact about what your idealized self would aim at. There are many possible idealization procedures. What you’d care about might depend on the order on which you learn the facts. So it just seems clearly unstable and subject to arbitrary facts.
The reason I think idealized values are informative is that I think there are things that are worth pursuing. There are aims that are worthwhile, even if you have no motivation to pursue them. Upon ideal reflection, you would discover and converge on those aims. But absent anything in the world that there's reason to aim at, if reasons are just given by whichever things we happen to aim at, it seems that we have no real reason to act. Anti-realists can perfectly well explain why people act in pursuit of their aims. They cannot, however, explain why anyone has a reason to do anything, even to act in accordance with their aims.
Perhaps it will be claimed that it's just a brutely normative fact that you should do whatever it is that you desire. This isn't true by definition but is rather a substantive claim about the sensible course of action. But if one is positing brute and irreducible normative reasons, then, in light of many of the cases discussed above, it seems sensible to think that you sometimes have normative reason to do something other than what you want to do. Once irreducible normativity is in the picture, it isn't clear why one wouldn't simply be a realist.
Many of the points I’m making in this post came from Both Sides Brigade’s awesome post!

I don't really understand what a "reason" is as you're using it here. It seems kind of like a bizarre normativity thing like a "(stance independent) should". Why should I believe it exists if I have no intuition that bizarre normativity things exist?
...I think there are explanations for why I do things. I might eat a sandwich due to causes like "I was hungry" and "my brain is a PID controller that tends to keep hungriness within certain bounds". This seems like a fine accounting of the world. Obviously you don't have a normative obligation to act on being hungry under this accounting, which I say is a virtue of my accounting, since normativity doesn't exist.
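The commenter's control-loop analogy can be made concrete with a toy sketch. This is purely illustrative and not from the post: the setpoint, gain, and update rule are invented, and for simplicity only the proportional term of a PID controller is used. The point is just that "eating when hungry" can be described as a mechanism keeping a variable within bounds, with no normative layer on top.

```python
# Toy sketch of the commenter's analogy: hunger as a variable kept
# within bounds by a (proportional-only) control loop. All numbers
# here are made up for illustration.

def regulate_hunger(hunger, setpoint=0.5, gain=0.8, steps=10):
    """Each step: 'eat' in proportion to how far hunger exceeds the
    setpoint, while metabolism steadily pushes hunger back up."""
    history = []
    for _ in range(steps):
        error = hunger - setpoint
        if error > 0:
            hunger -= gain * error  # eating reduces hunger
        hunger += 0.1               # metabolism increases hunger
        history.append(round(hunger, 3))
    return history

trace = regulate_hunger(hunger=1.0)
print(trace)  # hunger settles near a fixed point above the setpoint
```

On this picture the loop explains the behavior causally; nothing in it says the agent *ought* to eat, which is the commenter's point.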
Reasons-as-I-define-them are things that weigh on your decision. Strictly speaking, it doesn't make sense to say "X is a reason to Y" in the abstract, because X can only be a reason to Y for an agent whose decision procedure cares about whether X obtains when making decisions about Y.
I think cases where people act on reasons that seem like "bad reasons" don't pose any meaningful challenge to this. Clearly the fact that you didn't want to go to sleep on time was more capable of moving you than the fact that you had work tomorrow. You may now wish that your decision procedure had been different yesterday, due to how you are tired at work, but so what?
This sounds like word celery. When you apply common words to thought-experiment scenarios, you can get confusing results. The human mind is not easily modeled completely in symbolic terms.
The notion of reasons for actions seems to work fine on the anti-realist POV. "A: What's your reason for eating healthy food? B: I want to be healthy in the future." Communication took place; perhaps A and B are anti-realists.