49 Comments

I don’t think this sufficiently steelmans the moral relativist side — and leans on lots of loaded terms for rhetorical points.

The problem with your sentences that seem “obviously true” is that the word “wrong” hasn’t been defined.

A smart moral relativist defines “wrong” as “something I deeply abhor”. Perhaps even “something society generally abhors.” That’s it. Critically, it’s seen as a *preference*. It doesn’t exist outside of the subjective preference of the subject.

Whereas a moral objectivist sees “wrong” as “something inherently wrong, written into the universe, and I can reach the true, objectively correct, answer.”

Regardless of which is easier to swallow, I think the preference way of looking at this has a lot of points in its favor, in terms of which framework is objectively true in reality.


I'm mostly unpersuaded by the examples of irrational desires because they are often so unusual. It seems Huemer's earlier point about thought experiments applies here:

"Our intuitions about strange scenarios may be influenced by what we reasonably believe about superficially similar but more realistic scenarios. We are particularly unlikely to have reliable intuitions about a scenario S when (i) we never encounter or think about S in normal life, (ii) S is superficially similar to another scenario, S’, which we encounter or think about quite a bit, and (iii) the correct judgment about S’ is different from the correct judgment about S."

Future Tuesday indifference (and many of the other alleged irrational-desire cases) is a paradigm example of this.


"Additionally, we have good evidence from the dual process literature that careful, prolonged reflection tends to be what causes utilitarian beliefs — it’s the unreliable emotional reactions that causes our non-utilitarian beliefs."

I think this is not the best reading of the available evidence. Check out the Moral Myopia Model:

https://www.researchgate.net/profile/Justin-Landy/publication/311964487_The_Moral_Myopia_Model_Why_and_How_Reasoning_Matters_in_Moral_Judgment/links/58653ac008aebf17d397f279/The-Moral-Myopia-Model-Why-and-How-Reasoning-Matters-in-Moral-Judgment.pdf

I also think this is not great evidence for moral realism in any case, since the dual process studies universally contrast utilitarianism vs non-utilitarianism. But they don't contrast moral realism with anti-realism.


I found the arguments against all three anti-realist positions off the mark.

On subjectivism, you seem to confuse the evaluator and the person spoken about in the hypothetical.

So if you ask me, "David thinks murder is good; does it follow that murder is good?"

Since I am not a realist, I can only answer in two ways here: as I see it, or as I take it David sees it*.

David sees it as good, I see it as bad; which question are you asking? If your answer is "I am asking in general," I will just point out again that I am an anti-realist.

(The same issue arises if you switch out David for a hypothetical me: the "is it good?" either needs to target me or the hypothetical me; once that has been made clear, I can answer the question.)

* I could of course answer how someone else sees it but that would be really silly

On the error theory there just seems to be normative entanglement confusion. If you overheard a known anti-realist talk to some newbies and say "hey, error theory is super intuitive, I can say stuff like 'what Hitler did was not good' and 'there is nothing wrong about homosexuality,'" you would presumably want to step in and say "whoa, those sentences don't mean what you think they mean."

But in the same way, you are using sentences that make it sound as if an error theorist wouldn't be against some horrific stuff, or want to stop/change/prosecute whatever. I think that is confused.

I had the least concern with non-cognitivism, but lay people often treat agreement as fact (to the dismay of most philosophers). So we can find people saying that something tastes bad simpliciter (and even warning others that it tastes bad). On the surface these seem like declarative statements that aren't subjective. But here I think non-cognitivism is quite plausible. Also, real-world contexts are often more non-cognitivist than philosophical contexts.

Aside from that, I think combination hypotheses are the most likely: sometimes people do this, sometimes that. As Huemer says, people are very confused, and philosophy can make us less confused. I think lay people use moral language in confused and contradictory ways.

Indeterminacy is also a cool view.


"positing real moral facts explains the convergence, for example, in our moral views"

Whereas relativism explains the divergence!


> Now the anti-realist could try to avoid this by claiming that a decision is irrational if one will regret it. However, this runs into three problems.

It seems to me that there is an alternative solution: a decision is rational if you want to make it (in the moment when you make it) & irrational otherwise. On this principle, the future-Tuesday-indifferent person is rational in choosing to accept their suffering on Tuesday, because when they make that choice they don't care about their suffering on Tuesday; that their preferences will change on Tuesday does not contradict this. If you do something you don't want to do (obvious examples of this include procrastinating when you know you need to do some work to achieve your desires, or falling asleep when you want to stay awake), then you have acted irrationally, not because you have an irrational desire to do it, but because you do not desire it.

Your other examples in this section seem to have the same problem: if you define rationality in the normal economic way, as doing what you want to do, i.e. what you expect to produce the outcomes you want, then the people in the examples are acting according to their preferences, i.e. rationally; it's just that their preferences are strange enough that this is unintuitive. (Also, some of the situations described are similar to more probable situations in which a similar action is irrational: e.g., if I became allergic to grass, I might still pluck grass at first, not because I would want to do it & to receive the consequences of doing it, but because I would do it out of habit, without thinking about whether it's rational.)


> There are no serious philosophers that I know of who defend cultural relativism.

Some forms of social contract theory seem to amount to moral relativism, because they demand that a person follow the ethics of their particular society.

> Consider a few examples.

> Imagine the Nazis convinced everyone that their holocaust was good. This would clearly not make it good.

> Imagine there was a society that was in universal agreement that all babies should be tortured to death in a maximally horrible and brutal way. That wouldn’t be objectively good.

Under relativism it wouldn't be objectively good, because relativism denies that an objective good (regardless of the society you live in) exists. In your second example the torture of babies would be good according to that hypothetical society's values, but bad according to our society's values; this doesn't seem like a contradiction. (The Holocaust is a bad example because it was motivated not just by different values, but by wrong beliefs about the world. I.e. the Nazis falsely believed that the Jews were by nature enemies of Germany & that most or all of them were actively helping Germany's enemies; thus they devoted more resources to the Holocaust because they thought that, by killing the European Jews, they were eliminating some of their most powerful military enemies.)

> If it’s determined by society the following statements are false [...] “Some societal practices are immoral.”

This is consistent with relativism, because societies often do things that their culture considers immoral. (e.g. most modern Americans think war is generally wrong — an occasionally necessary evil at best — & yet America still fights wars.)


The principle that having an intuition that A is prima facie reason to believe A is wildly un-Bayesian.

Your degree of belief in A given that you have the intuition that A (which I will write I(A)) should be P(I(A)|A)/P(I(A)|~A) * P(A)/P(~A), converted from odds into a probability. So firstly, if you are as likely to have the intuition that A in cases where A is true as in cases where A is not true, your intuition that A tells you nothing. Secondly, in cases where A is already wildly unlikely or posits a very complicated world, an intuition may be evidence but not justify belief, because even though it makes A much more likely, it does not make it likely enough to "give a reason for belief," whatever that means.
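Here is a minimal sketch of that odds-form update, just to make the two points above concrete. The function name and all the numbers are hypothetical placeholders of my own choosing, not values anyone has defended:

```python
# A minimal sketch of the odds-form Bayes update described above:
# posterior odds = likelihood ratio * prior odds. All numbers are hypothetical.

def posterior_prob(prior_A, p_intuition_given_A, p_intuition_given_not_A):
    """P(A | I(A)), computed via the odds form of Bayes' theorem."""
    prior_odds = prior_A / (1 - prior_A)
    likelihood_ratio = p_intuition_given_A / p_intuition_given_not_A
    posterior_odds = likelihood_ratio * prior_odds
    return posterior_odds / (1 + posterior_odds)  # convert odds back to a probability

# Point one: if the intuition is equally likely whether A is true or false,
# the likelihood ratio is 1 and the probability does not move.
print(posterior_prob(0.30, 0.8, 0.8))   # 0.30

# Point two: if A starts out very unlikely, even an intuition that is four
# times more likely under A still leaves A improbable.
print(posterior_prob(0.01, 0.8, 0.2))   # ~0.039
```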

The best arguments for anti-realism rely on making both of these points. First, that you are as likely to have these intuitions in cases where moral realism is true as in cases where it is false. Secondly, that the world posited by moral realism is extremely queer, complicated, and unlikely. You didn't engage with these kinds of arguments much, and so I found this post totally unconvincing.


It seems to me that you're having a lot of trouble making yourself impartial enough when you're being an "impartial observer". I think you're smuggling a lot of assumed preferences into your analysis.

I also see that you're leaning heavily on intuitions about states of affairs being desirable or not in some of these "disconfirmatory" arguments for antirealism, and I would like to remind you that having your own opinions about events isn't a defeater no matter how absurd you think it is that it would just be your opinion.


I'll be writing a longer response, but I do want to comment on this part:

"I think Lance does — he’s just terminologically confused. When he reflects on his pain, he concludes it’s worth avoiding — that’s why he avoids it! I think if he reflected on being in pain even in cases when he wanted to be in pain, he’d similarly conclude that it was undesirable. "

It's one thing for you to write a post describing what you think. It's quite another for you to write a post describing what someone else thinks. It's one thing to say that I am mistaken or confused, but you shouldn't be telling other people what their conclusions are.

I've reflected on this topic for more than twenty years. There's no need to speculate on what I would think under these conditions, because I already have.

I've already reflected on cases of being in pain even when I wanted to be in pain, and no, I have not concluded that they were "undesirable." Likewise, when I reflect on my pain (which I have done quite a lot, since I have a chronic pain condition), I have not concluded that it's "worth avoiding."

If you wanted to know what my conclusions were about these situations, you could have just asked me. But it's strange to discuss my thoughts on the matter under the apparent assumption that I'd never reflected on considerations this basic. I most certainly have, many times. And I have not reached the conclusions or views you attribute to me. On the contrary, my reflecting on these matters has only served to further reinforce my antirealist views and increase my confidence that moral realists are mistaken.


"This was responded to above — when we reflect on pain we conclude that it’s the type of thing that’s worth avoiding, that there should be less of it"

When I reflect on my pain, I want less of it... but I don't feel other people's pain. So how do I get from hedonism to hedonistic utilitarianism?


That might be your best entry so far: very well written, with strong argumentation.

The only thing I would say is this: I don't think that this part

"So if you think that the sentence that will follow this one is true and would be so even if no one else thought it was, you’re a moral realist. It’s typically wrong to torture infants for fun!"

is strictly speaking true, because people like Korsgaard would say that this is true even though they aren't commonly classified as realists. I guess it depends on the exact definition of moral realism, which is of course very controversial.
