For reasons I’ve given before at length, I think morality is objective. Thus, I don’t think morality is a social technology any more than the sun is—it was here long before society, it will be here after society, and we did not create it. When dinosaurs died in agony and terror, long before anyone had any moral evaluations, that was deeply unfortunate.
But lots of moral anti-realists are fond of the phrase “morality is a social technology.” They are obsessed with this phrase. When, for instance, I argued for the modest proposal that insect suffering is the worst thing in the world, lots of people replied along roughly the following lines:
Morality is a social technology which we follow because it’s useful for facilitating cooperation among social primates. However, trying to extend morality to say crazy things totally outside of the context for which it evolved is asking it to do too much. While perhaps YOU—on account of being OBSESSED WITH BUGS—have a moral code that causes you to weep each time an aphid feels upset, the rest of humanity doesn’t and it’s pointless to try to argue someone into a moral system that they don’t hold.
I think everything about this is confused, even if you’re a moral anti-realist. It’s so confused that if confusion were fatness and the phrase were your mother, then your mother would be so fat that her weight would be considerable, such that she’d be advised to begin dieting and exercising regularly.
The phrase comes from conflating descriptive claims about how moral beliefs came to be with normative claims about what we should do morally.
It’s true, of course, that part of the story of how our moral beliefs developed is that certain moral beliefs were adaptive. If everyone in a society thought that it was great to hit little babies with hammers, likely that society would not last very long—and society develops norms to punish such behavior. (Some of us would also absurdly suggest that maybe part of the reason that almost everyone thinks it’s wrong to rape and torture people is that it really is wrong to rape and torture people—but I know that such an absurd suggestion is unlikely to be seriously entertained by the “morality is a social technology” people. These people have moved beyond bogus superstition like the notion that you shouldn’t set random people on fire for no reason—and this fact doesn’t just depend on your not liking setting people on fire).
But this doesn’t tell us whether the moral beliefs we should adopt are those that we evolved to adopt. Now I’m well aware that moral anti-realists deny there are such things as stance-independent reasons. They hold that morality is a matter of mere preference—there aren’t reasons to hold moral beliefs that don’t depend ultimately on one’s own judgment.
But if this is right, then the morality we should adopt is a function of our judgments. If so, why the hell should anyone care about the social function of morality? Suppose I’m a utilitarian anti-realist. I recognize that my set of moral beliefs is out of accordance with those of society. But why should I care? I have many aesthetic evaluations that are out of accordance with society. I think that

[image]

is funny and that Fahrenheit 451 is the single most boring book ever written! If morality is a matter of preference, then its evolutionary history is totally irrelevant!

Now, one could reply that if morality is a matter of preference then it’s hard to see how you’d be able to convince people of moral judgments. You can’t normally talk people out of preferences or aesthetic judgments. One of the funniest passages ever—written by the Chinese philosopher Mencius—reads as follows:
Mouths have the same preferences in flavors. Master Chef Yi Ya was the first to discover what our mouths prefer. If it were the case that the natures of mouths varied among people—just as dogs and horses are different species from us—then how could it be that throughout the world all tastes follow Yi Ya when it comes to flavor? When it comes to flavor, the reason the whole world looks to Yi Ya is that mouths throughout the world are similar.
Ears are like this too. When it comes to sounds, the whole world looks to Music Master Shi Kuang. This is because ears throughout the world are similar. Eyes are like this too. No one in the world does not appreciate the handsomeness of a man like Zidu. Anyone who does not appreciate the handsomeness of Zidu has no eyes. Hence, I say that mouths have the same preferences in flavors, ears have the same preferences in sounds, eyes have the same preferences in attractiveness. When it comes to hearts, are they alone without preferences in common?
Despite repeatedly trying to talk people out of finding Zidu handsome, I simply could not! Despite repeatedly trying to talk people out of enjoying the flavors of Master Chef Yi Ya, I failed completely. Aesthetic preferences are not generally swayed by rational argument.
(As an aside, I find it absolutely hilarious that we know basically nothing about Zidu other than that he was so handsome that Mencius thought that his existence demonstrated the objectivity of beauty).
Behold: the ideal man!
But moral preferences are a bit different from other kinds of preferences even if moral anti-realism is true. Moral preferences aren’t just a function of the degree of enjoyment you get from different things—they have an evaluative character. They’re directed at other things in the world. They’re not just a function of our own enjoyment, the way our preference for cake is.
Suppose that there’s an anti-black racist. He thinks the interests of black people matter less than those of white people. I think it would be a problem for anti-realism if it held that persuading him is impossible and that moral arguments shouldn’t change his mind. We should hold that most people like that should change their minds if they reflected more. At the very least, we should hold that many of them, if they reflected more, would in fact change their minds. People can be talked out of moral positions—even anti-realists should grant this.
How would one go about talking this person out of his views? I think I’d argue roughly along the following lines:
The thing your moral evaluations are being determined by—skin color—seems morally arbitrary. If my skin simply changed color, it seems weird that this would affect my moral worth. So it seems like you’re placing weight on something obviously morally irrelevant.
Perhaps he would then say that what he really cares about is not skin color intrinsically but criminality. He thinks black people are more prone to criminality. Then we could debate whether this is actually true and whether—even granting that it is—the fact that a group is more prone to criminality on average is a good reason to take its non-criminal members to be morally unimportant. Men commit more crimes than women, but presumably he wouldn’t claim that men don’t matter.
Now, I don’t know if I would actually convince such a person. But it seems too quick to just handwave away the possibility of persuasion with the thought-terminating cliché “morality is a social technology.” Even if morality ultimately boils down to preference, one can sometimes come to see that one’s moral evaluations are a byproduct of preferences one doesn’t actually endorse.
This is what I hope to bring about when I argue for other unintuitive moral claims. When I argue that insects matter a great deal, I’m under no illusion that people actually care about insects. What I think is that people reject insect welfare for no particularly good reason. If they thought about the subject more, I think they might see that. The reason they reject the significance of insects is that they harbor various ill-thought-out biases against creatures that are small, funny-looking, and that they don’t naturally empathize with.
I also think there are other judgments that, if they reflected on them more, they could come to see are in conflict with their apathy towards insects. For instance, most people seem to be opposed to intense agony. All else equal, they think that if there’s more extreme agony in the world, that is unfortunate.
But people ignore their general opposition to agony when it implies that insect suffering matters. If they thought about it more, I think they could see that the factor that makes them oppose other kinds of agony—the fact that it feels bad—should also make them oppose insect agony. The reason it’s bad when, say, babies suffer isn’t that they’re smart (they’re not) or that they’re human (a creature’s species doesn’t seem to affect how morally serious it is when it feels pain). The reason is that it hurts, and it’s bad to hurt. But if insects can hurt too, why in the world should we ignore their pain?
Now, I think this appeal is a bit easier to make if someone’s a moral realist than if they’re not. But even moral anti-realists should be potentially persuadable. We’re not dogs—perpetually jerked around by our emotional reactions. We can reflect on what truly matters to us and change our aims when we see that we’ve been drawing clearly irrelevant distinctions. Morality may be social in origin, but that cannot be a blanket excuse for ignoring every counterintuitive moral appeal.
While “anti-realism” is used in many ways, contemporary writers in the broadly anti-realist tradition (Blackburn and Gibbard, and their fellow travelers) tend to go out of their way to make sense of the possibility of moral argument. That is, they don’t like the emotivist idea that all we can do is just shout our values at each other, “boo” and “hooray” style, with no room for rational persuasion.
They have a variety of ways of doing this. Maybe it involves bringing out tensions in our values, or showing us that we have not only first-order values but also meta-values—valuing whichever first-order values are produced by certain sorts of processes (e.g., empathetic reflection, or stuff along those lines)—such that we can make a case that we *would* value certain things if we changed along dimensions that we *already* recognize as improvements.
And it’s not clear to me that things are all that different for the moral realist. Even if you and your interlocutor agree that morality is objective, in convincing them that they’re making a mistake about objective morality, you have to appeal to some views they already have about what objective morality requires, which will look a lot like (what the anti-realist interprets as) appealing to values they already have, or meta-values...
Basically, I agree that nobody, no matter what their meta-ethics, should think there’s a quick route to blocking the possibility of reasonable argument-induced value shift.
Sometimes I wonder if part of the reason moral anti-realism is so popular is the folk logical positivism (in the words of Tim Keller) people have, where any concept that has an obvious social element is automatically ontologically suspect. More than anything else, I feel like people just need to learn to be more discriminating in the criteria they use to evaluate different concepts.