While "anti-realism" is used in many ways, contemporary writers in the broadly anti-realist tradition (Blackburn and Gibbard, and their fellow travelers) tend to go out of their way to make sense of the possibility of moral argument. That is, they don't like the emotivist idea that all we can do is just shout our values at each other, "boo" and "hooray" style, with no room for rational persuasion.
They have a variety of ways of doing it; maybe it involves bringing out tensions in our values, or showing us that not only do we have first-order values, but we also have meta-values, of valuing whichever first-order values are produced by certain sorts of processes (e.g., empathetic reflection, or stuff along those lines), such that we can make a case that we *would* value certain things if we changed along dimensions that we *already* recognize as improvements.
And it's not clear to me that things are all that different for the moral realist. Even if you and your interlocutor agree that morality is objective, in convincing them that they're making a mistake about objective morality, you have to appeal to some views they already have about what objective morality requires, which will look a lot like (what the anti-realist interprets as) appealing to values they already have, or meta-values...
Basically, I agree that nobody, no matter what their meta-ethics, should think there's a quick route to blocking the possibility of reasonable argument-induced value shift.
Very much agree!
1. Imagine an obstinate agent that doesn't change their mind on anything you classify under morality.
2. ???
3. The claim that everybody's mind can be changed via moral argumentation is defeated by facts and logic.
I definitely agree that it's possible to be, de facto, unpersuadable. I'm making a normative claim, not a descriptive one. There's no general purpose argument available to the effect that one *has no good reason* to change one's values, of the sort that argument might reveal, whether we understand "good reason" along realist or antirealist lines.
Unrelated: you’re a physicalist and moral anti-realist right?
Physicalist yes. For morality, I like broadly Blackburn/Gibbard style views. Don't know if they should be called antirealist.
Cool :). Asking because I'm about to write an article called "No Views Have A Monopoly On Smartness," about how there are smart people who believe everything--and you were one of my go-to examples of a smart person who is reductionist about loads of stuff.
Sometimes I wonder if part of the reason moral anti-realism is so popular is the folk logical positivism (in the words of Tim Keller) people carry around, where any concept with an obvious social element is automatically ontologically suspect. More than anything else, I think people just need to learn to be more discriminating in the criteria they use to evaluate different concepts.
Ignoring the main point and talking about insects: it is entirely coherent to assume that insect pain is orders of magnitude less morally significant than human pain. For example, it’s likely that insects are either not conscious at all or only a very little bit. You only need a few orders of magnitude for the problem to go away.
There are so many insects that you would need quite a few orders of magnitude to make their suffering negligible.
This is true. There are about 9 orders of magnitude more. But consider the following:
- We should only consider suffering that humans could feasibly impact, not all insect suffering
- Unlike humans, it is not clear that an insect remaining alive is particularly good. So early death may not be significantly worse than later death.
- Humans have about 5 orders of magnitude more neurons than insects. That still leaves 4 orders of magnitude. But it’s reasonable to assert that consciousness is a highly nonlinear function of neurons.
I don’t think we know much about how intense insects’ experience is. But if we think there’s a 1% chance that it’s 1% as intense as ours, it’s still really important.
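A quick back-of-the-envelope sketch of that expected-value point. All figures here are illustrative assumptions, not data: roughly 10^19 insects (about 9 orders of magnitude more than the ~8 billion humans), a 1% chance of sentience, and 1% relative intensity:

```python
# Expected-value sketch; every number below is an illustrative assumption.
INSECTS = 1e19           # rough global insect count (assumption)
HUMANS = 8e9             # rough global human population

p_sentient = 0.01        # assumed chance insects have experience at all
relative_intensity = 0.01  # assumed intensity relative to human experience

expected_insect_weight = INSECTS * p_sentient * relative_intensity
human_weight = HUMANS * 1.0

print(expected_insect_weight / human_weight)  # → 125000.0
```

Even after discounting by four orders of magnitude, the expected aggregate weight of insect experience comes out five orders of magnitude above the human total under these assumptions, which is why the nonlinearity of consciousness in neuron count ends up doing so much work in the argument.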
The article seems not to argue for its title.
I think the criticism of you that you quote is still fine. Basically I read it as an appeal to adaptiveness, which has always worked, though as you say ("why should I care") it's not final, and you can refuse to accept the conclusion even if you understand the argument (much as if there weren't an outside objective standard of morality that entities can always eventually come to agreement on).
You seem not to understand the difference between "moral subjectivism" (the belief that morality is just a matter of individual preferences) and "moral social constructivism" (the belief that morality is a matter of the aggregated preferences of members of society). As a result you write arguments against moral subjectivism but fail to engage with moral social constructivism, while mistakenly thinking that you've disproven it. Essentially, you argue against a weakman here.
For instance:
> These people have moved beyond bogus superstition like the notion that you shouldn’t set random people on fire for no reason—and this fact doesn’t just depend on your not liking setting people on fire
Under moral social constructivism the statement "you shouldn't set people on fire for no reason" is completely coherent. It refers to a social consensus about the matter: an external fact about the world, not just your own personal opinion. Even if you personally want to set a person on fire, you can still coherently conceptualize that you shouldn't do it.
Moreover, you can even conceptualize a world where everyone agrees that setting people on fire for no reason is great, and still say that one shouldn't set people on fire. The trick is that your notion of "shouldness" is a social construct in our world where people agree that setting people on fire for no reason is a bad thing, while the people from the imaginary world have their own concept of "shouldness" that is different from your own.
Now, you can still disagree with this approach, but you would need much better arguments than you currently provide to persuade anybody who agrees with it.
> For instance, most people seem to be opposed to intense agony. All else equal, they think that if there’s more extreme agony in the world, that is unfortunate.
I have no problem with the overwhelming majority of “intense agony.” If you told me tomorrow that the amount of “intense agony” we know about had increased a thousand times over, I would be totally indifferent.
Okay, let’s take moral truths as objective truths. Do those moral truths have prerequisites? You don’t see the existence of human beings as a factor, since dinosaur suffering is immoral on its own terms. Is the existence of sentience required? And are moral truths only those that impinge on sentience? Can a lifeless universe be a moral or immoral one? Was suffering bad before any creatures existed that could suffer?
If one accepts that all truth exists forever, independent of the context that defines or justifies it, then reality is currently awash with infinite amounts of moral truths that are incoherent in the current context but may someday cohere. On the other hand, if moral statements come into existence or become true only when reality aligns to necessitate them, then it’s hard to deny that they depend at least a little on one’s frame of reference, even if they’re not purely social constructs.
Moral truths don't depend on physical facts. Even before there were any dinosaurs, it was true that if there were to be dinosaurs, their suffering would be bad.
Got it, thanks! That makes sense to me as a stance. With that in mind, I guess one way that moral truths seem different from others is how they function. They’re not cleanly algebraic in the same way.
For example, I have a clear definition of “gravity” in my mind — the force that attracts two bodies of mass. That definition could exist in a world without matter, because if two bodies of mass came into being gravity would work on them. Whenever I talk about gravity, I could swap the term out for its definition to show a clear cause and effect.
On the other hand, it’s hard to do this for moral truths like insect suffering because I’m not sure how to define “bad” or “wrong” in the context-independent system you’ve established. If we both swapped these terms out for what we thought they meant, I’m not sure we’d have any consensus. It’s more than “unpleasant,” surely, but maybe less than “harmful to the smooth operation of the universe.” The closest catch-all I can think of (which I happen to also personally align with) is “makes God unhappy.” But now we’re in the theological realm, and need to parse God’s morality to best align with it.
Without prerequisite context, a clear cause-and-effect syllogism, or an omniscient arbiter to keep an inscrutable score, it’s hard to measure the stakes in absolute terms.
It's not quite right to say that that (hilarious, iconic) passage was *written* by Mencius. As with most other philosophical "Masters texts" of the pre-Qin period, the stories and dialogues collected in the text bearing Mencius's name are widely agreed to have been written by disciples and later followers of the Master, then compiled and edited by scholars in the Han dynasty. Interestingly (and I hope this philosophical upshot justifies the pedantry...), this means that these texts cannot be naively taken to represent a single coherent philosophical vision: they draw together developing and sometimes competing arguments and ideas, and the philosophical interpreter has the task of reconstructing the dialectic by tracing connections of debate and response across the many small units out of which the chapters/books of these texts were built. (This also explains why the organization of a text like the Mencius or the Zhuangzi can seem a bit random--it's a bit like if some librarian 600 years from now took a sample of posts from the philosophy leaderboard on Substack and redacted them into a single work called "Master Zizek".)
Minor note: your post is good, but the title is all wrong. Of course morality is a social technology! And a very useful one. It's just that it's not *only* a social technology.
As a moral anti-realist myself, I want to note that you have successfully persuaded me about the importance of insects. And other people in the past have successfully persuaded me of various moral values (and some have failed, and some have been persuaded by me).
I don't see why people go from "moral values aren't objectively true" to "there is no morality". I'd retort that moral values are subjectively true, which gives me 49% of what you can do with moral realism. I'd then note that human values, and standard human moral reasoning, both have evolutionary origins. This means that we can have productive conversations about them, adapt and improve them (according to broadly shared standards of what constitutes improvement; though these standards can also be discussed and meta-improved). This gives me another 49% of moral realism.
The missing 2% are arguments that explicitly rely on moral facts being objective. But a lot of those arguments aren't convincing even to other moral realists, so I don't think I'm losing much by stripping out the "objective" part of the claim and seeing what's left.
Pain is Pain. Except that it may not be. At what point would we have to say that a robot experiences pain? When it passes a pain Turing test? How similar or different to a robot would an insect be? A bacterium?
What if every human had always enjoyed a peaceful and acceptably enjoyable life of 80 years, and then died in an unavoidable hour of agony? Would we say that would be better than the billions of humans never existing at all? Let’s say that each human knows by the time he or she is 6 that such is the case, and could choose to be euthanized. What would we think of those who chose death? Would parents be justified in pleading with their six-year-old to choose to live? Why? To go new-agey, what if each human spirit got to choose, pre-conception, whether to be born and enjoy their eighty years, or to cease to exist and avoid the pain at the end?
Now we can think about non-anthropogenic animal suffering. Has the great mass of animal existence been more pain than peace? Or has it been mostly peace and enjoyment, with pockets here and there of mass suffering for short periods of time, and ubiquitous but relatively short lived suffering at end of life? If it is and has been 51 percent (as a weighted calculation— perhaps pleasure or peace is more significant than the same ‘amount’ of pain) suffering, then shouldn’t we be working very hard toward universal animal extinction? Probably not. But why not?
And where does mental capacity come in? Interestingly, Christianity has an account of an infinite mind suffering obscene injustice and one of the worst examples of a painful physical death ever conceived by bent human minds. Is this ultimate suffering? The better (having more great-making properties?) a being is, the more capacity it has for suffering? If so, how (or how much) do lesser beings suffer?
I am a little skeptical that animal life over the past billion years or so has been mostly suffering. I tend to think that a being’s capacity for suffering is determined by its capacity for pleasure or peace. If I cannot conceive that a house fly is experiencing pleasure like I do, or like my dog does, or like the snowboarding crow does, then should I think that its suffering could be categorized as agony?
This leaves aside the idea of an aggregate of suffering, and whether there is such a thing.
So much to consider.
"Thought-terminating cliche" is a thought-terminating cliche.
I mostly agree with the core argument, but there is NOTHING in the world where history and evolution are not important. There is no orthogonality between normative ethics (why we as individuals feel compelled to be moral) and the Darwinian emergence of ethics. Every book on ethics should begin with an analysis of ethics as a natural phenomenon.
https://forum.effectivealtruism.org/posts/aCEuvHrqzmBroNQPT/the-evolution-towards-the-blank-slate
From the above link:
The division of reality between material phenomenal reality and the conscious subject also divides our sciences between those describing the mind (linguistics, mathematics, and philosophy and psychology when their focus is the description of self-reported states of consciousness) and the phenomenal natural/social sciences.
This division is especially relevant for moral philosophy. On one hand, moral behavior is a natural phenomenon that exists because of biological evolution and is studied by sociobiology and cultural anthropology with the help of the game-theoretic formalism. On the other hand, moral action is experienced by the conscious subject as a personal decision. Moral philosophy belongs to the noumenal side of reality and mostly answers the following question posed by a conscious subject: beyond personal preferences, what obligations shall I honor, and why?
Everyone I’ve met who read Fahrenheit 451 absolutely loathed it, so I don’t think that opinion is that out of whack.
"I think it would be a problem for anti-realism if it holds that persuading him is impossible and that moral arguments shouldn’t change his mind."
There's no problem at all. Persuading him is very possible; it just means relying on arguments that are not solely logical. Emotionally or materially, there may be reasons for him to change his mind.
I think a lot of people primarily care about moral things out of emotion, self-interest, or conformity, and they genuinely struggle to imagine minds working differently.