Philosophy Engineered is an engineer with some very confused thoughts about philosophy. He is an avowed logical positivist who, when presented in a Twitter exchange with the fact that logical positivism is almost totally extinct because of numerous insoluble problems, bizarrely claimed that logical positivism is alive and well, pointing to the fact that most philosophers affirm the analytic-synthetic distinction. Anyone even moderately versed in such matters will know that affirming the analytic-synthetic distinction is not the same as affirming logical positivism.
Yet Mr. Engineered (henceforth PE) wrote an article about morality. In it, PE makes some very confused claims with utter confidence, asserting that pretty much everyone not named Philosophy Engineered is totally wrong about morality.
PE starts by saying:
But what is morality, really? Because for all this talk about morals and values, it's surprisingly rare for anyone to actually break them down into rigorous, coherent terms. So let's begin with the simple observation that the core of all morality is implicitly defined by choice. That's why we only tend to punish people for things they consciously decide to do or not do, and never for things that just happen.
I don’t quite know what he means by morality being “implicitly defined by choice.” Morality certainly has something to do with what people should choose to do. Yet that’s not all of it: a satisfactory moral theory should also tell us that tornadoes are bad, for example.
But it's also equally important to realize that choice itself has no practical meaning unless one is trying to actualize some desirable outcome. "Good" and "right" choices are those which can reliably produce a specified result, while "bad" and "wrong" choices ultimately fail in that goal.
This assumes consequentialism, with no argument.
So which goals are specifically "moral" in nature and which ones are not? This is another one of those sticky philosophical issues that sparks all kinds of academic debate to this day.
A goal is moral if we have impartial reason to have it. We have impartial reason to have the goal of promoting happiness — we don’t have impartial reason to have the goal of tormenting innocent people.
Yet despite all the contention, most people do tend to agree that any coherent concept of seemingly "moral" behavior must revolve around some kind of ultimate, social interaction.
This is false. While morality does tend to promote social interaction, that is not a necessary truth. There are possible worlds in which society is very evil, so that morality dictates acting to undermine cooperative society. If a society were built on inflicting mutual misery, it would be good to undermine its cooperation.
Morally "good" choices tend to manifest through desirable, pro-social consequences while morally "evil" choices are those which tend to do the opposite. But no matter what the specifics may be, it's important to always bear in mind that the whole notion of morality itself is utterly meaningless and irrelevant without some form of consequentialism at its foundation.
No argument is given for this claim. PE shoehorns in consequentialism without arguing that consequentialism is correct. But if it turns out that, for example, we really do have decisive reasons to care about rights, then we shouldn’t be consequentialists. PE tries to generate a moral system without arguing that his morality is correct, a totally doomed endeavor.
Strangely enough, however, most Christian philosophers actually reject this principle outright, claiming instead that morality is an objective feature of the universe itself, like the law of gravity or the charge of an electron; that even if the entire human race went extinct today, then certain laws of morality would still be absolutely true and universally binding on all sentient beings across the cosmos.
PE seems very confused here. One can be both a consequentialist and a moral realist, so this is a false dichotomy. The real dichotomy is between PE’s view, on which morality depends in some way on people’s attitudes, and the Christian view, on which it does not. One could hold that everyone would still have reason to do what produces the best outcomes even if there were no people, just as the sentence “people are the type of thing that has lungs” would be true even if there were no people.
Because to say that anything is morally "good" or "evil," in and of itself, without any reference to goals or consequences, is just incoherent gibberish.
This claim is often made by anti-realists, and I have trouble taking it seriously. Is the claim “people have rights” really incoherent? I think it’s wrong, certainly, but incoherent? PE claims that morality must be consequentialist as a matter of semantics, but this is bizarre. Deontology is substantively wrong, not ruled out by definition.
To say something is good is to say that it’s worth promoting or bringing about, and that the world is better when it is in it. I have trouble believing that PE gets confused when people say “suffering is bad but happiness is good.” Much like we all know what tables are, we all know what it means to say something is good — or at least, all competent speakers of English do.
For example, just stop ask yourself: what on Earth is an objective moral value supposed to look like? Like if some guy were to say to you that, "human life has objective value," or that "human life is objectively good," what does that even mean? Is "goodness" supposed to be some kind of radiant intensity that just emanates from human beings, simply by the mere virtue of living?
What’s supposed to be confusing about this? To say that human life has objective value is just to say that it’s good that there are people, that life is worth promoting, and that the universe is, all else equal, better for each additional person. Goodness is a property like redness; adding in a discussion of “radiant intensity” is just a strange attempt at a smear.
Can we quantify this goodness and measure it with moral thermometers? If so, then what's the standard of calibration?
Well, presumably the person who thinks human beings have intrinsic value thinks there’s some amount of value that they have, measurable at least in principle. So while they wouldn’t think you use thermometers, presumably they’d think you look at the number of years people are alive and multiply those by some moral factor. What’s confusing about this?
Does a cow's life possess objective moral value, as well? Or a squirrel's? How many squirrels does it take to equal the moral value of one human?
This would depend on the particular view. Maybe a person has a dozen times the intrinsic value of a cow, maybe eight times; any ratio could figure in a model. Now, I don’t agree with these views. I don’t think that beings have intrinsic value in the way most people mean that. But it’s at least a coherent, though I think false, notion.
How the hell are we supposed to empirically verify any of this in any functional capacity?
Well, I don’t think we could test this empirically, any more than we can empirically test much of abstract mathematics. But we do have methods for forming beliefs in the moral domain, involving careful reflection and the pursuit of reflective equilibrium.
Obviously, we can't. Because any time we say a thing has value or that a thing is good, we're not talking about some intrinsic physical quality of the thing itself. Technically, what we're really saying is that somewhere, somehow, a subjective agent has arbitrarily decided to place value on that thing in the form of a preferential desire with respect to other things. That's why absolutely nothing in the entire universe can possibly have objective moral value because the very idea itself is an oxymoron! It's like trying to ask what the "objective value" is for a dollar - there isn't any! Value does not exist without some value-er to do the value-ing. So to say that human life has value simply means that, if given a choice, subjective agents will tend to behave in such a way as to promote and preserve human well-being over the alternatives.
This would mean the sentence “if everyone approved of the Holocaust, it wouldn’t be wrong” is true. It is, however, false. It would also mean that the sentence “animal life is valuable even though no one values it” is a misuse of the English language: not merely false but semantically incoherent. Yet this sentence seems perfectly coherent; people can fail to value valuable things. Torture would still be wrong even if everyone approved of it.
There’s no argument given for this implausible form of subjectivism. There’s just scoffing at moral realism followed by an assertion of the subjectivist thesis.
What about the claim that there exist such things as objective moral duties? That is to say, things we "ought" to do and things we "ought not" do. Well, again, to say that anyone ought to do anything is to say that there exists some desirable state of affairs that can be conditionally actualized through specific actions. For example, if we desire to raise our children into happy, healthy, well-adjusted adults, then it necessarily follows that we probably ought not torture them in their infancy. However, if we have no interest whatsoever in promoting the health, happiness, or emotional well-being of children, then there really is no good reason for us to refrain from torturing babies, now, is there?
PE’s view commits him to saying that if everyone approved of torturing infants, it wouldn’t be wrong. This is clearly absurd. Furthermore, it would mean that if Hitler’s sole goal was to cause Jewish suffering, then when Hitler proclaimed “Jews should be killed,” that sentence was true, for killing Jews would have achieved his values. This is deeply implausible. It’s not just substantively wrong, as error theory holds morality to be; it’s a clear misuse of the English language. On this view, one who says “X is morally wrong, though I approve of it” is speaking gibberish, like one who says “I like X, though I don’t like X.”
This also means that arguments about morality don’t involve genuine disagreement. If when I say “murder is wrong” I really mean “I don’t like murder,” and when you say “murder is right” you mean “I like murder,” then we’re not disagreeing any more than people who like chocolate disagree with people who like vanilla. I, of course, wouldn’t disagree with the deontologist’s claim that they like deontology. Thus, when I proclaim “utilitarianism is correct,” that could be true while my friend’s statement “deontology is correct” is also true.
This is why we get to call the fascist Nazis "bad" and peaceful egalitarians "good." Because if our desire is to live in a happy, safe, productive society (which most of us generally do), then it is an objective fact that violently antagonizing our neighbors is counterproductive to that goal. However, if we have no desire whatsoever to form peaceful, cooperative, and mutually beneficial relationships with those around us, then there really is no good reason to refrain from rampant genocidal aggression, is there? Just don't act surprised when vast, national-scale resources that could have been spent improving infrastructure and funding innovations must instead be spent fighting off people who want to annihilate our culture.
Well, the Nazis actually are bad, and peaceful egalitarians are really good. This would be so even if I agreed with the Nazis, just like the sun would exist even if I denied its existence.
Thus PE provides a merely descriptive account of morality, one that describes human cooperative goals. But this doesn’t capture very plausible claims about morality: namely, that some moral claims don’t depend on our attitudes, and that torture and genocide would be wrong even if everyone approved of them. This amoral theorizing totally ignores the important questions about what matters in favor of pondering banal descriptive claims.