10 Comments
Dominik:

I agree that the view you outlined makes sense internally. I even think you articulated it very well. Still, I obviously disagree that it's the correct view. When I think of moral deliberation, it strikes me as obvious that the property "being wrong" is not like "being poor" but more like "being free", in the sense that while x can be even freer than y, there is still always a fundamental fact of the matter as to whether someone is acting freely or not.

I am honestly somewhat bewildered by the "counterexample" to obligation and supererogation that you give in this article: https://benthams.substack.com/p/an-objection-to-this-whole-supererogation?utm_source=publication-search I was expecting an interesting case, as your cases are usually very thought-provoking. But in this case... I am very confident not just that I think you should do neither, but that this is clearly the common-sense view. Another commenter told you the same thing ("not a judgement shared by many people"). So instead of being a counterexample, it's actually a case that favours deontology.

[Comment deleted (Apr 24, edited)]

Dominik:

"Almost certainly, if asked, the child would consent to this scenario" - The child couldn't consent to the maxim "If I like it, then I beat up some random child that I saved" because a beating - by the very nature of the act - is something that happens to someone against their will. This is just a basic misunderstanding of Kantianism, reminiscent of Parfit in On What Matters.

"Rawls suggests reflecting from behind a veil of ignorance on what we would want a just society to look like" - Right. I don't think we should be Rawlsians about morality and neither do most Kantian ethicists.

"Basically, I’m guessing that if you asked serious philosophers who advocate deontology" - What? Do you genuinely think Korsgaard would approve of beating up another human being as long as you saved them beforehand ("approve" in the sense of saying that it's not a violation of a perfect duty), have you read ANYTHING by her? lmao

Anton:

What I especially appreciated here is the acknowledgment that blameworthiness isn’t about technical violations of some moral statute—it’s about deviation from expected effort, given your circumstance and capacity. That’s a concept we don’t talk about enough: moral exertion. What costs you little might cost someone else everything. And that differential should shape how we interpret actions, not just the outcomes.

Also love the rejection of “moral threshold obsession.” Too many people treat right and wrong as moral binary code—like there’s a cosmic green checkmark waiting if you hit 80% Utilitarian Purity. Your take—that moral worth lives in degrees, not categories—feels like the kind of lens that could reduce a lot of unproductive moral self-flagellation and smugness.

Rajat Sirkanungo:

Indeed. Also, one puzzle someone might have trouble with is how a consequentialist should think about the divine. One solution is a kind of satisficing consequentialism, combined with the claim that God always effortlessly meets the satisficing bar for himself and does much more. If the satisficing bar scales linearly, then God's good actions can scale exponentially, such that God is accelerating the goodness in the world at exponential speed!
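As a rough sketch of that scaling claim (with b, g, and r as arbitrary positive constants introduced here purely for illustration): if the satisficing bar grows linearly over time while God's output grows exponentially, then God clears the bar by an ever-widening margin:

\[
B(t) = b\,t, \qquad G(t) = g\,e^{rt} \quad (b, g, r > 0), \qquad \frac{G(t)}{B(t)} = \frac{g}{b}\cdot\frac{e^{rt}}{t} \to \infty \ \text{as} \ t \to \infty.
\]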

Haydn:

"If you have ten trillion dollars, while giving a million dollars to charity is a very good thing to do, you don’t get many virtue points as it requires almost no sacrifice from you. In contrast, if you’re poor but still donate, because doing so is psychologically taxing, you get virtue points." If you're poor and don't try to get rich (in expectation), like most poor people, this seems unlikely.

sean s:

Re. "... the term right has different meanings and ..."

And right there is the reason these kinds of discussions seem interminably futile. Without basic agreement as to the meaning of words, philosophy becomes futile. So many words, so many ideas, so few results.

These different meanings are one way those who seem obviously wrong escape condemnation. The very idea of "right vs. wrong" is downgraded to mere opinion.

This is why deontology doesn't work; it's barely distinguishable from plain old magical thinking. Its rules are matters of opinion, and little more.

The only principle I can find to put some distance between opinion and "right vs. wrong" is a mixture of Mill's harm principle, Kant's categorical imperative, and some old-timey religion. The best statement of this I know of is the familiar Golden Rule. There are other variations, from Hillel's teaching to the Hippocratic oath. Some form can be found in the teachings of nearly every religion or culture on our planet. In Anglo-American jurisprudence it sometimes appears in the principles of equity.

These are consistent with the "Consequentialist" notion that you should just strive to do as much good as possible without worrying about what you’re “allowed” to do morally. They are also consistent with the notion that the right is about what the best thing you could do is. They are consistent with the notion that Ted Bundy, by killing people, acted wrongfully because he caused great harm to others.

Re. "In the eyes of God, the man who upgrades his car when he could have saved several children is pretty shitty."

Does any god exist? IFF one does, is it any better than Ted Bundy? Both questions have the same answer: we don't know. Until we do, how we are seen "in the eyes of God" invites a retreat into opinion. "In the eyes of" whose god? That argument seems "deontological".

What is the point of even thinking about these questions? If it's to justify past actions then it's an empty exercise. If the point is to understand what we should do (or should have done) then the fact that none of us are saints is irrelevant. That we escaped condemnation is irrelevant. What matters is what we ought to do in the future. Along with not being saints, none of us are omniscient. Learning and improvement should be encouraged.

Gumphus:

Great article! I'm partial to a consequentialist account of rightness and wrongness where right choices are those that reliably and foreseeably have good outcomes (in the long term, all else being equal). In this sense, claims about rightness operate more like a heuristic principle, or a rule of thumb, developed from empirical inquiry into the sorts of consequences a type of choice tends to have.

So for instance, in my opinion it's wrong as a general principle to hit children with buses, because doing so reliably and foreseeably causes harm - and we can't reliably or foreseeably pick out cases where hitting some kid with a bus (say, a young Hitler) could actually have good consequences in the long term.

This is also why we might say, even within a consequentialist framework, that it's "wrong" for a surgeon to kill their patient and harvest their organs, even in light of a conceivable edge case where doing so saves five people - because murdering one's patients reliably and foreseeably has bad outcomes, and we can't reliably or foreseeably identify cases where doing so would lead to good outcomes. And this is why, for the thought experiment to "work" at all, all sorts of certainties must be stipulated - the surgeon knows the transplants will succeed, knows they won't be caught and ruin the reputation of the hospital, knows there is no other way to save the other patients, etc. etc. It's then, when we apply our rule-of-thumb principle to these counterintuitive premises, that we get a counterintuitive result.

Ibrahim Dagher:

Hm, I don’t think the consequentialist actually has to buy this highly reductive account you’ve offered. I tend to think there are 2 separate questions: (1) Are you blameworthy/praiseworthy for A-ing? (2) Is A-ing the best (right) action?

Consequentialists can say that blameworthiness isn’t just a function of the rightness of your act, but in fact a whole lot of other stuff (which consequences you *knew* about, your motivations, psychological obstacles/ease, etc.). In fact, how praiseworthy you are might have *nothing* to do with the external world (how good the act actually is) and everything to do with internal facts about you and what you know. E.g., I think a sincere but confused person who tries to make the world better is more praiseworthy than a lazy person, even if the former ends up doing more harm (and thus less optimal acts) than the latter.

I see blameworthiness as an internalist account of virtue/vice. Not sure why we want to reduce it to reasons, which strike me as (only) the foundation of optimal actions.

Bentham's Bulldog:

I was not intending to reduce blameworthiness to reasons. But blameworthiness also seems like an obviously degreed property. When we do knowingly terrible things, we're blameworthy.

Ibrahim Dagher:

Yah but the internalist account accommodates its degreed nature very well
