Good actions are ones we should think well of. It would seem crazily revisionary to deny that there are any such actions. Surely we should think well of some actions, and poorly of others.
Now, I take Norcross's arguments to show that we can't give a simple account of which actions are good by trying to pin down the baseline in non-normative terms (e.g. as fixed by the present moment, or by the counterfactual of what would've happened in the absence of the action). But IIRC, he doesn't have any argument against the possibility of a normative baseline. For example, there may be an independent fact of the matter, in any given situation, of what a *minimally decent* person would do -- such that any worse act counts as positively blameworthy. If that's so, then we can say that any *better* act is positively good.
So we can give an account of good actions. It just requires appeal to a normative baseline, i.e. of what's minimally adequate, rather than a non-normative baseline like "what would've happened otherwise".
I emailed Norcross -- here's what he said in reply.
This is basically the account I give in my contextualist analysis of good actions, but with the implication that there’s a way to give a well-grounded, non-contextual, non-circular account of something like ‘minimal decency’. However, I don’t think there’s any way to give a non-contextualist and non-circular account of what would count as ‘minimally adequate’. We can’t define it in terms of what a decent person would or should do, because there’s no account of ‘should’ that avoids my arguments against that notion, and no way to define what ‘decency’ consists in. There are simply better and worse states of character. In any given conversational context, it’s probably reasonably clear what the speakers expect of a minimally decent person, but different contexts set different expectations. As for the claim that ‘surely we should think well of some actions, and poorly of others’, ‘surely’ isn’t an argument. It's certainly true that we do think well of some actions and poorly of others, and that which ones we think well and poorly of is largely a matter of our expectations and normative assumptions. It’s also true that different practices of drawing the line (usually a vague one) between the actions we think well of and the ones we think poorly of, or rather drawing the two lines that have the different categories on one side of them (because we also think neither well nor poorly of a lot of actions), can themselves be assessed as better or worse than each other, or rather as better or worse than alternate practices available at the time.
Richard’s comment is really just another version of ‘the radical scalar approach that Norcross suggests outrages my intuitions too much for me to stomach’. I understand and sympathize, but I also suggest reminding ourselves of where most of our moral intuitions come from (the powerful elements that control society, such as religion, corporations, etc).
He also clarified that this represented his own views, not the views of Boulder.
We have two broad options here.
(1) Follow Norcross, or
(2) Take some form of moral 'adequacy' to be a normative primitive (my preferred version: primitive fittingness norms that draw the line between 'adequate' and 'inadequate' actions at the boundary between acts that make pro-attitudes fitting in response and acts that make blame or other con-attitudes fitting in response).
The force of my "surely" is just to point out that the latter option is less costly than the former. (Norcross has no argument against fundamental fittingness norms; he just doesn't himself believe in them, and so asserts that there are no such norms. But he doesn't give the rest of us any reason to follow him in this assumption.)
Cf. my review of Norcross's book in *Ethics*: https://philpapers.org/rec/CHANAM-4
What follows is an extended quote from that review:
...whether an act is good, bad, harmful, or the like, is not something that follows simply from a neutral accounting of how the outcome compares to what would have happened otherwise. This is a striking and important result. Which alternative we take to constitute the relevant baseline can differ from case to case.
Norcross takes this to motivate a contextualist view on which conversational context selects a salient alternative as the one deemed “relevant” in that context. But it’s an interesting question whether a more principled determination might yet be possible. For example, we might take the relevant alternative to be determined by what could be reasonably expected of any (minimally decent) agent. In Button Pusher, we expect any minimally decent agent to push the ‘0’ button to save all lives costlessly, so anything worse is outright ‘bad’. In cases where greater self-sacrifice is involved, any aid at all might strike us as ‘good’ in virtue of being more than is minimally expected.
This alternative account depends upon there being an objective threshold of adequate moral concern: a least amount of altruistic motivation an agent must exhibit in order to qualify as minimally decent. Norcross does not explicitly discuss such an idea, but it seems clear that he would be skeptical. It certainly goes beyond the conceptual resources that he allows himself. But it’s not clear why the rest of us must feel so constrained.
Our contrasting expectations in the button pusher vs fire rescue cases seem to reflect genuine normative differences between the cases, not just the arbitrary expectations embedded in conversational contexts. Against a background where Agent is known to be villainous (such that everyone expected him to watch all ten die), we might resignedly sigh, “Well, it’s a good thing he only killed nine people this time,” as an implicit comparative claim. But I’m still inclined to insist that costlessly saving all ten seems the normatively privileged alternative for determining whether the act was absolutely good (warranting a distinctive kind of pro-attitude on our part, perhaps).
It’s worth asking what hangs on Norcross’ contextualist analyses. In contrasting his contextualism to an error theory about the associated terms, Norcross notes that on his reductivist account, “it is possible, even quite common, to express substantively true or false propositions involving” these terms (110). But why care about that? Defining ‘God’ to mean love, one could express substantively true or false propositions involving the term ‘God’, but they wouldn’t have theological significance. Matching ordinary usage in the assignment of truth values to linguistic strings adds further constraints, but still doesn’t seem all that philosophically significant. We should care less about the words, I think, and more about their inferential roles: what follows from calling something good, bad, or harmful? The answers may push us away from contextualism. If harms warrant resentment, for example, contextualism about ‘harm’ would seem to saddle us with the awkward implication that whether resentment is truly warranted could depend upon arbitrary conversational context.
The superficiality of contextualist analyses seems especially troubling when applied to free will and determinism (chapter 6). Norcross suggests that “[e]ven if strictly speaking, an agent couldn’t have done otherwise, conversational context may select certain counterpossible alternatives as the relevant ones with which to compare the action.” (134) On the other hand, we’re told that in the context of a philosophy seminar discussing determinism, “there may be no relevant alternatives to an agent’s actual behavior.” (135) But what reasons for action an agent has depends upon their option set. (Letting five die could be an excellent choice if the only alternative is killing ten. Other options could render it a terrible choice, by contrast.) So, to fix the moral reasons (avoiding relativism), we need a principled way to determine an agent’s available options. Conversational context seems ill-suited to this task...
Do you think there are precise facts about minimal adequacy? I'm inclined to think that moral adequacy is going to be a vague, higher-order naturalistic concept, sort of like the idea of a good person or bravery. I don't think that the concept of a good person carves reality at its joints, and there aren't objective facts about how good an action is -- the way there are for the goodness of states of affairs -- but we can still reasonably talk about good people. I elaborated a bit more on this here: https://benthams.substack.com/p/my-naturalism-about-some-moral-concepts
Yeah, I guess you can keep around some version of 'good'. I think this concept of good will be vague, context-dependent, and reducible to a vague natural fact, not straightforwardly derivable from utilitarianism. I assume it would be a fully informed version of 'good' -- otherwise we get the result that Hitler's mother having a child isn't bad.
I think that a decent analogy is with color. We think of colors as something in the world, but they're not really. Thus, the concept of color picks out something, but it picks out something very different from what we thought it did. The same thing is plausibly true of good action.
The only required response to someone who asserts that nothing matters is to ask why you shouldn't stab them 47 times in the chest.
If they come up with something, then they're argumentatively done for.
If they don't, then they're also done for.
But I didn't say that nothing matters.
Ok. Please explain to me why I shouldn't... I'm very curious.
Because nearly every other option is better, and you have more reason to take nearly every other option.
And thus the person you quote cannot be correct.
I sense, my friend, that you have not read the article. Norcross agrees that actions can be better than other actions, that states of affairs have amounts of goodness, and that you have more reason to take some actions than others -- he just denies that there's a fact about whether an action is good.
Ok. Let’s say, hypothetically speaking, that I… *hypothetically* had a large knife, almost as large as a short sword really. And I was in the process of stabbing a… *hypothetical* person. Repeatedly.
Would it be good for me to stop, or nah?