Recently, I was reading a paper by one of my favorite philosophers — Alastair Norcross — who argued that, if utilitarianism is true, then no actions are good. I think that this applies more broadly — even if utilitarianism is not true, no actions are good. The concept of a good action may just be unintelligible.
What does it mean to say that an action is good? One natural thought is the following:
An action is good if and only if it makes the world better than it would have been if the action hadn't been taken.
But this raises a crucial question — what exactly is included in the second part of that sentence, namely, the way things would have been if the action hadn't been taken? Norcross says:
For the sake of simplicity, I will assume happiness and unhappiness to be the only things of intrinsic value and disvalue. Consider an agent, called Agent, whose action affects only herself and one other person, Patient. Agent is faced with a range of options that do not affect her own happiness, but that have dramatically different effects on Patient's happiness. This case seems simple enough. The good actions are those that make Patient happy, the bad are those that make him unhappy.
But this won't yet do. It seems to assume that Patient was neither happy nor unhappy to begin with. Let's modify the account slightly. The good actions are those that make Patient happier; the bad are those that make him unhappier. Happier than what? One obvious answer is happier than he was before the action. If Agent does something that increases (or augments) Patient's happiness, she has done a good thing. To generalize, we simply compare the welfare of all those affected by a particular action before and after the action. If the overall level of welfare is higher after than before, the action is good. If it is lower, the action is bad. If it is the same, the action is neutral.
But this still won't do. Consider again a restricted case involving only Agent and Patient. Call this case Doctor. Patient is terminally ill. His condition is declining, and his suffering is increasing. Agent cannot delay Patient's death. The only thing she can do is to slow the rate of increase of Patient's suffering by administering various drugs. The best available drugs completely remove the pain that Patient would have suffered as a result of his illness. However, they also produce, as a side effect, a level of suffering that is dramatically lower than he would have experienced without them but significantly higher than he is now experiencing. So the result of administering the drugs is that Patient's suffering continues to increase, but at a slower rate than he would have experienced without them. The very best thing Agent can do has the consequence that Patient's suffering increases. That is, after Agent's action, Patient is suffering N amount of suffering as a direct result of the action, and N is more than Patient was suffering before the action. Has Agent done a bad thing if she slows the rate of increase of Patient's suffering as much as she can? This hardly seems plausible. It is consistent with the schematic description of this case to imagine that Agent has done a very good thing indeed.
This has a weird implication — we can't compare the goodness of actions occurring at different times. But we can still say everything that would be worth saying if we were comparing actions. For example, we can say that, while there is no fact of the matter about whether torturing people is worse than writing this blog article, whatever the torturers would have done instead of torturing would have been a bigger improvement, relative to torturing, than whatever I would have done instead of writing this blog would have been, relative to writing it. We can also say that, if one could choose between causing me not to write this blog and causing the torturers not to torture, it would be better to cause the torturers not to torture.
Thus, I think this is just a weird linguistic result, not actually a problem for utilitarianism. It arises only because ordinary statements about good actions generally don't have an explicit, fixed counterfactual in mind. But it's not any deep metaphysical problem for a moral theory. It's sort of like the fact that you don't need anything simpliciter — you just need things for various purposes. That isn't a problem with our concept of needing things, just an interesting linguistic quirk.
But if it is a problem for a moral theory, it will plague every moral theory. The same general problem applies to every theory — what does it mean to say that an action is good? What are we comparing it to? Norcross explains in the paper why several proposed solutions to this don't work.
Thus, even if you're not a utilitarian, you'll have to think that no actions are good. This is only problematic if you're thinking in words — if you're thinking in fundamental concepts, then this isn't really a worry. Thus, the following false-sounding sentence is actually true: it isn't bad to kick people for no reason.
Edit: Richard Chappell points out that we can have a concept of good actions as just those actions that a pretty good person would take. This is true — but I think it shows that the concept of good actions is reducible and much less robust than we thought. It's the type of naturalistic concept that I've discussed here.
Good actions are ones we should think well of. It would seem crazily revisionary to deny that there are any such actions. Surely we should think well of some actions, and poorly of others.
Now, I take Norcross's arguments to show that we can't give a simple account of which actions are good by trying to pin down the baseline in non-normative terms (e.g. as fixed by the present moment, or by the counterfactual of what would've happened in the absence of the action). But IIRC, he doesn't have any argument against the possibility of a normative baseline. For example, there may be an independent fact of the matter, in any given situation, of what a *minimally decent* person would do -- such that any worse act counts as positively blameworthy. If that's so, then we can say that any *better* act is positively good.
So we can give an account of good actions. It just requires appeal to a normative baseline, i.e. of what's minimally adequate, rather than a non-normative baseline like "what would've happened otherwise".
The only required response to someone who asserts that nothing matters is to ask why you shouldn't stab them 47 times in the chest.
If they come up with something, then they're argumentatively done for.
If they don't, then they're also done for.