Debate with Truth Teller Part 2: Reply to Truth-Teller
“Despite what J.L. Mackie would have you believe, calling something your opponent believes in ‘weird’ is not, by itself, an argument.”
Here, I’ll reply to Truth-teller’s opening statement in depth. This is part two of our debate—see my opening statement here.
1.1 Kantian Constructivism
Truth-teller is sympathetic to Kantian constructivism. He defines it in his article—you can read it if you’re not sure what it is. The main gist is twofold. First, there is the broad constructivist thesis: when you value something, there are certain entailments, and those make it so that you are bound, in some sense, by morality. Second, the Kantian will say that it is constitutive of agency that our ends are worth-bestowing in virtue of our rational nature, thus making rational nature the source of all normativity. We will get into the motivation for this next.
There is a lot of literature on Kantian constructivism. One worry for it is that it requires denying moral realism, which I take to be a mark against it. More on that later. But another problem is David Enoch’s shmagency objection. The basic idea is this: the Kantian constructivist says that you can, in theory, fail to respect other people as ends, but in doing so you revoke your status as an agent—to be a true agent, you must not violate the rational will. But then the question arises: why should I care about being an agent? Call a being that’s sort of like an agent but doesn’t respect others’ ends a shmagent. Why be an agent rather than a shmagent?
1.2 The Normative Question
Truth-teller argues that grounding morality in one’s nature answers the question of why one should be moral. He proposes:
there seem to be 3 conditions a satisfying answer to the Normative question must meet.
1. It must succeed in addressing an agent in the first-person position who demands justification for the claims morality makes on them.
2. When the agent comes to know what justifies their action as required, they come to the belief that their action is justified and makes sense.
3. It must appeal, to a deep sense of who we are, our identity.
I don’t agree with these criteria. There is an answer to 1 on moral realism—the reason to be moral is that morality is about what you have most reason to do, and by definition, you should do what you have most reason to do. I don’t think 2 is quite right—Jeffrey Dahmer should be moral even if he would never come to the belief that his action is justified and makes sense. Maybe the condition only requires that it be possible for the agent to know that the action is justified, but then we can just appeal to reasons to solve this too. Condition 3 seems unmotivated and wrong—if I say “you shouldn’t torture puppies,” that doesn’t need to appeal to “a deep sense of who we are.” It just needs to appeal to the fact that one has most reason not to torture puppies. In defending condition 3, Truth-teller notes
…the third condition is perhaps not a necessary condition for a successful answer, but is plausible in light of the fact that there are plausibly cases where morality demands us to sacrifice our life. What can be worse than death, other then something equivalent to it, such as losing our own sense of who we are? It seems to be our practical conception of ourselves that matters most, that gives rise to unconditional obligations.
This assumes that the reason to be moral is about what happens to you. I think that dying for morality is probably worse for you, but it’s better for others—and several others dying is worse than your own death.
The main competitor to constructivism in answering the normative question is Moral Realism, which Matthew himself is committed to…. The realist believes, rather than normativity being imposed on us by an authority, or contract, that normativity, obligation etc. is an irreducible notion. Some actions just intrinsically instantiate the property of being right, and there is no further explanation or reason to appeal to. The problem here is simple, this doesn't answer the normative question, in fact, it doesn't meet any of the conditions. What the realist has done, is merely relocate the problem. Since there cannot be obligations unless there are actions which we necessarily have reason to do, the realist simply put the necessity where he desired to find it - those moral duties which we already thought were obligatory. But the normative question is asking if there really is anything we necessarily have reason to do, and if so, which things and why them?
I explained above how moral realism can, relatively easily, answer the three conditions. As for why all and only the things that actually matter matter, the answer would be grounded in facts about those things. There isn’t some extra, further non-natural add-on—instead, the non-natural facts just serve to pick out which of the natural facts matter.
This seems like a perfectly good answer. If we ask why pain is bad, what answer could be more satisfying than “because of what pain is like.” Pain is bad—this is a fact about it—and this gives us all a reason to want to avoid it.
But I think that there are issues for the Kantian constructivist account of normativity. It seems unable to ground non-moral axiology. It seems that tornadoes are bad and puppies are good. But this isn’t grounded in facts about agents.
By contrast, we have the constructivist answer to the normative question, by taking the view that morality is grounded in human nature, or more precisely, our identity as rational agents.
But this doesn’t seem like a good answer. It seems like the reason that pain is bad isn’t just that you’d care about it if you thought hard. It seems that the fact of the badness of pain grounds the fact that you wouldn’t like it. Otherwise, our idealized preferences would be arbitrary—and one could have any crazy idealized desire, to starve to death for no reason, for example.
Another way of seeing that a Kantian constructivist view like mine is in a better position to answer the normative question would be a consideration regarding the regress of one's values. Suppose you wanted to eat ice-cream. The fact that you desire to eat ice-cream isn't a reason (by this I mean a reason sufficient to justify action) to eat the ice-cream. You can ask the normative question "why ought I act according to this desire?". You can weigh it against other desires, such as your desire to lose weight and ask "which of these desires is a better reason to act?".
But here we can just replace desire with reason and get the same result. You have some reason to eat it based on its pleasant taste and some reason not to do so.
1.3 The Kantian Argument
Truth-teller presents the following argument. His premises are in blockquotes; my explanations of where I disagree are not.
(1) I value the ends I rationally set myself, and take myself to have reason to pursue them.
(2) But I recognize that their value is only conditional: if I did not set them as my ends, I would have no reason to pursue them.
I disagree with this because I accept moral realism. If our reasons to pursue our ends were just based on our desires, then none of the following obviously irrational desires would be irrational.
1. A person doesn’t care about suffering if it comes from their pancreas. Thus, they’re in horrific misery, but because it comes from their pancreas they do nothing to prevent it, instead preventing a minuscule amount of non-pancreas agony.
2. Picking Grass: Suppose a person hates picking grass — they derive no enjoyment from it and it causes them a good deal of suffering. There is no upside to picking grass; they don’t find it meaningful or conducive to virtue. This person simply has a desire to pick grass. Suppose on top of this that they are terribly allergic to grass — picking it causes them to develop painful ulcers that itch and hurt. However, despite this, and despite never enjoying it, they spend hours a day picking grass.
Is the miserable grass picker really making no error? Could there be a conclusion more obvious than that the person who picks grass all day is acting the fool — that their life is really worse than a life brimming with meaning, happiness, and love?
3. Left Side Indifference: A person is indifferent to suffering that’s on the left side of their body. They still feel suffering on the left side just as vividly and intensely as they would on the right side. Indeed, we can even imagine that they feel it a hundred times more vividly and intensely — it wouldn’t matter. They simply do not care about the left-side suffering.
It induces them to cry out in pain; it is agony, after all. But much like agony that one endures for a greater purpose — the agony one endures on a run, say — they do not think it is actually bad. Thus, this person has a blazing iron burn the left side of their body from head to toe, inflicting profound agony. They cry out in pain as it happens. On the anti-realist account, they’re acting totally rationally. Yet that’s clearly crazy!
4. Four-Year-Old Children: Suppose that — and this is not an implausible assumption — there’s a four-year-old child who doesn’t want to go into a doctor’s office. After all, they really don’t like shots. This child is informed of the relevant facts — if they don’t go into the doctor’s office, they will die a horribly painful death from cancer. You clearly explain this to them so that they’re aware of all the relevant facts. However, the four-year-old still digs in their heels (I hear they tend to do that) and refuses categorically to go into the doctor’s office.
It’s incredibly obvious that the four-year-old is being irrational. Yet they’ve been informed of the relevant facts and are acting in accordance with their desires. So on anti-realism, they’re being totally rational.
5. Cutting: Consider a person who is depressed and cuts themself. When they do it, they desire to cut themself. It’s not implausible that being informed of all the relevant facts wouldn’t make that desire go away. In that case, it still seems they’re being irrational.
6. Consistent Anorexia: A person desires to be thin even if it brings about their starvation. This brings them no joy. They starve themself to death. It really seems that they’re being irrational.
7. A person has consensual homosexual sex. They then become part of a religious cult. This religious cult makes no factual mistakes—its members don’t even believe in god. However, they think that homosexual sex is horrifically immoral and that those who have it deserve to suffer, just as a basic moral principle. On the anti-realist account, not only is this person not mistaken, they would be fully rational to endure infinite suffering because they think they deserve it.
8. A person wants to commit suicide and knows all the relevant facts. Their future will be very positive in terms of expected well-being. On anti-realism, it would be rational for them to commit suicide.
9. A person is currently enduring more suffering than anyone ever has in all of human history. This person doesn’t enjoy suffering—they experience it the same way the rest of us do—but they have a higher-order indifference to it. While they hate their experience and cry out in agony, they don’t actually want their agony to end; they don’t care on a higher level. On this account, they have no reason to end their agony. But that’s clearly implausible.
Later in the article, Truth-teller objects to the idea that these are irrational desires—I’ll reply here.
I'll make a couple points in response. Firstly, I simply do not share the intuition that these cases are irrational. It is certainly true that they all involve very bizarre desires for a rational agent to possess, but this does not entail that they are making a mistake.
Most people think that if you set yourself on fire for no reason beyond the fact that you want to, that is irrational.
Secondly, it is precisely because these scenario's are so strange that we should not trust our intuition that they are irrational.
There are lots of strange desires. No one has the intuition that two aliens engaging in a weird pleasurable activity, where they put their tentacles in each other’s ears, are doing anything wrong. Our intuition about the cases above isn’t just that they’re weird, but that they’re irrational.
As a general point, it's not clear to me how hendonism/utilitarianism actually gets a boost in explanatory power here relative to my view. It only explains these cases in the sense that, given the truth of hedonism, they would be irrational. But it's not actually explaining what makes them irrational, if the answer is just "because well-being is necessarily worth-pursuing" that seems to me to just reassert the content of hedonism.
I’m not sure what’s supposed to be missing. Hedonism—which I don’t have to defend for the purposes of this article, just general utilitarianism—says that pleasure is good, which makes it rational to pursue. What’s unexplained? There’s no deeper account of why pleasure is good, just as all fundamental things bottom out at some point.
(3) So I must see myself as having a worth-bestowing status.
The first premise talked about what you do, in fact, value. But none of that means you make things valuable; it just means you care about things. Your desires don’t make things really worth caring about. Thus, this premise is totally unmotivated and false on most views.
(4) So I must see myself as having an unconditional value—as being an end in myself and the condition of the value of my chosen ends—in virtue of my capacity to bestow worth on my ends by rationally choosing them.
No, this just means you can value things. Even if you can value things and somehow make them valuable with your will, this wouldn’t mean you are an end in and of yourself.
(5) I must similarly accord any other rational being the same unconditional value I accord myself.
The previous objections explain why I don’t accept this.
(6) So I should act in a manner that respects this unconditional value: I should use humanity (that is, rational nature), whether in my own person or in the person of any other, always at the same time as an end, never merely as means.
But why does this respect rational natures? It seems just as plausible that making the lives of rational beings go well is a better way of respecting them.
1.4 Kantian Constructivism is theoretically virtuous
1. Through the Kantian argument and the categorical imperative test, we have an a priori way of testing the correctness or incorrectness of some maxim
I’m not sure what the way is supposed to be. Just asking whether something respects the ends of rational creatures seems ill-defined. Indeed, I don’t think there’s a fact of the matter about whether you use someone as a mere means. Any time you care about a person’s welfare at all, you’re not treating them as a mere means—but this means that consequentialism, which treats everyone’s welfare as equally significant, is on equally firm ground.
One other worry is that this seems unable to ground morality. After all, it can’t explain axiology, and it also can’t explain the wrongness of harming the severely mentally disabled, who are not rational beings. Finally, respecting someone as a rational end doesn’t seem to rule out using them as a mere means—you respect your future self a year from now as a rational end, but you could rationally use that self as a mere means to prevent greater harm from befalling yourself two years from now.
2. Kantian constructivism is reason internalist, connecting moral reasons to our motivational set, thus explaining why they are something we deeply care about, whereas it is a bit more mysterious why we would or should care about external reason-giving properties.
But reasons internalism is false. The reason ridiculous things don’t matter is that there’s nothing about them that’s worth pursuing, not that you personally don’t care much about them.
3. There are also potential further problems with reason externalism. One immediate concern might be about the intelligibility of stance-independent external reasons. Lance Bush has made this point in a number of places, reasons seem to be relational properties, conceptually tied to goals, desires or other such internal motivations, as well as consistency relations therein.
Lance Bush seems to strangely lack the relevant concepts—he, of course, disputes this. There’s nothing incoherent about reasons. They count in favor of what they’re reasons for. It’s unclear what about them is supposed to be incoherent. When I say “you shouldn’t torture babies for fun,” or “you have good reason not to set yourself on fire even if you want to” that seems coherent, even though it’s not appealing to any person’s desires. Thus, I just reject that reasons are conceptually tied to goals.
For the record, I think a lot of this is irrelevant to the debate. If, as rational people reflecting, we’d end up concluding that rights don’t matter, then my case succeeds even if we’re constructivists.
A similar point would be, following Jonas Olson (Olson 2014), external reasons are very queer to the point that we are better off rejecting them. To briefly present the case, the realist posits that there is a counts-in-favor of relation, that is a relation which gives us reason to perform one action rather than another, I'll call it the C-relation (X counts in favor of Y). Realism is committed to the C-relation obtaining independent of the agent's psychology. The problem is, it's difficult to understand what the truth-maker for the C-relation obtaining would be, if it's not the fact that, given X, the agent has some motivation to Y.
“Despite what J.L. Mackie would have you believe, calling something your opponent believes in ‘weird’ is not, by itself, an argument.” The counts-in-favor-of relation doesn’t seem strange at all—there are simply things that give you reasons. The previous examples show the intuitiveness of this notion.
4. Even if you, for whatever reason, find reasons externalism plausible, there are plenty of realist accounts of Kantian ethics to accommodate such a view, most notably in the works of Allen Wood, Paul Guyer, and Peter Railton.
Railton is a consequentialist—and, unrelatedly, one of my favorite professors. Maybe Kantian ethics is compatible with realism, but this is suspect for the reasons I lay out in the article.
5. Kantian constructivism is less ontologically profligate. It does not involve positing additional ontological commitments such as non-natural, irreducibly normative facts.
I think moral naturalism is more plausible than Kantian constructivism, but I agree that it’s more parsimonious than non-naturalism. However, being an anti-realist theory, it runs into various worries—see here for more.
6. To expand on (5), positing these non-natural irreducibly normative properties, or even natural normative properties does not seem to yield sufficient explanatory benefits so as to off-set it's lack of ontological parsimony.
Positing such properties is necessary to make sense of various moral truths that I highlight in the above article. Additionally, it explains various moral judgments and patterns of moral judgment. I’ll just quote one of the things I said it could explain.
The Discovery Argument
One of the arguments made for mathematical platonism is the argument from mathematical discovery. The basic claim is as follows: we cannot make discoveries in purely fictional domains. If mathematics were invented rather than discovered, how in the world would we make mathematical discoveries? How would we learn new things about mathematics — things that we didn’t already know?
Well, when it comes to normative ethics, the same broad principle is true. If morality really were something that we made up rather than discovered, then it would be very unlikely that we’d be able to reach reflective equilibrium with our beliefs — wrap them up into some neat little web.
But as I’ve argued at great length, we can reach reflective equilibrium with our moral beliefs — they do converge. We can make significant moral discovery. The repugnant conclusion is a prime example of a significant moral discovery that we have made.
Thus, there are two facts about moral discovery that favor moral realism.
First, the fact that we can make significant numbers of non-trivial moral discoveries in the first place favors it — for it’s much more strongly predicted on the realist hypothesis than the anti-realist hypothesis.
Second, the fact that there’s a clear pattern to the moral convergence. Again, this is a hugely controversial thesis — and if you don’t think the arguments I’ve made in my 36-part series are at least mostly right, you won’t find this persuasive. However, if it turns out that every time we carefully reflect on a case it ends up being consistent with some simple pattern of decision-making, that really favors moral realism.
Consider every other domain in which the following features are true.
1. There is divergence prior to careful reflection.
2. There are persuasive arguments that would lead to convergence after adequate ideal reflection.
3. Many people think it’s a realist domain.
All other cases which have those features end up being realist. This thus provides a potent inductive case that the same is true of moral realism.
Next, Truth-teller says
7. Kantian ethics has good explanatory power for accounting for our intuitions about human rights, human dignity, what makes us valuable e.g our rational natures, and accurately diagnoses what makes promise-breaking, rape, callousness, and racial and sexual discrimination wrong.
I think consequentialism gives good accounts of that, as we see when we get to the specific objections. Additionally, no explanation was given at all for why Kantian deontology explains any of that.
8. Finally, Kantian constructivism is better equipped to address evolutionary debunking concerns.
This is a whole can of worms, but I addressed it in my moral realism article.
2.1 The Epistemic Objection
Truth-teller says this.
For this sub-section, I will be taking my cues from the excellent paper 'Consequentialism and Cluelessness' (Lenman 2000). The basic idea being that Consequentialism implies we are hopeless judges of right and wrong. If consequentialism is right, then the right and wrong-making features of actions is the total goodness of their consequences. Yet, it is not possible to really know what the future holds, and in particular the many unforeseen outcomes which will determine whether the given act was right or wrong to perform. You're betting on total outcomes, over periods of time you couldn't possibly know enough about to make informed decisions. Thus, making consequentialism unusable as a practical guide for action, and ultimately lead to moral skepticism.
For one, as I showed in my opening statement, unpredictability results in the conclusion that deontologists shouldn’t act at all. Second, I think the expected value response to this works—it’s true that it’s hard to know the exact consequences, but we can make estimates. If you save someone’s life, maybe they’ll give birth to Hitler, but they also might give birth to someone who stops Hitler—these probabilities fizzle out.
Truth-teller replies to this.
First off, this response doesn't seem to take seriously enough how likely it is for a given action to have disastrous consequences
But it’s also likely to prevent very bad things. All we can do is rely on our best judgment—many of these huge ripple effects fizzle out.
Second, it doesn't take nearly seriously enough the extreme limitations of our knowledge when it comes to ultimate consequences and their moral value.
But difficulty doesn’t imply impossibility. Imagine a game far more complicated than chess, where it was hard to know which moves were best. Even though it’s hard to know, it’s not impossible, so you could still make the moves with the highest expected value. Morality is the same.
Truth-teller then argues that we have no a priori reason to expect these effects to cancel out. This is true, but we can just evaluate the expected consequences, taking uncertainty into account. No part of this reply requires accepting the strong principle of indifference.
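To see how huge but unlikely ripple effects can cancel in expectation while the foreseeable benefit remains, here is a toy calculation (the probabilities and utilities are invented purely for illustration):

```python
# Toy illustration: indirect long-run effects that are symmetric in
# expectation cancel out, leaving the foreseeable benefit to dominate.

def expected_value(outcomes):
    """Sum of probability-weighted utilities over (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Saving a life: +1 foreseeable benefit, plus tiny symmetric chances of
# enormous indirect good or harm (the "gives birth to Hitler" worry).
save = expected_value([
    (0.98, 1.0),        # the ordinary, foreseeable outcome
    (0.01, 1.0 + 1e6),  # a descendant prevents a catastrophe
    (0.01, 1.0 - 1e6),  # a descendant causes a catastrophe
])

do_nothing = 0.0

print(save > do_nothing)  # prints True: the huge tails cancel, the +1 remains
```

The symmetric million-utility tails wash out, so the comparison is settled by the part we can actually foresee.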
Truth-teller’s argument is more radical than he realizes. It results in the conclusion that, because we can’t calculate consequences, consequences shouldn’t factor into our considerations at all. But this is absurd—if you could cure depression, for example, that would be good because it would have good consequences. Any sensible view will say consequences matter at least somewhat.
Truth-teller replies to this.
What is not shared however, is that all consequences, including those which are long-term, indirect and unintended, even in part, determine the right or wrongness of actions. But it is the latter claim that is needed in order to mount the tu quoque objection.
But this makes the view absurd. Suppose you know that your action of going outside today will, in an indirect way, cause the extinction of all life on earth five years from now. On accounts that reject the significance of such consequences, this is, quite literally, no reason not to go outside.
2.2 Demandingness
Truth-teller claims that consequentialism demands so much that it runs afoul of our ordinary intuitions.
First, utilitarianism is intended as a theory of right action, not as a theory of moral character. Virtually no humans always do the utility-maximizing thing; it would require too great a psychological cost. Thus, it makes sense to have the standard for being a good person fall well short of perfection. However, it is far less counterintuitive to suppose that it would be good to sacrifice oneself to save two others than to suppose that one is a bad person unless they sacrifice themselves to save two others. In fact, it seems that any plausible moral principle would say that it would be praiseworthy to sacrifice oneself to save two others. If a person sacrificed their life to protect the leg of another person, that act would be bad, even if noble, because they sacrificed a greater good for a lesser good. However, it’s intuitive that the act of sacrificing oneself to save two others is a good act.
The most effective charities can save a life for only a few thousand dollars. If we find it noble to sacrifice one's life to save two others, we should surely find it noble to sacrifice a few thousand dollars to save another. The fact that there are many others who can be saved, and that utilitarianism prescribes that it’s good to donate most of one’s money doesn’t count against the basic calculus that the life of a person is worth more than a few thousand dollars.
Second, while it may seem counterintuitive that one should donate most of their money to help others, this revulsion goes away when we consider it from the perspective of the victims. From the perspective of a person dying of malaria, it would seem absurd that a well-off westerner shouldn’t give up a few thousand dollars to prevent their literal death. It is only because we don’t see the beneficiaries that the demand seems excessive. It seems incredibly demanding to require a child to die of malaria so that another person doesn’t have to donate.
Sobel (2007) rightly points out that allegedly non-demanding moralities do demand a great deal of people. They merely demand a lot of the people who would have been helped by the allegedly demanding action. If morality doesn’t demand that a person give up their kidney to save the life of another, then it demands that the other person die so the first doesn’t have to give up a kidney. Sobel argues that there isn’t a satisfactory distinction between the demands on the victim of ill fortune and the demands on the well-off people of whom consequentialism asks a great deal.
If a privileged, wealthy aristocracy objected to a moral theory on the grounds that it requested they donate a small share of their luxury to prevent many children from dying, we wouldn’t take that to be a very good objection to the theory. Yet the objection to utilitarianism is almost exactly the same, minus the wealthy aristocracy part. Why in the world would we expect the correct moral theory to demand we give up so little, when giving up a vacation could prevent a child from dying? Perhaps if we consulted those whose deaths were averted as a result of the forgone vacation or nicer car, utilitarianism would no longer seem so demanding.
Third, we have no a priori reason to expect ethics not to be demanding. The demandingness intuition seems to dissolve when we realize our tremendous opportunity to do good. The demandingness of ethics should scale with our ability to improve the world. Ethics should demand a lot from Superman, for example, because he has a tremendous ability to do good.
Fourth, the drowning child analogy from Singer (1972) can be employed against the demandingness objection. If we came across a drowning child while wearing a two-thousand-dollar suit, it wouldn’t be too demanding to suggest we ruin our suit to save the child. Singer argues that failing to donate to prevent a child from dying is analogous.
One could object that the child being far away matters. However, distance is not morally relevant. If one could either save five people 100 miles away, or ten 100,000 miles away, they should surely save the ten. When a child is abducted and taken away, the moral badness of the situation doesn’t scale with how far away they get.
A variety of other objections can be raised to the drowning child analogy, many of which were addressed by Singer.
Fifth, demandingness is required to obey cross-world Pareto optimality. Consider two possible worlds: world one has immense opportunity to help people; world two has very little opportunity to help people, such that in world two utilitarianism demands virtually nothing. If ordinary morality is similarly demanding across both worlds, then the demanding morality would be, across worlds, better both for you and for others.
Utilitarianism is not demanding because of some inherent reason to be a saint. Rather, utilitarianism is demanding at this place, time, and social location because we have immense opportunities to make the world a better place. When the evidence changes, so should the demandingness of our morality, particularly if we want to obey cross-world Pareto optimality.
Sixth, Kagan (1989) provides the most thorough treatment of the subject to date and argues persuasively that there is no philosophically compelling defense of morality’s not being demanding. Similar accounts can be found in Pogge (2005), Chappell (2009), and many others.
Kagan rightly notes that ordinary morality is very demanding in its prohibitions, claiming that we are morally required not to kill other people, even for great personal gain. However, Kagan argues that the distinctions between doing and allowing, and between intending and foreseeing, cannot be drawn successfully, meaning that there’s no coherent account of why morality is demanding about what we can’t do but not about what we must do.
Seventh, a non-demanding morality can be collectively infinitely undesirable. Currently, for affluent people, increasing the welfare of a person on the other side of the world by N costs far less than N/2. But we can stipulate that it’s exactly N/2, and the implication still goes through. Suppose two people can each endure a cost of N/2 to benefit the other, far-away person by N. If everyone does this, everyone is better off. We can stipulate that this process is iterated enough times that non-demanding moralities leave everyone infinitely worse off. If you each have infinite opportunities to make the other person 1 utility better off at a cost of .5 utility to yourself, taking those options leaves you both infinitely better off; whereas if we stipulate that each time you instead take a selfish option worth .5 utility to you, it costs the other person .6 utility, then declining forever leaves you both infinitely worse off.
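The iterated arithmetic here can be made concrete with a minimal sketch. I am reading the stipulated numbers as: cooperating costs you .5 and benefits the other person by 1, while the selfish alternative gains you .5 but costs the other person .6 (the function names are mine):

```python
# Sketch of the iterated two-person argument, under the stipulated payoffs:
#   cooperate: pay 0.5 utility so the other person gains 1 utility
#   defect:    take a selfish option worth 0.5 to you, costing the other 0.6

def net_per_round(both_cooperate: bool) -> float:
    """Net utility change for each (symmetric) person in one round."""
    if both_cooperate:
        return -0.5 + 1.0   # pay 0.5, receive 1 from the other's sacrifice
    else:
        return 0.5 - 0.6    # gain 0.5 selfishly, lose 0.6 to the other's selfishness

rounds = 1000
demanding = rounds * net_per_round(True)       # +0.5 per round, unbounded as rounds grow
non_demanding = rounds * net_per_round(False)  # -0.1 per round, unboundedly negative
```

Each round of mutual cooperation nets each person +0.5, while each round of mutual selfishness nets each person −0.1; as the number of rounds grows without bound, the demanding morality leaves both parties infinitely better off and the non-demanding one infinitely worse off.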
Eighth, our intuitions about such cases are debunkable. Braddock (2013) argues that our beliefs about demandingness were primarily formed by unreliable social pressures. Thus, we think morality can’t be overly demanding because of our society’s norms, rather than for truth-tracking reasons. Additionally, Ballantyne and Thurow (2013) argue that partiality, bias, and emotion all undermine the reliability of our intuitions. We have a strong partial and biased reason to oppose demanding morality, making this a prime target for debunking; we similarly have a strong emotional opposition to demandingness.
Further evidence for Braddock's social-pressure thesis comes from the fact that our intuitions about demandingness are hard to explain except in light of social pressures. Several strange features of our obligations are best explained by this theory.
It’s generally recognized that people have an obligation to pay taxes, despite that producing far less well-being than saving the lives of people in other countries would. There are obvious social pressures that encourage paying taxes.
As (Kagan, 1989) points out, morality does often demand that we save others, such as our own children or children we find drowning in a shallow pond. This is because social pressures result in our caring about people in our own society, who are right in front of us, rather than far-away people, and especially about our own children.
Braddock notes (p.175-176) “These processes include but are not limited to the internalization of social norms through familiar socialization practices, sanction practices, conformist pressures, modeling processes, and so on. What we think is too demanding is largely influenced by what people around us think is too demanding, much like, as a general matter, what we are likely to believe and do is influenced by what people around us believe and do. And even if those around us have not expressed these intuitions or ever explicitly entertained them before their minds, nonetheless from our earliest days, the content of our demandingness intuitions is plausibly influenced by which norms of beneficence people adopt and which attitudes they express about sharing and giving.”
Ninth, as (Braddock, 2013) notes, this problem applies to nearly all plausible theories, not merely consequentialism.
(Ashford, 2003) points out that the demandingness problem plausibly applies to Scanlon’s contractualism, meaning that it is a problem for other moral views, even ones designed specifically to avoid the demandingness of utilitarianism.
Tenth, scalar consequentialism, which I’m inclined to accept, says that the rightness and goodness of actions come in degrees—there isn’t just one action you’re obligated to take; instead, actions are ranked from most reason to least. Scalar utilitarianism gives a natural and intuitive way to defuse this. Sure, maybe doing the most right thing is incredibly demanding — but it’s pretty obvious that giving 90% of your money to charity is more right than only giving 80%. Thus, once rightness admits of degrees, demandingness worries dissipate in an instant.
And we should expect doing the best thing to be demanding. Doing the best possible thing, every moment of every day, is pretty demanding—the best thing you can do should be. Scalar utilitarianism accounts for this fact, perfectly accommodating our demandingness intuitions.
There’s a puzzling asymmetry between the demandingness objection when applied to utilitarianism and Christianity. Christianity does seem to be a prime example of a demanding theory. It does, after all, imply that we’re all sinners. However, the demandingness objection is never, to my knowledge, raised against Christianity. It’s not clear why it’s thought to be uniquely a problem for utilitarianism given this fact.
Truth Teller says
What about the drowning child case? In that case, you are directly observing a rational agent, (or potential rational agent) drown. This difference is indeed relevant on Kantian ethics,
But this is clearly irrelevant. Suppose you could save a far-away child by going into the pond and pressing a button. You still ought to do it. Whether you can see a person makes no difference to your reasons to save them.
In not donating to charity, it does not follow that you are intending to permit children to squalor, starve, and languish. In the drowning case, it's plausible that, by failing to save them, you must judge them as not worthy of saving, thus not respecting their autonomy as an end in itself which is impermissible.
It’s totally unclear why this is the case! In the drowning child case, maybe you deem the child worthy of saving—just like a person who doesn’t donate to a children’s hospital deems the children worthy of saving—but you just prefer not to save them and spend your money on other things.
2.3 The Unintuitiveness Objection
This debate is about consequentialism vs deontology, not about theories of well-being. Thus, many of the unintuitive implications of consequentialism given by Truth teller do not apply, because they’re directed at hedonism. He gives a series of cases involving rape in which the perpetrator’s pleasure allegedly outweighs the pain of the victim—a non-hedonist, or a desert-adjusted hedonist, can just say that only some pleasures count, and ones from rape don’t. He gives the example of gang rape, but the badness of gang rape scales with the harm caused; and again, this is an objection to hedonism, not to consequentialism. As for his first example about coma rape, see my article here—I won’t delve into it much, because it’s not very relevant to the topic.
Here's another. You aren't permitted to sacrifice yourself to save a friend with poorer life conditions than you on utilitarianism.
On scalar utilitarianism, there aren’t facts about what you’re permitted to do; sacrificing yourself just wouldn’t be what you have most reason to do—it wouldn’t be the best thing you could do. But this seems plausible! It doesn’t seem like you have most reason to save your friend.
Additionally, scenarios like this, when iterated, produce unintuitive results. Suppose we say that you have equal reason to do either. Then if both you and your friend start with 1,000 units of torture, and each of you can either reduce your own torture by 1,000 or the other’s by 500, either choice would be fine. But taking the action that reduces the other’s suffering by 500 makes everyone worse off. The correct theory of what we have most reason to do shouldn’t run afoul of the Pareto principle.
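The arithmetic of the iterated case can be written out explicitly. A minimal sketch, using the numbers in the text (the function name and the symmetric setup are mine):

```python
# Both agents start with 1,000 units of torture. Each can either reduce
# their own torture by 1,000, or the other agent's torture by 500.

def remaining_torture(self_regarding: bool) -> tuple:
    """Torture left for (you, friend) if both make the same choice."""
    start = 1000
    if self_regarding:
        # Each removes 1,000 units from their own total.
        return (start - 1000, start - 1000)
    # Each removes only 500 units from the other agent's total.
    return (start - 500, start - 500)

print(remaining_torture(True))   # (0, 0): both fully relieved
print(remaining_torture(False))  # (500, 500): both still suffering
```

The other-regarding choice is Pareto-dominated: both agents end up strictly worse off than if each had simply helped themselves, which is the asymmetry the argument turns on.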
Here's another. Suppose a thief tries to steal grandma's purse but in the process of doing so, pulls her away and saves her from being hit by a car. On consequentialism, since it's only the consequences of an action that matter, the action of the thief was good, intuitively, it was not a good action since the thief's intentions were wrong.
This stops being unintuitive when we distinguish objective and subjective wrongness. Objective wrongness concerns what one would do if one were omniscient—what a perfectly informed third party would prefer, what one has most actual reason to do. The utilitarian judgment here about objective wrongness is obvious. But the utilitarian can agree that this action was wrong based on what the person knew at the time—it just coincidentally turned out for the best.
Here's another; You have a rich friend who is on his death bed and his dying wish is for you to make sure his fortune goes to his son. It is overall more optimific to lie and say his wish was for his fortune to go to charity. On utilitarianism, you therefore should lie. This is unintuitive.
I’ve written an entire article about this particular scenario. It’s obviously much more important that you save many lives than keep a promise, on any plausible view.
Finally, a classic. On utilitarianism, it's not impermissible, in fact it's probably obligatory, for a doctor to kidnap someone off the street to harvest their organs and save 5 others.
But this isn’t a good thought experiment. As Savulescu says
this is a dirty example. Transplant imports many intuitions. For example, that doctors should not kill their patients, that those with organ failure are old while the healthy donor is young, that those with organ failure are somehow responsible for their illness, that this will lead to a slippery slope of more widespread killings, that this will induce widespread terror at the prospect of being chosen, etc, etc
He gives a better example that totally defuses our intuitions.
Epidemic. Imagine an uncontrollable epidemic afflicts humanity. It is highly contagious and eventually every single human will be affected. It causes people to fall unconscious. Five out of six people never recover and die within days. One in six people mounts an effective immune response. They recover over several days and lead normal lives. Doctors can test people on the second day, while still unconscious, and determine whether they have mounted an effective antibody response or whether they are destined to die. There is no treatment. Except one. Doctors can extract all the blood from the one in six people who do mount an effective antibody response on day 2, while they are still unconscious, and extract the antibodies. There will be enough antibodies to save 5 of those who don’t mount responses, though the extraction procedure will kill the donor. The 5 will go on to lead a normal life and the antibody protection will cover them for life.
In this case, extraction seems obviously better. Richard has his own version of this involving aliens, but the example above seems to capture things well.
1. Utilitarianism fails to sufficiently account for human dignity and reduces ethics to a numbers game. Either by doing arithmetic to save the most lives, or calculating "units of utility" where if some threshold is crossed, it's ok to murder someone. By most people's intuitions, mine included, this seems to miss the point of ethics.
It sounds bad when phrased that way, but any agent who follows basic axioms of rationality can be modeled as maximizing some utility function. If we realize that lots of things matter, then when making decisions, you add up the things that matter on all sides and do what produces the best outcome overall. This isn’t unintuitive at all. Indeed, it seems much stranger to think that you should sometimes do what makes things worse.
2. Utilitarianism is not egalitarian, nor does it entail justice or fairness.
But consequentialism can be—just have the social welfare function include equality, for example. Additionally, utilitarianism supports a much more egalitarian global distribution than currently exists. Truth teller raises the utility monster objection—I’ve replied here.
3. Utilitarianism fails to accurately diagnose what makes actions wrong. There seems to be something that makes breaking promises, discriminating on the basis of race and sex, violating bodily autonomy, cruelty, and lack of sensitivity to the plights of others wrong, that is not reducible to consequences.
These things do tend to be wrong, but that’s because they tend to produce bad outcomes. Breaking promises seems bad because it makes things worse—its badness doesn’t just float free. Any high-level theory is going to sound weird when applied to concrete cases—but it’s no mark against, for example, Rossian pluralism that its answer to why torture is wrong is “it violates the balance of prima facie duties.”
In the next section, Truth teller responds to various objections that are not, I believe, relevant to the dispute—none of them are arguments I’ve made for consequentialism. Let me just clear up one misrepresentation of my views—I do not think phenomenal introspection is our only way of forming reliable beliefs. I think it is a way. I was just summarizing Sinhababu’s argument.
For all these reasons, I remain a consequentialist.