Mostly meta-ethics
TT has written his reply to my reply to his opening statement. See here for the debate so far. TT’s quotations of me are in italics; his own statements are not.
The first point I raised in reply to TT was that his view requires denying moral realism, which I take to be a mark against it. He thinks that’s a mark in favor of it. We do not, unfortunately, have the time to launch a side debate about moral realism. Thus, these considerations should move moral realists towards consequentialism, and anti-realists towards Kantian constructivism.
Next, I raised the shmagency objection.
But another problem is David Enoch’s shmagency objection. The basic idea is this: the Kantian Constructivist says that you can, in theory, not respect other people as ends, but you, by doing so, revoke your status as an agent—to be a true agent, you have to not violate the rational will. But then the question arises: why should I care about being an agent?
TT replies
For one, I may not be able to offer you further reasons to be an agent rather than a schmagent, but nonetheless we are rational agents with sets of values and aims. Certain evaluative truths are entailed internal to this standpoint and whether you like it or not, you can't opt out of it. Enoch's response is even if you can't opt out of the game of agency, this does not solve the problem because there's still the further question of whether we have normative reason not to break the rules constitutive of it. However, I would retort that breaking those rules is being inconsistent, given that you are a rational agent, for standard Kantian reasons I've previously argued.
This seems to totally miss the point. If you say “absent doing X, Y, and Z, you won’t be an agent,” the correct reply is “okay, I guess I’m not an agent then; instead I’m a shmagent, sort of like an agent, but who doesn’t do X, Y, and Z.” If you were constitutively an agent, then there would be literally nothing you could do to stop being an agent.
For two, the question is either asked internal to the standpoint of an agent, in which case constitutivism has a straightforward answer by appealing to agency's constitutive norm(s). Or the question is external, in which case it is unintelligible, as what it is to be a reason for action is embedded in, and only makes any sense for agents, "reason to be a schmagent" is thus, conceptually confused.
But this is false. You can have a reason to be a schmagent, if a schmagent is just exactly like an agent but lacking the various weird things that Kantian constructivists think are required for agency. I don’t think reasons have to be internal to an agent to be intelligible, and you can have a reason to be something you are not currently (e.g., I can have a reason to become a non-blogger).
Next, TT claims that I haven’t satisfactorily answered the questions posed for moral realism.
There is an answer to 1 on moral realism—the reason to be moral is that morality is about what you have most reason to do.
But this doesn't answer 1. Why think such-and-such moral fact is what the agent has most reason to do? If the agent doesn't already think that's what they have most reason to do, what reasons do you give them? If you say it is a primitive necessary truth, then you're falling into the problem I stated before, far from answering the normative question, you merely located the necessity where you wanted to find it.
Because the moral facts are by definition what one has most reason to do. If you ask why one should do what they have most reason to do—when “should” just describes what one has most reason to do—the question shows confusion.
I don’t think 2 is quite right—Jeffrey Dahmer should be moral even if he would never come to the belief that his action is justified and makes sense.
Ah, but this isn't what 2 says. What it says is that, on the condition that the agent knows what justifies their action being required, they come to form the belief that the action is justified and makes sense. The reasons given have to be motivating, by the agent's lights. If they aren't, then why should they accept them?
This is, once again, begging the question against desire independent reasons. They should accept them because they have a reason to accept them. If there are desire independent reasons, then obviously the reasons to do things won’t be justified by appealing to the actual desires of agents.
This assumes that the reason to be moral is about what happens to you. I think that dying for morality is probably worse for you, but it’s better for others. Several others dying is worse than your own death.
It is not about reducing morality to what is best for you. I'm far from an egoist. It is about what matters most, by your own lights, and what matters most seems to be your practical identity, which "is a description under which you see yourself as having value and your ends as worth-undertaking". So, circumstances where you must give up your practical identity explain why you might be obligated to sacrifice your life.
Again, the realist account of why you should sometimes give up your life is “some things are really important, more important than your life, and you should give up your life for them.”
In terms of answering the questions of why all and only the things that actually matter matter, the answer would be grounded in facts about the things. There isn’t some extra further non-natural add-on—instead, the non-natural facts just serve to pick out which of the natural facts matter.
But why accord those natural facts value, rather than some other cluster of natural facts? For any natural fact you give, it seems the normative question will come back, "Do I have a reason to act in accord with, or regard as valuable these natural facts?". Merely positing it as a brute, or necessary truth doesn't answer anyone who doesn't already accept it, it just, once again, locates the necessity where desired. Whereas, on Kantianism the source of normativity, is rational willing/autonomy/deliberation as such, which every agent has, and is just one's capacity for setting one's ends and taking them to be worthy of pursuit, and deciding which desires or external facts are better to act on. You ought to accord rational nature unconditional value, because rational nature is the source which generates all your reasons for action, the condition for you valuing anything at all, and viewing your ends as worth-bestowing. It is motivated by a transcendental argument.
This question just seems confused. A helpful analogy here would be modal facts. Contradictions are impossible—they can’t happen. If you ask “why can’t contradictions happen, rather than some other cluster of natural facts?” the answer would be “that’s just a necessary truth. It’s not puzzling at all when you grasp what a contradiction is. Likewise, it’s not puzzling at all when you really grasp what pain is and what it’s like that it is, in fact, bad.” He says that on Kantianism, rational willing generates reasons for action, but that’s just a statement of the account, rather than an explanation of why it is true.
But this doesn’t seem like a good answer. It seems like the reason that pain is bad isn’t just that you’d care about it if you thought hard. It seems that the fact of the badness of pain grounds the fact that you wouldn’t like it.
I don't think so. There is nothing about pain that has normativity built into it, at least as far as I can tell, and as has been argued. If one is fully aware of what pain is like and is just not motivated to avoid it, what is the irrationality here? There's no contradiction, no practical inconsistency, no means-end incoherence. So, what's irrational?
But it seems that even if, after suitable reflection, one wanted to be miserable, their misery would still be bad. That is an incredibly obvious ethical truth that the Kantian constructivist seems committed to denying. In reply to my claim that there are irrational desires—for example, that it’s irrational to set oneself on fire for no reason—TT says
But what most people think doesn't matter. What matters is if the agent in question is actually being irrational not whether other people think they are. The reason most people think they are irrational is because they are imposing their own perspective whereby pain is undesirable, a perspective which the agent in question does not share. I fail to see any reason they should be taken to be irrational rather than simply out of accord with the instrumental reasons ordinary persons are motivated by. Perhaps the idea is that well-being and avoiding pain is necessarily good, but, once again the agent in these scenarios doesn't accept that, so there is no inconsistency entailed from their perspective.
I agree that the relevant question is not what other people think. Other people could wrongly believe that a desire is irrational. But the fact that a belief is obvious and widely shared gives some reason to think that it is true. It doesn’t matter what the agent accepts. The entire question is whether there are things we have reason to care about independently of our desires. If you try to eat a car or set yourself on fire for no further reason beyond a brute desire, despite finding the associated experience deeply unpleasant, denying that this is irrational is wildly implausible.
Most people don’t want to spend lots of time looking at art. However, they don’t think it’s irrational to do so, just that it’s not their thing. In general, the mere fact that someone doesn’t like something does not cause them to think liking it is necessarily irrational.
TT next claims this can be evolutionarily debunked. But the evolutionary account can’t explain our particular hodgepodge of intuitions. We don’t think abstinence is irrational, despite it not being advantageous for passing on one’s genes, nor do we (generally) think homosexuality is irrational. Furthermore, if this were just a base instinct instilled in us by evolution, we wouldn’t expect it to survive careful, rational reflection. There are lots of weird desires that no one has, and yet no one thinks those desires are irrational.
I do not deny this. But my point is we shouldn't trust our intuition that they are irrational, rather than really strange to us. Because we can't actually show an incoherence from the agent's standpoint without positing further proposition's that we think are obvious from our own reflection on our phenomenology, but the agent for all we know, completely rejects from their own reflection.
But this once again assumes that what is rational is what would be approved of from an agent’s point of view. Merely repeating that desire independent reasons do not, in fact, appeal to the desires of the agent does not say anything important or interesting in reply to the obvious intuitions supporting desire independent reasons.
TT claims I’m not explaining what makes it irrational to pursue pain. What makes it irrational is that pain is bad! There is not a further account, just as there is nothing further that makes the mathematical axioms true. They just are.
TT says
I don't think your desires alone make them valuable either, your choosing them with your rational will makes them valuable, which involves deliberating between desires and picking out which one is a better reason for action. I think that does make things valuable, because I don't think there is anything further to value aside from what is entailed from the practical point of view of an agent who values things.
But this is a claim I already addressed. Here’s what I said.
But this doesn’t seem like a good answer. It seems like the reason that pain is bad isn’t just that you’d care about it if you thought hard. It seems that the fact of the badness of pain grounds the fact that you wouldn’t like it. Otherwise, our idealized preferences would be arbitrary—and one could have any crazy idealized desire, to starve to death for no reason, for example.
In his first article, TT said this
(4) So I must see myself as having an unconditional value—as being an end in myself and the condition of the value of my chosen ends—in virtue of my capacity to bestow worth on my ends by rationally choosing them.
In reply I said this
No, this just means you can value things. Even if you can value things and somehow make them valuable with your will, this wouldn’t mean you are an end in and of yourself.
In reply, TT said this
It means all value is conditional on rational will. It is the source of all value and the only thing that has value which is unconditional. Sounds like an end in and of itself to me.
But this is just begging the question. It is assuming that the fact that one values something confers value on it. But there’s no reason to think that valuing confers value. It seems that valuing something recognizes its value, rather than conferring value on it.
Just asking whether something respects the ends of rational creatures seems ill-defined. Indeed, I don’t think there’s a fact about whether you use someone as a mere means.
We use someone as a mere means if we treat them as an instrumental tool in a way that fails to respect them and their ends. If you kidnap someone for ransom, you are treating them as merely instrumental for your aims, disregarding their consent, life projects, self-determination, autonomy etc. Here are a couple of articles which explain the ideas I have in mind here.
Suppose you value someone’s well-being slightly but subordinate it to your aims. Are you treating them as a mere means? If I steal 99% of someone’s wealth, but then don’t take the last 1% because I care about them a bit, is that treating someone as a mere means? It seems utilitarianism then treats no one as a mere means, because it regards everyone’s well-being as significant. Parfit discussed this point more in On What Matters.
TT admits that his view can’t explain axiology but says his view doesn’t intend to. That’s fine, but then you need to invoke more things to explain axiology, making utilitarianism more parsimonious. TT then reiterates the point that I’ve addressed above about there being no reasons to follow what one has reason to do.
They count in favor of what they’re reasons for. It’s unclear what about them is supposed to be incoherent. When I say “you shouldn’t torture babies for fun,” or “you have good reason not to set yourself on fire even if you want to” that seems coherent, even though it’s not appealing to any person’s desires.
But what does 'counts in favor of' mean here if it's non-goal-indexed? Is the question I'd imagine Lance would have. It seems unhelpful to introduce further concepts which will be understood in an anti-realist or instrumentalist manner.
It means the same thing as gives a reason, makes the action more fitting to choose, makes it more choiceworthy, etc. If you just lack the relevant moral concepts, you’ll plausibly be confused, just as you’ll be confused about physics if you lack the concept of atoms.
I think the idea that Mackie and Olson just call moral realism weird is uncharitable. They pick out the problematic features of the kind of normativity in question, which fit poorly with our background understanding of the world, and are completely unlike anything else we know, and would require a special faculty to access that doesn't fit with our ordinary ways of knowing everything else. Further, what is a counts-in-favor-of relation? The internalist has a straightforward account. Yet, if it's not some motivational fact or anything else about an agent, or set of agent's psychology and is a feature that is 'built into' the world, what could the truth-maker for such a relation obtaining even be? In virtue of what do certain natural properties instantiate this relation and others don't? It's unclear.
See my above comments about reasons and counting in favor. When most people hear “you have a reason to stop doing this,” they understand what is being said. When people hear “the fact that going to the store makes your legs fall off really gives a reason to not go to the store,” people know what this means, and it doesn’t necessarily mean that the person will be motivated by it. TT calls it strange and different from everything else we know, but gives no reason why this is true. What about it is weird and unique? It doesn’t seem that different from mathematical, modal, and epistemic facts.
Matthew cites his article on moral realism to off-set the concerns of ontological parsimony and explanatory impotence. I'll just make two notes here. The first is, I think most of Matthew's arguments are not gonna be compelling for those who aren't realists antecedently, I've already explained why I find the 'phenomenal introspection' and irrational desires' arguments to be highly dubious. The second is, a lot of the general advantages of realism, e.g binding reasons on all rational agents, is something you get on a constructivist view like mine for free without the extra ontological items posited by realism. Moral convergence is equally expected under Kantian Constructivism as well.
I don’t think the argument from phenomenal introspection is very directly relevant to the debate, and it was only a minute part of my case for realism. Constructivism doesn’t give binding reasons to all agents if some people have a reason to set themselves on fire, in ways they find unpleasant, to achieve no further goal. TT doesn’t explain how non-realism explains convergence, which is a point I press at some length in my article.
The epistemic objection
But it’s also likely to prevent very bad things. All we can do is rely on our best judgment—many of these huge ripple effects fizzle out.
This just seems like, at best, a reason to be agnostic about whether the action is good, it doesn't justify your belief either way. But if you're agnostic about it, how do you pick one course of action rather than another when deliberating what you should do? As I argued before, it's extremely implausible, and indeed I've even directly argued that it is astronomically improbable, that the known consequences are sufficient to break the tie on expected utility.
To make the point more clear. Here's an analogy I liked from Lenman, suppose it's D-Day and you are a military leader who has to choose between 2 plans, plan A and plan B. The plan you choose will have tremendous consequences for the war, civilians, and the soldiers on the battlefield. Let's suppose you know that if you go with plan B, a dog will break her leg but if you go with plan A she won't. The unknowns of going with plan A or B are such that they otherwise cancel each other out. Does Matthew mean to seriously suggest that the knowledge of the dog breaking her leg is a sufficient reason to choose plan A? Keep in mind, if you make the wrong choice the consequences are many magnitudes greater in significance than the dog breaking her leg. Perhaps it is some reason, but it is, quite clearly, proportionally swamped by the other consequences. So, you should have basically no clue which plan to pick. This is similar to the consequentialist's epistemic position in deliberating which actions one should do, since all and only consequences are salient in determining whether an action is right or wrong.
But this isn’t a specific problem for consequentialism. It is a fact about the world. If TT agrees that the full consequences of our actions are impossible to predict, then we all agree that we’re in a situation equivalent to that of the military leader deciding between two D-Day plans. In this case, of course you should choose plan A—there’s some reason to and no reason, in expectation, not to. You have very little knowledge of which actions actually have the best outcomes, but that’s just an unfortunate fact about our world. The fact that our actions have huge unpredictable ripple effects is a fact all theories have to grapple with.
Of course there are going to be possible worlds where a moral theory doesn’t give one a super great grasp of which action is worth taking. The fact that this applies to the actual world is no mark against consequentialism. To quote utilitarianism.net
For example, suppose you must pull a magic lever either to the left or the right, and are told only that the fate of the world hangs on the lever’s resulting position. You have no way of knowing which option will save the world. But it would be strange to conclude from this that the fate of the world does not morally matter. It would seem more reasonable to conclude that you’re in a rough spot, and (in the absence of further evidence about which option is more likely to save the world) morality can offer you no useful guidance in these particular circumstances.
It later notes
The natural response to cluelessness worries is to move to expectational consequentialism: promoting expected value rather than actual value. Further, as a multi-level theory, utilitarianism allows that we may best promote expected value by relying on heuristics rather than explicit calculation of the odds of literally every possible outcome. So if saving lives in the near term generally has positive expected value, that would suffice to defang the cluelessness objection.
Next, TT says
Imagine a game way more complicated than chess, where it was hard to know which moves were best. Even if it is hard to know, it’s not impossible, so you could still make moves with the highest expected value.
My answer is: if your epistemic situation with respect to the game is analogous to ours and the long-term identity-affecting effects of our actions, then yes, you should be in doubt about what move you should do.
But in the game case, you should still obviously choose the act that is best in expectation. You shouldn’t just throw up your hands and declare that unpredictability means that you have no guidance.
Truth Teller then argues that we have no a priori reason to expect these effects to cancel out. This is true, but we can just evaluate the expected consequences, taking into account uncertainty. No part of this reply requires accepting the strong principle of indifference.
If Matthew is not using a principle of indifference he owes us an explanation for how he partitions and distributes the probability of the set of long-term astronomical identity-affecting outcomes and their expected utility, otherwise, and again, we should be clueless about what to do. He hasn't offered this.
Here, I’ll once again quote utilitarianism.net
In response, Hilary Greaves argues that some restricted principle of indifference seems clearly warranted in simple cluelessness cases, whatever problems might apply to a fully general such principle.
After all, it would seem entirely unwarranted to have asymmetric (rather than 50/50) expectations about whether saving an arbitrary person’s life now was more likely to randomly cause or to prevent genocides from occurring millennia hence. So we can reasonably ignore such random causal factors.
Next, TT says this
Truth Teller’s argument is more radical than he realizes. It results in the conclusion that we can’t calculate consequences, so consequences shouldn’t factor into our considerations at all. But this is absurd—if you could cure depression, for example, that would be good because it would have good consequences.
It doesn't entail that no consequences factor in, just unforeseeable, long-term and indirect consequences.
But long-term unforeseen consequences do seem to genuinely count against various actions. For example, even if Hitler’s grandmother had no way of knowing this, it seems having a child is something she really shouldn’t have done. Of course, she’s not blameworthy, but from the point of view of the universe, it would have been preferable for her not to do it.
Suppose you know that your action of going outside today will cause the extinction of all life on earth five years from now in an indirect way. On accounts that reject the significance of these consequences, these are, quite literally, no reason not to go outside.
If you know, then you've foreseen that some particular outcome will follow from your performing the act of going outside. I think that's sufficient to provide a reason not to go outside. What wouldn't be is if the consequence was both unforeseen, and indirect.
But what TT said was that if they’re long-term, indirect, and unintended, they don’t count. In this case, it would be foreseen but not intended.
Demandingness
TT replies to my demandingness objection.
First, utilitarianism is intended as a theory of right action not as a theory of moral character.
But what makes one a good person on any ethical theory, should be a function of the rightness and wrongness of the actions they perform. Just as what a good mechanic is, should be a function of the efficiency and success of the car-fixing actions they perform. So, it's hard to see how this distinction is supposed to help. If Matthew denies this, then first I'm not sure what else is supposed to determine the value of moral character, and second it seems to rob utilitarianism of normative authority. Why care if the actions I perform maximize utility, if I'll still be a good person regardless?
Not at all! Suppose someone is fully paralyzed and can’t do any good actions. This doesn’t make them a worse person. If the nicest person on earth became unable to move, this wouldn’t make her a less good person.
Virtually no humans always do the utility-maximizing thing--it would require too great a psychological cost to do so. Thus, it makes sense to have the standard for being a good person be well short of perfection.
On utilitarianism what is the rightness of an act or omission determined by? Only the net utility produced. Therefore, it seems whether the weight of psychological costs is enough to make it right to abstain from donating to charity, donating your kidney etc. or choosing to purchase luxuries for yourself instead, is going to be determined by the net utility. But, donating will generate more utility on net even when we factor in the psychological costs to you. It is not just that you are falling short of perfection on utilitarianism, you are blameworthy for failing to do the right action. Matthew needs to give a principled basis for why, by the lights of utilitarian judgements, you wouldn't be blameworthy. But it's hard to see how that can be done.
There are going to be different accounts of this. I’m inclined to think rightness is a scalar property that is vague and naturalistic. Utilitarianism doesn’t hold there are precise facts about blameworthiness and praiseworthiness—for more on this see here. I think that blameworthiness is probably a combined function of how good one’s acts are and how demanding they are. It makes sense to have a nominal standard for blameworthiness on precisely utilitarian grounds—we should have a clear line between generally good but imperfect people and Hitler.
Most of Matthew's responses to the demandingness objection concede the demandingness of utilitarianism, arguing that it does not provide sufficient reason to think utilitarianism is false. After all, the correct ethical theory may well be demanding! This is fine, I think some of Matthew's responses here are reasonable. However, I never intended this objection to be a knockdown argument. Merely that our ordinary moral practice and beliefs are not nearly as demanding as utilitarianism entails, which is better explained and antecedently more expected on non-consequentialist views than utilitarianism. There is also the implication that utilitarianism wouldn't be a particularly helpful guide for humans, as, realistically, no one truly follows its demands. Not even Matthew does, he could have created a refugee fundraiser site rather than a blog dedicated to arguing for utilitarianism! From this point I'll only address objections of most interest.
I think my replies are decisive and show that being demanding does actually count in favor of a moral theory. In response to my claims of possible evolutionary debunking, TT says
This is probably true. But for one, my point with the demandingness is not just that it is unintuitive, it's that it implies a moral practice that is completely unlike ours, one that is impractical, nigh unlivable for humans.
But remember, the claim is not that all actions that aren’t perfect are wrong. Rather, I claim we should adopt a scalar account of rightness and wrongness. It doesn’t seem at all plausible that the most moral possible thing one can do would be nigh unlivable. Utilitarianism doesn’t say that all imperfect actions are wrong, just that they are not maximally right. That’s not implausible at all.
For two, these same sorts of debunking considerations apply, mutatis mutandis, to pretty much all of our moral intuitions. It is after-all a fact that our moral judgements in general are highly sensitive and plastic in the face of various non-truth tracking cultural/social pressures
For reasons I have explained earlier, I’m unconvinced by general evolutionary debunking arguments. But this one is as strong as they get. There’s a totally obvious evolutionary reason for us to care more about our friends and family, and few think morality is too demanding when it says it would be very wrong not to pay a lot of money to save one’s family. We should conduct an inference to the best explanation. If all you knew was that evolution explained our moral beliefs, you’d predict caring disproportionately about friends and family and less about faraway people. But you wouldn’t predict utilitarianism. One is a very plausible and predictable result of evolution; the other is a farcical just-so story.
It’s generally recognized that people have an obligation to pay taxes, despite that producing far less well-being than saving the lives of people in other countries.
But why think that this is an intuition about a moral duty people have rather than its just being a civic duty one recognizes as part and parcel of being a citizen of a country? In the same way we don't think putting up with shitty customers at Walmart is a moral duty, but it might be your duty as an employee at Walmart.
I think that the account that says there’s just an intrinsic duty to pay taxes is implausible, for the reasons Huemer explains in his book The Problem of Political Authority. But the point is that on the hypothesis that our intuitions are socially engrained, we’d expect them to favor being a good tax paying citizen, while we wouldn’t expect them to involve accepting the repugnant conclusion. Thus, social debunking is a better account of one than the other.
As Kagan (1989) points out, morality does often demand we save others, such as our children or children we find drowning in a shallow pond. This is because social pressures result in caring about people in one’s own society, who are right in front of us, rather than faraway people, and especially about one’s own children.
To some extent, that's right. But again, there is something that seems right about this intuition, that even Matthew surely must admit. We think responsibility is scaled by 1) how much control you exert/have over the situation
But you have control over whether other people die too. You can save a life by donating about 5000 dollars.
2) your vision/awareness of the situation. The less control you have, and the more inattentional you are to it, the less responsible you are. If you hear on the radio that a tsunami hits a distant country, you're less responsible for not hauling over there and saving who you can, than if a tsunami happens in your vicinity and you fail to save people you could save by extending your arm.
If the drowning child is far away and you can press a button that will prevent them from drowning at the bottom of a pool, your obligation to save them is just as strong. Perhaps our emotional reaction is affected by their proximity, but if we rationally reflect, it obviously shouldn’t be.
Tenth, scalar consequentialism, which I’m inclined to accept, says that rightness and goodness of actions come in degrees—there isn’t just one action that you’re obligated to take; instead, you just have actions ranked from most reason to least reason. Scalar utilitarianism gives a natural and intuitive way to defuse this. Sure, maybe doing the most right thing is incredibly demanding — but it’s pretty obvious that giving 90% of your money to charity is more right than only giving 80%. Thus, by admitting of degrees, demandingness worries dissipate in an instant.
This response is one of the more interesting, but again, it is hard to see how it helps. You'd still always have most reason to do the most utility-maximizing action. If I am deliberating among a set of options and there is one option I have most reason to do, if I don't do it, surely I'd be blameworthy for failing to do what I have most reason to do.
But every plausible moral view will say that there are some demanding actions that you have most reason to do. You clearly have most reason to save lives at minimal personal cost. Scalar utilitarianism doesn’t issue demands or give accounts of blameworthiness—it just gives an account of what you have reasons to do.
It looks like what Matthew has in mind is that there is no particular action that you ought do just actions which are better (more reason) and worse (less reason) to do. But for one, I'm not sure what a reason is if it's not something which tells you (or counts-in-favor-of) you ought to perform some action rather than another. For two, this leads to absurdities, suppose you're in a room and there are two buttons and you can only press one, B1 maximizes utility for a billion people, B2 maximizes utility for one person. Obviously, you ought to press B1 and not B2, especially if we think maximizing utility is the only thing that makes ethical decisions good. Yet, if there is no particular right action you ought do, that's false. Both buttons increase utility, B2 just does it much much less.
The scalar utilitarian account would say that it would be better to take the one action and you’d have more reason to do it. The scalar account of rightness and wrongness could also say that one of the actions would be pretty wrong and the other pretty right. If an action is very wrong, we can just declare it wrong, just as if a person is very tall, we can just declare him tall. Reasons do count in favor of actions—I’m not sure why TT thinks scalar utilitarianism must deny this.
In my article, I raised the drowning child analogy. TT in response brought up that the child isn’t visible and is far away. I pointed out that if there were a button that could save them, you should still press it. TT replies
But this is clearly irrelevant. Suppose you could save a child by going into the pond and pressing a button to save them far away. You still ought to do it.
Observing them is relevant in the sense that you are directly there, you are fully capable of acting in the now, and the situation confers reasons on you for you to directly choose to respond to, or ignore. Sure, if you magically know for a fact a kid is drowning far away and you have a magic button you should press it, since all the relevant features are shared! But when you perform ordinary actions you aren't being like "Heh, gonna buy expensive khakis even though I could use the money on charity because fuck starving children in Africa", were that the case, you'd be doing something wrong.
Suppose that the person didn’t have the thought “heh, gonna drive to the bank instead of saving the life of a drowning child.” Also, they couldn’t see the child and the button saved a far away child from drowning. They should still obviously press it.
It’s totally unclear why this is the case! In the drowning child case, maybe you deem the child worthy of saving—just like a person who doesn’t donate to a children’s hospital deems the children worthy of saving—but you just prefer not to save them and spend your money on other things.
If you deem them worthy of saving, in the sense that you see them as a end-in-themself then it's irrational not to save them. If you don't save them because you'll get your pants wet, that means you actually don't find them worth saving at least in the relevant sense.
But then if you don’t save a child from malaria because you’d rather go on vacation, that would also mean you don’t find them worthy of saving.
TT’s responses here seem to miss the point, and he ignores most of my arguments. I conclude that the demandingness objection is totally unpersuasive.
He gives a series of cases involving raping people where one’s pleasure allegedly outweighs the pain of the victim—a non-hedonist, or a desert-adjusted hedonist, can just say that only some pleasures count, and ones from rape don’t.
I didn't make this explicit but the raping coma patients/gang rape cases don't only apply to hedonism. Even if you're a non-hedonist, you can think pleasures are good, and the known consequences of raping coma patients is more pleasure and no pain caused. You can also think there are other goods contributing to well-being, such as desire-satisfaction, the gang-rape satisfies the desires of more people, the coma patient doesn't have any active desires which are being violated, active desires count for more etc. I'm not convinced that non-hedonic consequentialisms' have a straightforward escape hatch here, but regardless I was attacking Matthew's view. Desert-adjusted hedonism strikes me as implausible for other reasons. It falls apart really fast when we realize there is no principled basis for valuing some pleasures over others.
You can hold the view that pleasure from rape doesn’t actually make one better off. But this is primarily just a criticism of theories of well-being, which is not the topic of the debate. Also, for more on this objection to hedonism, see here. TT gave a case where you can give up your life to save a slightly less happy friend.
On scalar utilitarianism, there aren’t facts about what you’re permitted to do; you just wouldn’t have most reason to do it—it wouldn’t be the best thing you could do. But this seems plausible! It doesn’t seem like you have most reason to save your friend.
1. This seems to imply that scalar utilitarianism isn't action-guiding, why would I adopt it as an ethical theory if I want to know what actions I can and can't do? That seems like the bare minimum of what I'd want an ethical system to do.
2. I still have no idea what it means to say you have most reason to do something, if it doesn't imply that you ought do it.
3. If an ethical theory entails that there is no fact about whether you are permitted to, say, torture and abuse children for sadistic pleasure, I think that is evidence the ethical theory is a failure.
It is action guiding! It describes factors counting in favor of and against actions. Furthermore, the scalar account of wrongness can get the result that some actions are actually wrong. If a theory says it’s seriously wrong, blameworthy, impermissible, and bad to torture children, the fact that it denies that there are permissions woven into the fundamental moral reality doesn’t seem to count against it.
If we say that you have equal reason to do them or some such, then if both you and your friend start with 1,000 units of torture and can either reduce your own torture by 1,000 or the other’s by 500, both would be fine. But taking the action that reduces the other’s suffering by 500 makes everyone worse off.
This is really vague, because I don't know what 1000 or 500 units of torture is. I would say what I would normally say, it would be supererogatory for you to reduce your friend's pain (it is a selfless, other-regarding act), but it would also be good for you to eliminate your torture, you're not obligated to reduce your friend's. I fail to see how this is unintuitive. You're not making everyone worse off, you're reducing your friend's torture.
But this shows that allegedly praiseworthy acts collectively result in arbitrarily large amounts of gratuitous misery and are collectively self-defeating: if you each reduce the other’s torture by 500, you each end up with 500 units, whereas if you each reduce your own by 1,000, you each end up with none. This is a problem for the view. 1,000 units of torture is a lot, as is 500 units—let’s say one unit is the amount experienced by the average tortured person.
Regarding the case where a thief saves grandma’s life while trying to steal her purse:
This stops being unintuitive when we distinguish objective and subjective wrongness. Objective wrongness is about what one would do if they were omniscient, what a perfectly knowing third party would prefer, what they have most actual reason to do. The utilitarian judgment here about objective wrongness is obvious. But the utilitarian can agree that this action was wrong based on what the person knew at the time—it just coincidentally turned out for the best.
But whether a given action is right on consequentialism tout court, does not depend on the subjective states of the agent, but only what is objectively right (objectively produces the best state of affairs). As a non-consequentialist, I take into account intentions and other subjective states of the agent when analyzing what makes an action right, but consequentialists don't think whether an agent subjectively acted with the goal of maximizing utility is right-making or wrong-making, what matters is if they actually maximized utility.
But every plausible view should hold that it was objectively right in that if they were perfectly rational and impartial, they’d take it over the other act. It seems this is best described as doing the right thing for the wrong reason, and as subjectively wrong. Third parties should prefer that the thief does this, even if the thief has bad motivation.
TT ignores most of my points in response to the organ harvesting objection but replies to this
Imagine an uncontrollable epidemic afflicts humanity. It is highly contagious and eventually every single human will be affected. It causes people to fall unconscious. Five out of six people never recover and die within days. One in six people mounts an effective immune response. They recover over several days and lead normal lives. Doctors can test people on the second day, while still unconscious, and determine whether they have mounted an effective antibody response or whether they are destined to die. There is no treatment. Except one. Doctors can extract all the blood from the one in six people who do mount an effective antibody response on day 2, while they are still unconscious, and extract the antibodies. There will be enough antibodies to save 5 of those who don’t mount responses, though the extraction procedure will kill the donor. The 5 will go on to lead a normal life and the antibody protection will cover them for life.
But I still have the intuition that this is wrong, assuming the donor doesn't consent. Though, much less strongly than in the kidnapping doctor case, because in that case he is directly kidnapping them off the street and murdering them, as opposed to its being a patient already in his care and is already unconscious due to an affliction, and the doctor isn't directly murdering them, rather performing an extraction procedure that will result in their death. So even if I didn't share the intuition, this does nothing to save consequentialism from the unintuitive answer given to the kidnapping doctor case.
But in the ordinary organ harvesting case, you are a patient in their care. The original organ harvesting hypothetical is just killing them as part of a procedure that extracts their organs. TT explains that he thinks it’s wrong to treat ethics as just a numbers game, but ignores the argument I give for this: that any rational agent whose preferences meet various side constraints will be modelable as optimizing for a numerical utility function.
But consequentialism can—just have the social welfare function include equality, for example. Additionally, utilitarianism supports a much more egalitarian global distribution than exists currently.
Utilitarianism is not principally egalitarian though, which I take to be a problem because egalitarianism is one of its main motivations. Sure, you can define consequentialism in such a way that it is, but Matthew and I would both agree that such a view is implausible for other reasons (you should torture someone infinitely to reduce global inequality etc.).
Holding that egalitarianism is a prima facie good doesn’t require holding that someone should be infinitely tortured to reduce global inequality!?
These tend to be wrong, but that’s because they tend to produce bad outcomes. It seems that breaking promises is bad because it makes things worse—its badness doesn’t just float free. Any high-level theory is going to sound weird when applied to concrete cases.
I take this to be Matthew agreeing that this is a case where his theory fails to track our intuitions and how we actually diagnose actions such as promise-breaking as wrong. We explicitly don't think it's wrong in virtue of the bad outcomes, it's wrong because you aren't respecting a prior commitment you made, you deceived them, you're saying what they want, and what they believe isn't important, etc. In fact, we'd agree that even in cases where these sort of actions do not lead to bad outcomes you still did something wrong.
I think utilitarianism tracks our most fundamental intuitions, including about promise breaking, as I’ve argued at some length (I can’t remember which article did this though, sorry).
In the next section, Truth Teller responds to various objections that are not, I believe, relevant to the dispute—none of them are arguments I’ve made for consequentialism.
But you did. The entire reason I included them is because of this video.
But this is a totally separate video I made in high school that I didn’t include in the debate.
Thus I conclude deontology remains false.