INTRODUCTION
Arjun has written his rebuttal to my opening statement. A significant majority of the points I raised went unaddressed, and the ones that were addressed were not adequately refuted.
THEORETICAL VIRTUES
The first point I raised was about theoretical virtues: utilitarianism is simpler, more parsimonious, clearer, and so on. This favors utilitarianism considerably, for theoretical virtues are crucial to evaluating a theory.
RETURN TO HISTORY
Two other features favor utilitarianism based on the historical record.
1 Utilitarians, when they diverge from common sense, tend to be right — often hundreds of years ahead of their time. Bentham, for example, supported legalizing homosexuality in the 1700s. We’d expect the correct moral theory to get things right far ahead of time, and that’s exactly what we observe.
2 All examples of moral atrocities throughout history have contradicted utilitarianism because they’ve involved systematic exclusion from the moral circle — something which utilitarianism rules out.
To both of these points, we saw no response. As is typical of the critics of utilitarianism, there was no refutation of the many features of the cumulative case for utilitarianism.
THE SYLLOGISM WHICH IF TRUE PROVES UTILITARIANISM
This is the syllogism in question.
These premises, if true, prove utilitarianism.
1 A rational egoist is defined as someone who does only what produces the most good for themselves.
2 A rational egoist would do only what produces the most happiness for themselves.
3 Therefore, only happiness is good (for selves who are rational egoists).
4 The types of things that are good for selves who are rational egoists are also good for selves who are not rational egoists, unless they have unique benefits that only apply to rational egoists.
5 Happiness does not have unique benefits that only apply to rational egoists.
6 Therefore, only happiness is good for selves who are or are not rational egoists.
7 All selves either are or are not rational egoists.
8 Therefore, only happiness is good for selves.
9 Something is good if and only if it is good for selves.
10 Therefore, only happiness is good.
11 We should maximize good.
12 Therefore, we should maximize only happiness.
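Since the argument's validity is at issue, one fragment of it, the inference from premises 8 and 9 to 10, can be machine-checked by brute force over a small finite model. This is purely my own illustrative encoding (the two-person domain, the predicate names, and the restriction to this fragment are assumptions, not anything from the debate):

```python
from itertools import product

# Tiny finite-model check: over every possible extension of "good_for" and
# "good" on a two-person, two-kinds domain, verify that premises 8 and 9
# jointly entail conclusion 10. This sketches the validity of one step of
# the syllogism, not the full argument.
KINDS = ["happiness", "knowledge"]  # candidate goods (arbitrary placeholders)
SELVES = ["a", "b"]                 # a two-person domain

def fragment_is_valid():
    pairs = [(k, s) for k in KINDS for s in SELVES]
    for gf_bits in product([False, True], repeat=len(pairs)):
        good_for = {p for p, bit in zip(pairs, gf_bits) if bit}
        for g_bits in product([False, True], repeat=len(KINDS)):
            good = {k for k, bit in zip(KINDS, g_bits) if bit}
            # Premise 8: only happiness is good for selves.
            p8 = all(k == "happiness" for (k, s) in good_for)
            # Premise 9: something is good iff it is good for some self.
            p9 = all((k in good) == any((k, s) in good_for for s in SELVES)
                     for k in KINDS)
            # Conclusion 10: only happiness is good.
            c10 = all(k == "happiness" for k in good)
            if p8 and p9 and not c10:
                return False  # found a countermodel
    return True

print(fragment_is_valid())  # True: no countermodel in this finite domain
```

Of course, a finite-model check over a toy domain only illustrates the inference pattern; it doesn't settle whether the premises themselves are true.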
Arjun asks for clarification as to whether my argument is this:
Call a “rational egoist” someone who does only what maximizes his self-interest.
Hedonism is correct.
So happiness is the only kind of good.
So a rational egoist only does what maximizes his happiness.
If something is in the self-interest of rational egoists but not good for people in general, then it must have unique benefits that only apply to rational egoists.
A person’s happiness is in his self-interest if he is a rational egoist, but it doesn’t have unique benefits to him only if he is a rational egoist.
So a person’s happiness must be good for him.
In order for something to be good in general, it has to be in the self-interest of some people.
Because hedonism is true, this is the same as saying that for something to be good in general, it must make some people happy.
Only the total happiness is like this.
So the total happiness is the only good.
We should act in a way that maximizes the good.
So we should act in a way that maximizes the total happiness, which is utilitarianism.
This seems similar to my argument but is in some ways different. Given that my argument is valid, Arjun should explain which premises he rejects.
I guess that “happiness” when used separately from “happiness for oneself” means “the total happiness,” because the argument might be trivially circular otherwise? I’m not really sure what’s going on here and it’s possible I’ve changed the argument in trying to rewrite it.
This relates to this conjunction of premises.
8 Therefore, only happiness is good for selves.
9 Something is good if and only if it is good for selves.
10 Therefore, only happiness is good.
11 We should maximize good.
12 Therefore, we should maximize only happiness.
These establish that only happiness is good and that we should maximize good. Saying happiness is good means it’s good overall, not merely for selves.
Even if we accept that hedonism is correct, which I wouldn’t grant, claims (8) and (12) aren’t plausible unless you already accept the conclusion that utilitarianism is correct.
I already provided an argument for this:
It seems hard to imagine something being good, but being good for literally no one. If things can be good while being good for no one, there would be several difficult entailments that one would have to accept, such as that there could be a better world than this one despite everyone being worse off.
To advance the claim in greater detail, here are two specific implications:
1 A universe with no life could have moral value, given that things can be good or bad while being good or bad for no one. One could claim that good things must relate to people in some way despite not being directly good for anyone, but this would be ad hoc, and the implication remains a surprising result.
2 If something could be bad while being bad for no one, then galaxies full of people experiencing horrific suffering, to no one's benefit, could be a better state of affairs than one where everyone is happy and prosperous but which contains vast quantities of things that are bad for no one yet bad nonetheless. For example, suppose we take the violation of rights to be bad even when it's bad for no one. A world where everyone violated everyone else's rights unfathomable numbers of times, in ways that harm literally no one, and where everyone prospers could, given the number of people affected, be morally worse than a world in which everyone endures the most horrific forms of agony imaginable.
There are things that are good that aren’t in the direct self-interest of any particular person, and you have reasons to act other than maximizing the good, like to meet your obligations.
No argument was given for this conclusion. Begin with the claim that things can be good while being good for no one: Arjun gave no examples of such things, so the response will depend on which specific examples he has in mind. The claim that we have special obligations was addressed in the previous article. A few more objections:
1 This would make morality distinct from other domains of practical reasoning, like the epistemic domain, where we should believe what we have the most reason to believe but there are no absolute epistemic obligations.
2 As I argue here, utilitarianism can explain all other normative concepts in terms of value. Introducing the concept of obligation, which is not reducible to other normative concepts, complicates the stock of fundamental normative concepts.
3 This argument is a syllogism:
1 Things are good iff they are important.
2 Obligations cannot be grounded in things that are not important.
3 Therefore, obligations must be grounded in things that are good.
4 If obligations are grounded in things that are good, then things’ status as good explains the obligations.
5 Therefore, things’ status as good explains the obligations.
6 Non-consequentialism denies that things’ status as good explains the obligations.
7 Therefore, non-consequentialism is false.
Moving on to hedonism, Arjun says he rejects it, but he hasn’t addressed the dozens of arguments I’ve given for it. Thus, hedonism is on solid footing.
HARSANYI’S PROOF
I’ll just quote what I said about Harsanyi’s argument in the opening statement.
Harsanyi’s argument is as follows.
Ethics should be impartial—it should be a realm of rational choice that would be undertaken if one was making decisions for a group, but was equally likely to be any member of the group. This seems to capture what we mean by ethics. If a person does what benefits themselves merely because they don’t care about others, that wouldn’t be an ethical view, for it wouldn’t be impartial.
So, when making ethical decisions one should act as they would if they had an equal chance of being any of the affected parties. Additionally, every member of the group should be VNM rational and the group as a whole should be VNM rational. This means that their preferences should satisfy the four axioms of rational decision theory (completeness, transitivity, continuity, and independence), which are slightly technical but basically universally accepted.
These combine to form a utility function, which represents the choice worthiness of states of affairs. For this utility function, it has to be the case that a one half chance of 2 utility is equally good to certainty of 1 utility. 2 utility is just defined as the amount of utility that’s sufficiently good for a 50% chance of it to be just as good as certainty of 1 utility.
So now as a rational decision maker you’re trying to make decisions for the group, knowing that you’re equally likely to be each member of the group. What decision making procedure should you use to satisfy the axioms? Harsanyi showed that only utilitarianism can satisfy the axioms.
Let’s illustrate this with an example. Suppose you’re deciding whether to take an action that gives 1 person 2 utility or 2 people 1 utility. The above axioms show that you should be indifferent between them. You’re just as likely to be each of the two people, so from your perspective it’s equivalent to a choice between a 1/2 chance of 2 utility and certainty of 1 utility, and we saw above that those are by definition equally valuable. So we can’t just go the Rawlsian route and try to privilege those who are worst off. That is bad math! The probability theory is crystal clear.
Now let’s say that you’re deciding whether to kill one to save five, and assume that each of the 6 people will have 5 utility. Well, from the perspective of everyone, all of whom have to be impartial, the choice is obvious. A 5/6 chance of 5 utility is better than a 1/6 chance of 5 utility. It is better by a factor of five. These axioms combined with impartiality leave no room for rights, virtue, or anything else that’s not utility function based.
This argument shows that morality must be the same as universal egoism—it must represent what one would do if they lived everyone’s life and maximized the good things that were experienced throughout all of the lives. You cannot discount certain people, nor can you care about agent centered side constraints.
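The expected-utility arithmetic in the two quoted examples can be sketched in a few lines. The encoding is my own illustration; the utility numbers are the ones from the examples above:

```python
def expected_utility(outcomes):
    """Expected utility if you are equally likely to be each person."""
    return sum(outcomes) / len(outcomes)

# Example 1: give 1 person 2 utility vs. give each of 2 people 1 utility.
option_a = [2, 0]  # you might be the beneficiary, or the other person
option_b = [1, 1]
assert expected_utility(option_a) == expected_utility(option_b) == 1.0

# Example 2: kill one to save five, each survivor ending up with 5 utility.
kill_one  = [5, 5, 5, 5, 5, 0]  # 5/6 chance you are a survivor
spare_one = [0, 0, 0, 0, 0, 5]  # 1/6 chance you are the survivor
assert sum(kill_one) == 5 * sum(spare_one)  # better by a factor of five
```

This is just the veil-of-ignorance bookkeeping made explicit: behind the veil, equal chances of being each person turn interpersonal totals into ordinary expected values.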
RIGHTS
I gave a series of objections to rights, most of which were unaddressed.
Arjun responded to my first objection to rights as follows.
Everything that we think of as a right is reducible to happiness. For example, we think people have the right to life. Yet the right to life increases happiness. We think people have the right not to let other people enter their house, but we don’t think they have the right not to let other people look at their house. The only difference between shooting bullets at people, and shooting soundwaves (ie making noise) is one causes a lot of harm, and the other one does not.
Emphasis mine. This principle isn’t universally true. Rights violations aren’t just a kind of harm, since you can harm someone without violating his rights and violate someone’s rights without harming him. For example, it could make me unhappy that someone exists, but his existence doesn’t violate any of my rights. Someone could also force me to improve my diet, which wouldn’t harm me but would be a violation of my rights.
I didn’t claim that rights violations were always harmful. Rather, my claim was that the basis for rights is rooted in utilitarian considerations. All of the things we think of as rights generally make things go best, and when they don’t, we don’t think that they’re rights. The examples of entering versus looking at houses, and of bullets versus soundwaves, illustrate this.
The diet counterexample fails: third parties forcing diets on people wouldn’t make their lives better.
For example, we don’t think it’s a violation of rights to look at people, but if every time we were looked at we experienced horrific suffering, we would think it was a rights violation to look at people.
I don’t endorse a position where you should impose horrific suffering to avoid violating any rights, just that respecting rights gives you some good reason for action irrespective of the consequences of your action.
This does not address my argument. I was explaining that looking at people would be a rights violation if it caused harm, because rights violations are rooted in consequentialist considerations. Arjun’s response is a red herring.
(Including pictures of red herrings doesn’t burn my word count).
My second objection to rights was as follows.
2 If we accept that rights are ethically significant, then there’s some number of rights violations that could outweigh any amount of suffering. For example, suppose that there are aliens who will experience horrific torture that gets slightly less unpleasant for every leg of a human that they grab, without the humans’ knowledge or consent, such that if they grab the legs of 100 million humans the aliens will experience no torture. If rights are metaethically significant, then the aliens grabbing the legs of the humans, in ways that harm no one, would be morally bad; the number of rights violations would outweigh the torture averted. However, this doesn’t seem plausible. It seems implausible that aliens should have to endure horrific torture so that we can preserve our sanctity in some indescribable way. If rights matter, a world with enough rights violations where everyone is happy all the time could be worse than a world where everyone is horrifically miserable all of the time but where there are no rights violations.
Arjun replies
I don’t think this follows. It’s possible that there are different kinds of rights and different kinds of suffering and that some of these are incommensurable.
But if any rights violations are worse than a tiny amount of suffering, then a sufficiently vast number of rights violations would be worse than a comparatively enormous amount of suffering. This is, however, implausible; a world where everyone’s rights are intact but everyone is horrifically miserable would be worse than one in which people’s rights were constantly violated but everyone was super well off.
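The aggregation point here is simple arithmetic: if each violation carries any fixed positive badness, some finite number of violations outweighs any fixed amount of suffering. A sketch (the particular numbers are arbitrary placeholders of my own choosing):

```python
import math

def violations_needed(badness_per_violation, total_suffering):
    """Smallest number of violations whose combined badness exceeds the suffering."""
    return math.floor(total_suffering / badness_per_violation) + 1

# Even a minuscule per-violation badness (2**-20, chosen so the float
# arithmetic is exact) eventually outweighs a huge amount of suffering.
tiny = 2 ** -20
n = violations_needed(tiny, 1000)
assert n == 1000 * 2 ** 20 + 1
assert n * tiny > 1000
```

Blocking this requires denying that the badness of violations adds up at all, which is the incommensurability move, and that move has costs of its own.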
I next said
If my opponent argues for rights then I’d challenge him to give a way of deciding whether something is a right that is not based on hedonic considerations.
Arjun replied
I would start with intuitions from particular cases.
But if the particular cases don’t coalesce into a coherent account, then the resulting theory is immensely complex: it has to posit as a brute fact the vast coincidence that all the rights make things go best, and it has to sacrifice a nearly infinite amount of simplicity. When theories are both complicated and have to posit nigh-miraculous coincidences, that’s when you know something has gone awry. As Bradley notes in his book:
“This insistence on simplicity is far from universally shared among philosophers, who sometimes insist that the truth of the matter about ethics must be complicated. To these philosophers, I can only say that complicated views always go wrong somewhere; where exactly they go wrong is often concealed by the complexity. The more complex the view, the more work it takes to draw out the unwelcome consequences—but they are always there.”
I presented four more objections to rights that were not addressed — I won’t rehash them for this reason. They show that accepting rights requires biting enormous bullets and leads to paradox.
However, I did present one objection that Arjun replied to
Torture Transfer: Mary works at a prison where prisoners are being unjustly tortured. She finds two prisoners, A and B, each strapped to a device that inflicts pain on them by passing an electric current through their bodies. Mary cannot stop the torture completely; however, there are two dials, each connected to both of the machines, used to control the electric current and hence the level of pain inflicted on the prisoners. Oddly enough, the first dial functions like this: if it is turned up, prisoner A’s electric current will be increased, but this will cause prisoner B’s current to be reduced by twice as much. The second dial has the opposite effect: if turned up, it will increase B’s torture level while lowering A’s torture level by twice as much. Knowing all this, Mary turns the first dial, immediately followed by the second, bringing about a net reduction in both prisoners’ suffering.
Arjun admits
In general, weak deontology suffers from the problem that two actions A and B can both be wrong independently but acceptable if done at once or in quick succession as part of a combined action, which is unintuitive. The best argument for utilitarianism is the flaws in all other moral theories, but utilitarianism’s flaws are more severe and it’s more likely that there is a superior undiscovered moral theory than that utilitarianism is correct.
This is a general problem for rights, which are the most frequent counterargument against utilitarianism. Thus, non-consequentialism seems to be toast.
OTHER EXPLANATORY DEFICITS
In section 6 of my opening statement, I presented six cases that other theories can’t address. As if to prove the point, Arjun addressed none of them.
Case 1
Imagine you were deciding whether or not to take an action. This action would cause a person to endure immense suffering—far more suffering than would occur as the result of a random assault. This person literally cannot consent. This action probably would bring about more happiness than suffering, but it forces upon them immense suffering to which they don’t consent. In fact, you know that there’s a high chance that this action will result in a rights violation, if not many rights violations.
If you do not take the action, there is no chance that you will violate the person’s rights. In fact, absent this action, their rights can’t be violated at all. In fact, you know that the action will have a 100% chance of causing them to die.
Should you take the action? On most moral systems, the answer would seem to be obviously no. After all, you condemn someone to certain death, cause them immense suffering, and they don’t even consent. How is that justified?
Well, the action I was talking about was giving birth. After all, those who are born are certain to die at some point. They’re likely to have immense suffering (though probably more happiness). The suffering that you inflict upon someone by giving birth to them is far greater than the suffering that you inflict upon someone if you brutally beat them.
So utilitarianism seems to naturally—unlike other theories—provide an account of why giving birth is not morally abhorrent. This is another fact that supports it.
Case 2
Suppose one is deciding between two actions. Action 1 would have a 50% chance of increasing someone’s suffering by 10 units and action 2 would have a 100% chance of increasing their suffering by 4 units. It seems clear that one should take action 2. After all, the person is better off in expectation.
However, non-utilitarian theories have trouble accounting for this. If there is a wrongness to violating rights that exists over and above the harm caused, then, assuming the badness of violating rights is equivalent to 8 units of suffering, action 1 would be better (a 1/2 chance of 18 units of badness, 9 in expectation, is less bad than a certainty of 12).
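The bookkeeping in this case can be sketched as follows. The 8-unit figure is the stipulation from the text; the function and its name are my own illustration:

```python
RIGHTS_VIOLATION_BADNESS = 8  # stipulated badness of a violation, over and above harm

def expected_badness(prob_of_harm, suffering, count_violation=True):
    """Expected badness of an action under the given accounting."""
    extra = RIGHTS_VIOLATION_BADNESS if count_violation else 0
    return prob_of_harm * (suffering + extra)

# Rights-based accounting: action 1 comes out better, against intuition.
assert expected_badness(0.5, 10) == 9.0   # 0.5 * (10 + 8)
assert expected_badness(1.0, 4) == 12.0   # 1.0 * (4 + 8)

# Plain hedonic accounting: action 2 comes out better, matching intuition.
assert expected_badness(0.5, 10, count_violation=False) == 5.0
assert expected_badness(1.0, 4, count_violation=False) == 4.0
```

Any fixed, probability-weighted badness assigned to violations produces this kind of reversal for some choice of numbers; the hedonic accounting avoids it by counting only the harm.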
Case 3
Suppose you stumble across a person who has just been wounded. They need to be rushed to the hospital if they are to survive, and if they are rushed, they will very likely survive. The person is currently unconscious and has not consented to being rushed to the hospital. Thus, it’s difficult for non-utilitarian accounts to explain why it’s morally permissible to rush the person to the hospital: they did not consent, and the rationale is purely about making them better off.
Case 4
An action will make everyone better off. Should you necessarily do it? The answer seems to be yes, yet other theories have trouble accounting for this if the action violates side constraints.
Case 5
When the government taxes us, is that objectionable theft? If not, why not? Consequentialism gives the only satisfactory account of political authority.
Case 6
Suppose one was deciding whether to press a button. Pressing the button would have a 50% chance of saving someone, a 50% chance of killing someone, and would certainly give the presser five dollars. Most moral systems, deontology in particular, would hold that one should not press the button.
However, Mogensen and MacAskill argue that this situation is analogous to nearly everything that happens in one’s daily life. Every time a person gets in a car they affect the distribution of future people by changing very slightly the time at which lots of other people have sex. They also change traffic distributions, potentially reducing and potentially increasing the number of people who die in traffic accidents. Thus, every time a person gets in a car, there is a decent chance they’ll cause an extra death, a high chance of changing the distribution of lots of future people, and a decent chance they’ll prevent an extra death. Given that most such actions produce fairly minor benefits, it is quite analogous to the button scenario described above.
Given that any act which changes the traffic by even a few milliseconds will affect which of the sperm out of any ejaculation will fertilize an egg, each time you drive a car you causally change the future people that will exist. Your actions are thus causally responsible for every action that will be taken by the new people you cause to exist. The same is true if you ever have sex; you will change the identity of a future person.
ARJUN’S OBJECTIONS TO MY REBUTTAL
Arjun also posted some responses to my rebuttal to his opening statement. He didn’t address most of my responses, so I won’t repeat them.
Arjun raised a point about moral uncertainty meaning we shouldn’t act purely as utilitarians. (My rebuttal is in italics, Arjun’s response isn’t).
I’d agree that given moral uncertainty, we shouldn’t act as strict utilitarians. However, this fact does nothing to show utilitarianism is not correct. This debate is about what one in fact has most reason to do — be it the utilitarian act in all situations or some other — so pointing out what it’s reasonable to do given moral uncertainty (which is much like factual uncertainty) does nothing to show that utilitarianism is not correct. Discussion of how we should practically reason given uncertainty has nothing to say about which theory is actually correct.
Emphasis mine. It’s unclear to me whether Matthew intends this debate to be about what one has the most reason to do or about which moral theory is most likely to be correct. I don’t think these are the same question, since you could have the most reason to take an action that contradicts the moral theory that you find most likely. For example, even if you’d give 3 to 1 odds against moral realism being true, you should still act as if it’s true, since if it’s false then it doesn’t matter what you do anyway.
Perhaps I was ambiguous — a reason only counts as a genuine reason if it is relevant on the true moral theory. If deontology is false, and rights only give us reasons to do things assuming deontology, then rights don’t give us genuine reasons. Thus, these are the same question.
a strong form of impartiality . . . collectively self defeating. After all, if we all do what’s best for our families at the expense of others, given that everyone is part of a family, every person doing what’s best for their own family will be bad for families as a whole. . .
It’s not clear to me what’s meant by “strong” here. But more importantly, the hypothetical posed is meaningfully different from the question of what to do in an individual case.
Scenario A: A parent faced with a choice between saving his own child and saving the children of two distant strangers has a good reason—his obligation to act in the interests of his own children—to take the first option.
Scenario B: Suppose he were faced instead with the knowledge that 100 randomly selected people (including himself, potentially) would be presented the dilemma above. Given the opportunity to force them all to choose one option or the other, he should force them all to choose to save the strangers. This follows the same principle since it’s in the best interest of his own child as well as all children.
These are two distinct scenarios. Choosing your own child in Scenario A doesn’t somehow force other people to choose the wrong option in Scenario B, and there’s no contradiction in following a general principle that leads you to be partial to your own children in both cases.
This is why it’s collectively self-defeating. If we all privilege our families, as the prisoner’s dilemma structure shows, we’ll all be worse off. Thus, the rule one should endorse from the standpoint of what one wants to maximize differs from the decision one actually makes. It’s the same basic idea as Huemer’s paradox of weak deontology, just applied to a different context.
I’m not sure exactly what’s meant by “intrinsic importance,” but from my reading, the idea that decisions should be made based on “intrinsic importance” assumes impartiality, so this is circular. Your particular agent-relative obligations give you good reasons other than anyone’s “intrinsic importance.”
The concept of being important, or really mattering, seems like a relatively simple concept. If morality isn’t grounded in what really matters — and if truly being important is an alien concept to our conception of morality — what useful thing are we even doing? Morality isn’t significant if it’s not about what really matters.
The other objections to partiality were not addressed, but I won’t repeat them here.
CONCLUSION
Given the multitude of unaddressed arguments, the ball is largely in Arjun’s court at this point. I look forward to seeing how he addresses the cumulative case for utilitarianism and the many arguments for it. So far, this has been a useful and fruitful exchange.
To be continued.