All My Writings On Utilitarianism
The response to Huemer, and the "utilitarianism wins outright" series
Here’s part 2.
Contra Huemer on Utilitarianism Part 1
"The problem with this whole economics business is that it gets deeply unintuitive results. These economists talk endlessly about supply and demand curves. These graphs seem intuitive until you apply them to particular cases. Take the idea of a demand curve. It seems intuitive at first that at higher prices you purchase less of things. However, this general intuition gives deeply implausible conclusions, such as that the minimum wage causes unemployment, that rent control causes shortages, and that it's good when American's buy things from other countries, even though it costs American jobs. At this point this whole 'demand curve balderdash,' starts to seem absurd."
- A strawman
Michael Huemer is not a utilitarian, for reasons explained in this article, illuminatingly titled "Why I Am Not a Utilitarian." As the name "Bentham's Bulldog" might suggest, I am a utilitarian (I am not, however, a bulldog). Huemer has previously expressed some mild sympathy for utilitarianism, writing that it is "not a crazy view." However, as we learn in this article, our sanity does not justify trusting us with responsibilities as important as properly distributing cookies between serial killers and Mother Teresa, or keeping deathbed promises.
Why is Huemer not a utilitarian? Well, Huemer is an ethical intuitionist, thinking that ethics must be based around making sense of a large number of particular cases. On Huemer's view, ethics should have acceptable intuitive implications. Thus, he rejects utilitarianism for the same reason one would be justified in rejecting a seemingly acceptable moral theory upon learning that it holds that torturing infants for fun is justified. Huemer lays out a series of unintuitive results of utilitarianism and argues that these give us good reason to reject utilitarianism. Later posts will argue against those specific intuitions.
Huemer says that utilitarians biting the bullet is problematic (dentists tend to say similar things). He says "The utilitarian faces a dilemma: a) If you don’t accept ethical intuition as a source of justified belief, then you have no reason for thinking that enjoyment is better than suffering, that satisfying desires is better than frustrating them, that we should produce more good rather than less, or that we should care about anyone other than ourselves. b) If you do accept ethical intuition, then at least prima facie, you should accept each of the above examples as counter-examples to utilitarianism. Since there are so many counter-examples, and the intuitions about these examples are strong and widespread, it’s hard to see how utilitarianism could be justified overall."
I accept the second horn of the dilemma, but weakly. I think that ethical intuitions do provide some support to beliefs, but it very much depends on the nature of the intuition. Much as it would be absurd to reject economics based on intuitively compelling counterexamples, the types of intuitions that justify general principles are often more reliable than intuitions about particular cases. If we look throughout human history, we see a whole host of terrible moral intuitions. Huemer (rightly) argues that our practice of eating meat is the worst thing in the world. A practice that most people regard as permissible, but that Huemer regards as akin to the Holocaust, shows the frequent unreliability of our intuitions. For much of human history, people regarded torturing and killing innocent people as permissible. Slavery was tolerated. All of this shows that our intuitions about particular cases are very often wrong.
Huemer asks how we can justify privileging utilitarian intuitions, writing "So how might one justify this? a. Maybe general, abstract intuitions are better than concrete intuitions about particular cases. Problem: It’s not obvious that utilitarian intuitions are any more abstract or general than non-utilitarian intuitions. E.g., imagine a case of a very selfish person causing harm to others, and you’ll get the intuition that this is wrong. Talk about the Shallow Pond example, or the Trolley Problem. It’s about equally plausible to say that core utilitarian claims rest on intuitions about cases like those as it is to make a similar claim about deontology. You can also represent deontology as resting on abstract, general intuitions, e.g., that individuals have rights, that we have a duty to keep promises, etc. It’s about equally plausible to say deontology rests on general intuitions like these as to say the same of utilitarianism."
There are numerous justifications for privileging utilitarian intuitions.
1) There seem to be good reasons for thinking that greater reflection leads to more utilitarian conclusions. Being smarter, less influenced by emotions, and more reflective all correlate with being more utilitarian. This is what we would expect if utilitarian intuitions were correct.
2) Utilitarians tend to be on the right side of history. Bentham supported legalizing homosexuality in the late 1700s. Mill was the second member of parliament to support women's suffrage. Sidgwick supported rights to education for women. Utilitarians like Singer are at the vanguard of the battle against factory farms, which Huemer regards as one of the most important moral causes. Bentham's Bulldog is writing awesome articles filled with correct opinions, especially on ethical issues. A long track record of utilitarians' weird and unintuitive ideas turning out to be right shows that we should privilege those intuitions.
3) Utilitarian intuitions recognize that we should shut up and multiply--non-utilitarian intuitions don't.
4) Many non-utilitarian intuitions just seem to be emotional reactions to particular words in the sentence. We spend lots of time talking as a society about how lots of things are very bad--e.g., murder. Thus, when utilitarianism asks if murder is sometimes good, based on implausible stipulations that we have trouble imagining, the part of our brain that says "ick, murder, that's bad" overpowers reflection about the particular case.
Consider a parallel question to many of the objections to utilitarianism: "Suppose that abusing one's spouse was the right thing to do. Should you do it?" In this case, even though it's stipulated that abusing one's spouse is morally good, instinctively, one still has the urge to answer "no," or more accurately "get the hell out of my house." The non-utilitarian intuition invokes concepts that are constantly discussed as being terrible--and for good reason. However, if we stipulate that those terrible things become good for strange reasons, often involving unnamed people with whom we have no emotional connection, it's very hard to come to the right (utilitarian) conclusion.
5) Perhaps most fundamentally, non-utilitarian intuitions can't stop running into contradictions and absurdities. We'll see more of that in the later parts of this series. Non-utilitarians have to hold that third-party observers should often hope for you to act wrongly, or hold that giving perfectly moral, omniscient decision makers extra options can make the world worse.
On top of this, they have to posit an increasingly strange and convoluted moral ontology with various good-making features with no common tether.
Very often, objections are given to utilitarianism, only for the utilitarian conclusion to be discovered to be rationally inescapable. People very often use the repugnant conclusion to reject utilitarianism. However, Huemer himself has argued that we should all accept the repugnant conclusion. They worry that utilitarianism entails that some number of dust specks is worse than torture--even though that conclusion is rationally inescapable. As I argue here (in section 2), dozens of common counterexamples to utilitarianism are better described as cases where utilitarianism gets correct but unintuitive results, and we have independent reason to accept utilitarian judgments about each of these cases.
I, like God, am a Bayesian. So we should consider the evidence for and against utilitarianism based on how likely that evidence would be if utilitarianism were true, versus if it were false. If utilitarianism were correct, we would expect it to often be counterintuitive, given how terrible human intuitions are about ethics. Thus, the fact that utilitarianism is unintuitive in many specific, narrow cases is very weak evidence against utilitarianism. Given how much we'd expect the correct ethical view to diverge from intuition, it might on net favor utilitarianism. However, it would be very surprising for an incorrect moral view to have all of the following features.
1 Be more widely believed by smarter, more reflective people.
2 Do much better in terms of theoretical virtues like simplicity.
3 Be derivable from a smorgasbord of plausible theoretical principles.
4 Have its adherents constantly on the right side of history, hundreds of years ahead of their time.
5 Avoid strange, contradictory results.
6 Generally only have counterintuitive implications in realms of sacred values.
7 Have alleged counterexamples to it almost always be independently debunkable on alternative grounds, such that smart people create theorems demonstrating that believing the counterexamples produces absurdity.
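To make the Bayesian bookkeeping explicit, the argument here has the shape of a likelihood-ratio comparison. This is just Bayes' theorem in odds form; writing U for "utilitarianism is true" and E for the total evidence is my own notation, introduced purely for illustration:

$$\frac{P(U \mid E)}{P(\neg U \mid E)} = \frac{P(E \mid U)}{P(E \mid \neg U)} \times \frac{P(U)}{P(\neg U)}$$

Each of the seven features above is claimed to be far likelier if utilitarianism is true than if it is false, so each one multiplies the odds in utilitarianism's favor; scattered counterintuitive verdicts, being expected on either hypothesis, leave the ratio nearly untouched.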
Even Huemer has said, in his book Knowledge, Reality, and Value, that he grows more sympathetic to utilitarianism as time passes. The truth may appear false at first, but when one considers enough ethical cases, one realizes that utilitarianism gets the right result every damn time, even in cases that were specifically designed to be counterexamples to utilitarianism. So yes, utilitarianism has some counterintuitive results. But they're counterintuitive in the way that good economics is counterintuitive. They're counterintuitive in the way that Parfit's reductionism about personal identity is counterintuitive, or that quantum physics is counterintuitive.
David Friedman wrote “The modern economist reading Ricardo’s Principles feels rather as a member of one of the Mount Everest expeditions would feel if, arriving at the top of the mountain, he encountered a hiker clad in T-shirt and tennis shoes.” Reading the utilitarians gives a similar experience, for utilitarianism solves the troublesome ethical questions of the modern age. Utilitarians are very concerned about animal suffering. It turns out there are good independent reasons to have that concern. Non-utilitarians have a very difficult time accounting for rights. However, all of the rights can be explained extremely easily on utilitarian accounts. The right to life is worth enshrining into law because it's conducive to utility. That's why we have the right not to let other people enter our house, but not the right not to let other people look at our house. That's why it's a violation of our rights to shoot sharp projectiles at us, but not a violation of our rights to shoot soundwaves at us by talking.
Political philosophers struggle with questions of whether or not taxation is theft. Utilitarianism provides a very easy answer: who cares? Call it what you want, but it's not bad if it's generally conducive to utility. Utilitarianism very easily solves problems in population ethics. The reason Huemer becomes more sympathetic to utilitarianism over time is the ease with which it (correctly) solves very difficult questions.
Prior to writing my post on the Effective Altruism forum, I had not realized that utilitarianism could be verified to get correct results in all of the cases that I considered. For nearly all of them, I had no idea in advance what the independent arguments against the counterexample would be. But every single time, it quickly became apparent on independent grounds that utilitarianism was right. Like Ricardo deducing modern price theory without working out the math, utilitarianism got the right answer every single time. For some of Huemer's examples, I have not already worked out independent justifications for "biting the bullet" on the alleged counterexamples. However, I have strong confidence that careful reflection will reveal a justification, for it always has in the past. Enough instances of utilitarianism getting to the top of the mountain immediately, its conclusions always intersected by a path forged by independent logical arguments, make it very clear that the utilitarians are on to something.
In the Bible, Numbers 22-24 tells the story of Balaam and Balak. Balak was the king of Moab, and he wanted Balaam, a sorcerer, to curse the Jews. However, every time Balaam tried to curse the Jews, he ended up accidentally blessing them (God has a tendency to make curses fail and become blessings). That somewhat reminds me of the tendency for objections to utilitarianism to backfire. Derek Parfit (a far greater individual than Balak, and one who, unlike Balak, had the good fortune of existing) tries to curse utilitarianism by devising a conclusion so repugnant that he names it the repugnant conclusion, arguing that it serves as a devastating counter-example to utilitarianism. It turns out that it's basically impossible to escape the repugnant conclusion without replacing one's diet entirely with bullets. Nozick and others curse utilitarianism with the charge of devaluing rights, only for independent arguments to reveal that non-utilitarian accounts of rights are superfluous, ungrounded, and absurd. Enough curses turned to blessings should reveal that utilitarianism is fundamentally right.
Utilitarianism is the modern price theory of ethics. It allows us to quickly reach the top of the mountain in just a T-shirt. People like Huemer are surprisingly diligent and resilient climbers, who manage to quickly reach places relatively near the top of the mountain. However, the mountain contains many treacherous turns--many of which result in ethical disaster, even relatively near the top. I wish you well, fellow travelers!
Contra Huemer on utilitarianism part 2
Organ Harvesting
Michael Huemer has an unfortunate reluctance to kill people and harvest their organs. This reluctance is so strong that he thinks utilitarianism's toleration of killing people and harvesting their organs counts against it. Huemer gives the following case: “a. Organ harvesting Say you’re a surgeon. You have 5 patients who need organ transplants, plus 1 healthy patient who is compatible with the other 5. Should you murder the healthy patient so you can distribute his organs, thus saving 5 lives?”
Of course. It’s notoriously difficult to disentangle the morality of particular acts from the morality of the people who would perform those acts. Those who would actually kill patients and harvest their organs would almost certainly be evil—in the real world we can’t have certainty that doing so will save five lives. Thus, this thought experiment is counterintuitive because it has a few characteristics:
1 It pits our sacred value of not killing people against preventing some amorphous extra deaths. Excess deaths are far too common for people to get bent out of shape about them.
2 It presumes that extraordinary recklessness turns out to have good results.
3 It uses the word “murder,” to which people tend to be averse.
Maybe you’re still not convinced. Perhaps you believe in ghosts, er, rights, and you think that not violating rights is morally more important than preventing other rights violations. However, beliefs in rights are wrong, so let’s right the wrong, as is our right.
1 It seems a world without any rights would still matter morally. For example, imagine a world with sentient plants, who can’t move, where all harm is the byproduct of nature. It seems that the plants being harmed, despite their rights not being violated, is bad. We can consider a parallel case of a robot that does not experience happiness or suffering. Even though this robot acts exactly like us, it would not matter, absent the ability to feel happiness or suffering. These two intuitions combine to form the view that beings matter if and only if they can experience happiness or suffering. This serves as strong evidence for utilitarianism.
One could object that rights are an emergent property of happiness, such that one only gains them when one can experience happiness. However, this requires deeply implausible strong emergence. As Chalmers explains, weak emergence involves emergent properties that are merely reducible to interactions of lower-level properties. For example, chairs are reducible to atoms, given that we need nothing more to explain the properties of a chair than knowing the ways that atoms function. They are purely the result of atoms functioning. Strongly emergent properties, by contrast, are not reducible to lower-level properties. Chalmers argues that there is only one thing in the universe that is strongly emergent: consciousness. Whether or not this is true, it illustrates the broader principle that strong emergence is prima facie unlikely. Rights are clearly not reducible to happiness; no amount of happiness magically turns into a right. This renders the objection deeply implausible.
2 Everything that we think of as a right is reducible to happiness. For example, we think people have the right to life. Yet the right to life increases happiness. We think people have the right not to let other people enter their house, but we don’t think they have the right not to let other people look at their house. The only difference between shooting bullets at people and shooting soundwaves at them (i.e., making noise) is that one causes a lot of harm and the other does not. Additionally, we generally think it would be a violation of rights to create huge amounts of pollution, such that a million people die, but not a violation of rights to light a candle that kills no one. The difference is just in the harm caused. Additionally, if enshrining as rights things that we currently don’t think of as rights began to maximize happiness, we would think that they should be recognized as rights. For example, we don’t think it’s a violation of rights to look at people, but if every time we were looked at we experienced horrific suffering, we would think it was a rights violation to look at people.
3 If we accept that rights are ethically significant, then there’s a number of rights violations that could outweigh any amount of suffering. For example, suppose that there are aliens who will experience horrific torture that gets slightly less unpleasant for every leg of a human that they grab, without the humans’ knowledge or consent, such that if they grab the legs of 100 million humans the aliens will experience no torture. If rights are ethically significant, then the aliens grabbing the legs of the humans, in ways that harm no one, would be morally bad: the sheer number of rights violations would outweigh the torture prevented. However, this doesn’t seem plausible. It seems implausible that aliens should have to endure horrific torture so that we can preserve our magic rights-based forcefields from infringement that produces no harm for us. If rights matter, a world with enough rights violations, where everyone is happy all the time, could be worse than a world where everyone is horrifically miserable all of the time but where there are no rights violations.
4 A reductionist account is not especially counterintuitive and does not rob us of our understanding or appreciation of rights. It can be analogized to the principle of “innocent until proven guilty.” That principle is not literally true: treating a person as innocent until their guilt is demonstrated is a useful legal heuristic, yet a serial killer is guilty even if their guilt has not been demonstrated.
5 An additional objection can be given to rights. We generally think that it matters more not to violate rights oneself than to prevent other rights violations. We intuitively think that we shouldn’t kill one innocent person to prevent two murders. Yet preventing a murder is no more morally relevant than preventing any other death: a doctor should not try any harder to save a person’s life on the basis of them being shot than on the basis of them having a disease not caused by malevolent actors. I shall give a counterexample to this. Suppose we have people standing in a circle, each rigged with two guns that will each shoot a person next to them. Each person can either prevent one of their own guns from shooting one person, or prevent two other people from being shot by other people’s guns. If we take the view that one’s foremost duty is to avoid violating rights oneself, then each person would be obligated to prevent their own gun from shooting the one person. However, if everyone prevents one of their own guns from firing, rather than two of other people’s guns, then everyone in the circle ends up being shot. If, however, it’s more important to save as many lives as possible, then each person would prevent two guns from firing. World two would have no one shot, and world one would have everyone shot. World one seems clearly worse.
Similarly, if it is bad to violate rights, then one should try to prevent their own violations of rights at all costs. If that’s the case, then if a malicious doctor poisons someone's food, and then realizes the error of their ways, the doctor should try to prevent the person from eating the food, and having their rights violated, even if it’s at the expense of other people being poisoned in ways uncaused by the doctor. If the doctor has the ability to prevent one person from eating the food poisoned by them, or prevent five other people from consuming food poisoned by others, they should prevent the one person from eating the food poisoned by them. This is obviously false.
6 We’ve already seen that all of the things that we think of as rights are conducive to happiness generally. However, this is not the extent of the parity. The things we don’t think of as rights would start being treated as rights if it were conducive to utility. Imagine a world where every time you talked to someone it burst their eardrums and caused immense suffering. In that world, talking would and should be considered a rights violation. Thus, being conducive to utility is both a necessary and a sufficient condition for something to be a right.
7 We have decisive scientific reasons to distrust our intuitions about rights, which is an argument for utilitarianism generally. As Greene et al argue, “A substantial body of evidence indicates that utilitarian judgments (favoring the greater good) made in response to difficult moral dilemmas are preferentially supported by controlled, reflective processes, whereas deontological judgments (favoring rights/duties) in such cases are preferentially supported by automatic, intuitive processes.”
People with damaged VMPCs (the ventromedial prefrontal cortex, a brain region responsible for generating emotions) were more utilitarian (Koenigs et al 2007), evidence that emotion is responsible for non-utilitarian judgments. While there is some dispute about this thesis, the largest data set, from Fornasier et al (2021), finds that better and more careful reasoning results in more utilitarian judgments across a wide range of studies. They write “The influential DPM of moral judgment makes a basic prediction about individual differences: those who reason more should tend to make more utilitarian moral judgments. Nearly 20 years after the theory was proposed, this empirical connection remains disputed. Here, we assemble the largest and most comprehensive empirical survey to date of this putative relationship, and we find strong evidence in its favor.”
8 Rights run into a problem based on the aims of a benevolent third-party observer. Presumably, a third-party observer should hope that you do what is right. However, a third-party observer, given the choice between a world with one person killing another to prevent 5 indiscriminate murders (world A) and a world with 5 indiscriminate murders (world B), should obviously prefer world A. An indiscriminate murder is at least as bad as a murder done to try to prevent 5 murders, and 5 indiscriminate murders are worse than one indiscriminate murder; therefore, by transitivity, world A should be judged better than world B. If world A should be preferred by a benevolent, impartial observer to world B, then it is right to bring about world A. All of the moral objections to the preventive killing would count against world A being better; if, despite those objections, world A is still preferred, then it is better to bring about world A. Therefore, one should murder one person to prevent 5 murders. This seems to contradict the notion of rights.
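To spell out the transitivity step in point 8, here is the inequality chain, with b(·) standing for overall badness (notation I am introducing just for illustration):

$$b(\text{one murder to prevent five}) \le b(\text{one indiscriminate murder}) < b(\text{five indiscriminate murders})$$

By transitivity, the world containing the one preventive murder is less bad, so it is the world the benevolent observer should prefer.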
9 We can imagine a case with a very large series of rings of moral people. The innermost circle has one person, the second innermost has five, third innermost has twenty-five, etc. There are a total of 100 circles. Each person is given two options.
1 Kill one person
2 Give the five people in the circle outside of you the same options you were just given.
The people in the hundredth circle will be given only the first option if the buck doesn't stop before reaching them.
The deontologist would have two options. First, they could stipulate that a moral person would choose option 2. However, if this is the case, then a cluster of perfectly moral people would bring about 1.5777218 x 10^69 (that is, 5^99) murders, when alternative actions could have resulted in only one murder. This seems like an extreme implication.
Secondly, they could stipulate that you should kill one person. However, the deontologist holds that you should not kill one person to prevent five people from being murdered. If this is true, then you certainly shouldn’t kill one person to give five perfectly moral people two options, one of which is killing one person. Giving perfectly moral beings more options that they don’t have to choose cannot make the situation morally worse. If you shouldn’t kill one person to prevent five murders you certainly shouldn’t kill one person to prevent five things that are judged to be at most as bad as murders by a perfectly moral being.
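As a quick sanity check on the 5^99 figure above, here is a minimal Python sketch (the circle sizes are exactly those stipulated in the thought experiment):

```python
# 100 circles; circle k contains 5**(k-1) people. If every circle
# passes the buck (option 2), the 5**99 people in the outermost
# circle each kill one person. If the innermost person instead
# takes option 1, exactly one person is killed.
buck_passing_murders = 5 ** 99
print(f"{buck_passing_murders:.7e}")  # 1.5777218e+69
```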
Rights have been officially debunked. They are no more. Yet there are some more specific objections to the organ harvesting objection.
First, there’s a way to explain it away sociologically. Rightly, as a society, we have a strong aversion to killing. However, our aversion to death generally is far weaker; if it were as strong, we would be rendered impotent, because people die constantly of natural causes. Thus, there’s a strong sociological reason for us to regard killing as worse than letting people die. However, this developed as a result of societal norms, rather than as a result of accurate moral-truth-tracking processes. This intuition about the badness of killing only exists in areas where killing to save people is usually not conducive to happiness. Many of us would agree that the government could kill an innocent person in a drone strike to kill a terrorist who would otherwise kill ten people. The reason for the divergence in intuitions is that medical killings are very often a bad thing, while government killings via drone strikes are often perceived to be justified.
Second, we have good reason to say no to the question of whether doctors should kill one to save five. A society in which doctors regularly violated the Hippocratic oath and killed one person to save five would be a far worse world. People would be terrified to go into doctors’ offices for fear of being murdered. Cases of one being murdered to save five would be publicized by the media, resulting in mass terror. While this thought experiment generally stipulates that the doctor will certainly not be caught and the killing will occur only once, the revulsion to very similar and more easily imagined cases explains our revulsion to killing one to save 5. It can also be reasonably argued that things would go worse if doctors had the disposition to kill one to save five. Given that a utilitarian’s goal is to take the acts, and follow the principles, that make things go best in the long term, a principle entailing that one does not take this act can be justified on utilitarian grounds.
Third, we can imagine several modifications of the case that make the conclusion less counterintuitive.
A) Imagine that the six people in the hospital were family members, all of whom you cared about equally. Surely we would intuitively want the doctor to bring about the death of one to save five. The only reason we have the opposite intuition in the case where family is not involved is that our revulsion to killing can override other considerations when we feel no connection to the anonymous, faceless strangers whose deaths are caused by the doctor’s adherence to the principle that they oughtn’t murder people.
It could be objected that even with family members the intuition is the same. Yet this doesn’t seem plausible, particularly if no one had any knowledge of the doctor’s action. If no one knew that the doctor had killed the one to save the other five, surely it would be better for this to happen. An entire family being killed would clearly be worse than one family member dying.
It could be objected that adding in family makes the decision making worse, by adding in personal biases. Yet this is not so. Making it more personal requires us to think about it in a more personal way. It is very easy to neglect the interests of the affected parties, when we don’t care much about them. Making it entirely about close family members matters, because we care about family. If we care about what is good for our family, then making the situation entirely about our family is a good way to figure out what is good, all things considered. Yet this is not the only case that undercuts the intuition.
B) Suppose that a doctor was on their way to a hospital with organ transplants that would save 5 people who would otherwise die. They see on the side of the road a murder that they could prevent, yet preventing it would require a long delay that would cause the death of the five people in the hospital. It seems clear that the doctor should continue to the hospital. Thus, when we weigh allowing 5 to die against one murder, the badness of five people dying outweighs.
C) Imagine that 90% of the world needed organs, and we could harvest one person’s organs to save 9 others, who would each live a perfect life. It seems clear that it would be better to kill the 10% rather than let the other 90% die.
Finally, let’s investigate the principle behind not harvesting organs.
We could adopt the view NK, which says that one ought not kill innocent people. Yet view NK is clearly subject to many counterexamples. If the only way to stop a terrorist from killing a million people was by killing one innocent person, we should surely kill the innocent person. And most people would agree that if you could kill a person and harvest every cell in their body to save a million people, that action would be permissible.
We could adopt view NKU, which says one ought not kill unless there is an overriding concern that involves vast numbers of people. Yet this view also seems to run into a problem.
It seems our intuition differs depending on the context in which a person is killed. A terrorist using a human shield who is about to kill five people could be killed, yet it seems less intuitive to kill a person to harvest their organs. Thus, the badness of killing is context-specific. This adds credence to the utilitarian view, in that our intuitions seem to track whether killing in most similar cases would make things go best.
We could take the view DSK, which says doctors shouldn’t kill. However, this view is once again very easily explainable sociologically: it is very good for society that doctors don’t generally kill people. But as a deep ethical principle it makes less sense.
We can consider a similar case that doesn’t seem to go against the obligations of a doctor. Suppose that a doctor is injecting their patients with a drug that will cure disease in low doses, but in too high a dose will kill the patient. Midway through injecting the drug, they realize that they’ve given a lethal dose to five other patients in another room. The only way they can save the five is by leaving immediately, allowing their current patient to receive too high a dose and die. It seems intuitive that the doctor should save the five, rather than the one.
One might object that the crucial difference is that the doctor is killing, rather than merely failing to save. However, we can consider another case, where the doctor realizes they’ve given a placebo to five people, rather than life-saving medicine. The only way to give the life-saving medicine would be to abandon the room that the doctor is in, much like in the previous example. It seems very much like the doctor should go to the other room, even though it will result in a death caused by their injection. It seems clear that the cause of the lethal condition shouldn’t matter in terms of what they should do. As Shelly Kagan has argued, there is no plausible doing-versus-allowing distinction that survives rigorous scrutiny. Given the repeated failure to generate a plausible counter-theory, we have reason to accept the utilitarian conclusion.
Additionally, imagine a situation where very frequently people were afflicted by flying explosives, which would blow up and kill five surrounding people unless the afflicted person was killed. In a world where that frequently happened, it starts to seem less intuitive to think we shouldn’t kill one to save five.
Humans follow patterns. Seeing “murder is bad” enough times leads them to conclude that murder is generally bad. This corrupts their judgment about particular cases in which the bad-making features of murder are absent.
AlphaZero was a chess-playing AI that learned to play chess by playing against itself many times—never influenced by human strategy. As a result, it didn’t follow the common strategies and heuristics of humans, and it played much better than other AIs. Humans were baffled by the strategies of AlphaZero and described it as playing chess like an alien. Utilitarian ethics is somewhat similar. It’s the optimal ethical system, but it often ignores the rules that humans use for ethics. Our bafflement at its seemingly strange mandates is no more surprising than bafflement at the chess of AlphaZero. Sometimes, heuristics hold us back. Yet as I hope to have shown, despite utilitarianism playing chess like an alien, it’s playing better chess.
Contra Huemer on utilitarianism part 3
A promise to keep
Content warning
Please do not read if you are planning on giving me money on your death bed that you hope I’ll deliver faithfully to your child, for reasons that will become apparent.
1 A not totally terrible action
The old man, slowly dying, spoke to his contemporary. “My friend, you know I have never made a will.” His contemporary nodded. “I want you to tell everyone,” the man coughed, “I want you to tell everyone that when I am dead my money should be taken and delivered to the top of Mount Everest by Jim, who has a reasonable chance of dying in the attempt. He did, however, swear that he would do it.” His contemporary squirmed.
“Erm, why?” he asked, evidently deeply confused at the strange request.
“Well, that’s just reducible to psychological facts about me. I have a strange sort of desire. Also, my sons said they thought it would be really funny, though didn’t expect me to do it.”
His contemporary spoke more sullenly: “My friend, surely you don’t mean that. Come, you must be addled, so near death.”
“No, no,” said the man. “I’m perfectly mentally fit. This extended short story wouldn’t really work if I were impaired, now would it? Here, I’ll demonstrate. The probability of B given A equals the prior probability of B times the probability of A given B, over the probability of A. Oh, also, by the way, I’ve recently proven the Goldbach conjecture.” The man presented his proof of Goldbach’s conjecture in excruciating detail and it was correct.
“Okay, so you’re not addled,” said his contemporary. “Does anyone else know about this particular desire of yours?”
“No,” said the man. “My children at one point just mentioned that they thought it would be funny. My children are already millionaires, so they wouldn’t benefit from the money; the only way they’d benefit from it is if I put it at the top of Mount Everest.”
“Well, even if they replied that it was funny, surely that doesn’t mean that they would actually want that to be the particular thing you’d do with your money, right?”
“No,” said the man. “I’ve actually derived from first principles that they’ll find it really funny. And before you doubt me, remember that I proved Goldbach’s conjecture very recently. In fact, I derived from first principles that they’d find it so funny that they’d chuckle about it for fifteen minutes every day.”
“But you said Jim will probably die.”
“Oh yes, and Jim provides for his family of five people, so they’ll probably die as well. However, Jim made a deal that he would bring it to the top of Mount Everest when I died, in exchange for one hundred thousand dollars. Jim was actually hypnotized to forget about it unless someone else brings it up, so Jim isn’t currently aware of my strange desire.”
The man coughed. “My friend, I will die within 30 seconds. Promise to me you’ll tell Jim to take my money up to Mount Everest.”
“I will,” replied his contemporary.
The man died. His contemporary stepped out of the room. “Any last wishes or indication of what should be done with his money?” one person asked.
“He said it should be given to his sons,” the contemporary replied.
2 Replying to Huemer
You, dear reader, probably have the inclination that not following through on the promise, thereby saving multiple people, is, in the aforementioned case, justified. Note, however, that in this case we’ve stipulated that you’re lying to a person on his death bed, in a way that would harm his son but likely save lives. Huemer thinks that a similar case provides a decisive counter-example to utilitarianism, writing “c. Deathbed promise
On his death-bed, your best friend (who didn’t make a will) got you to promise that you would make sure his fortune went to his son. You can do this by telling government officials that this was his dying wish. Should you lie and say that his dying wish was for his fortune to go to charity, since this will do more good?”
Yes! This is structurally analogous to the case I gave above. Why is it analogous? Well, a person’s fortune can save many lives. The cost to save a life is about 4,500 dollars. Presumably the man’s fortune is more than 4,500 dollars. Suppose that the man’s fortune is a fairly modest 45,000 dollars. In this case, lying about the deathbed promise saves about ten lives (45,000 / 4,500), roughly twice as many as sparing Jim and his five dependents from dying on the slopes of Everest. In both cases you are lying to a dying man to save the lives of many people, but in a way that fails to benefit his sons. This case is analogous to lying to a murderer to prevent your room-mate from being murdered.
Utilitarianism gives us good reason to follow common moral heuristics like honesty. Being the type of person who adheres to promises nearly all of the time is conducive to utility. Thus, utilitarians can account for the value of the promise. If a utilitarian violated every promise whenever doing so appeared to maximize utility in the moment, that general practice would not be conducive to utility.
Part 3 Another dialogue
The old man sat on his death bed and coughed. “My friend,” he said to his contemporary. “My wealthy sons do not need money. In contrast, my ten daughters all have terminal illnesses that can be cured for 30,000 dollars total. However, I don’t like my daughters very much, so please give the money to my sons.”
“Why don’t you like your daughters?” asked the contemporary.
“A few reasons,” replied the man. “First, they’re very far away. I care less about people the farther away they are. Second, I have a lot of daughters, so each one matters less. Third, there are lots of other people who could in theory help my daughters but are not doing so, so why should I? Fourth, they’re adopted, so they don’t have my super cool genes.” “And before you ask,” he added, “here’s a decisive proof of Goldbach’s conjecture.”
“Promise me you’ll do this,” the man demanded.
“I promise,” replied his contemporary.
With adept timing, the man died. His contemporary emerged. “He wanted the money to be given to his daughters,” he said.
Part 4 This is actually a really good analogy if I say so myself and I do say so myself therefore, it’s a really good analogy (by modus ponens)
In the above case, the contemporary broke a promise to save the lives of ten people. However, the action seems clearly justified. In both cases, the facts that the beneficiaries are far away, that there are many people who could be helped, that others could help in theory, and that they are not family members are not morally relevant.
Why are our intuitions so far off? Well for one, even after hearing that we can save a child’s life for less than the cost of a cheap car, it’s very difficult to internalize that. Thus, when we think of donating, it’s hard to emotionally resonate with the full value of the action.
Part 5 Look, I know you’re probably getting tired of the analogies but I think they’re fun and some of you do too and also they illustrate why Huemer’s objection is wrong. So here’s another analogy, just bear with me on this one, much love, Bentham’s Bulldog.
The man approached his contemporary. “First things first, I’m not addled, here’s a proof of Goldbach’s conjecture yadda yadda,” he said.
“Second, here is all of my money. Please put it very deep in your pockets such that it would take about two minutes to get it out, given your enormous mazelike pockets. Deliver it to my sons and their family and don’t stop for anything, not even drowning children. Promise me this.”
“I promise,” said his contemporary, who was hoping for more dialogue like he’d had in the previous scenarios (he was an actor from Los Angeles). The man died, right after he’d given his 30,000 dollars to his contemporary.
His contemporary started to drive to give the money to the man’s sons’ families (not to be confused with the Manson family). On his way (shocker, I know) he saw 37 drowning children in a shallow pond. There was a button fairly deep under the water, a button which he could press to flip up the shallow pond, saving all 37 children. However, he would have to wade into the water, ruining the money and breaking his promise. “Hmm, this Bentham’s Bulldog guy makes a pretty compelling case for violating promises sometimes,” he said, before jumping into the water and saving the thirty-seven children.
Part 6 Some more analysis of why the previous case is pretty analogous and also the above guy was not totally awful
The above scenario makes it clear that he should ruin the money and violate his promise to save many lives. It’s not clear what the morally relevant difference is between this and Huemer’s scenario.
Maybe the disanalogy is that the action is more direct. However, even if his jumping in the water was the only way to press a button that would save people on the other side of the world, it still seems like he should jump in. Maybe the disanalogy is that the scenario was unexpected. Well, people who make that disanalogy should read more carefully, because I specifically stipulated that the promise included not stopping for drowning children. These imaginary straw-men need to stop straw-manning my position.
Maybe the disanalogy is that other people have the ability to save the lives of the people who are dying of malaria. However, that doesn’t seem to matter. Suppose that there are other people who are standing by and doing nothing. Surely, he should still jump in.
Maybe the disanalogy is that there are endless people who could be saved, but it’s impossible to save them all. However, even if there were lots of extra drowning people, it doesn’t seem like that would weaken the obligation to save the 37 children.
Perhaps we think that the relevant difference is that only a very callous person would ask another to promise to ignore drowning children. However, even if in Huemer’s case the person is callous, it doesn’t seem to undermine the intuition. Additionally, the argument above was intended to show that failing to donate the money to GiveWell is a seriously wrong action, akin to ignoring drowning children. Finally, even if we suppose that the people on their death beds are saintly and have diligently devoted their entire lives to helping others, the intuition still applies.
Huemer might reply that in other cases where no one’s life is on the line—for example, if the amount donated is only twenty dollars—the analogy disappears. However, in this case, a utilitarian wouldn’t endorse violating the promise. Being an honest person makes things go best. Honesty should only be violated when the stakes are large. If there were only twenty dollars’ worth of stakes, it wouldn’t be worth lying.
Sidenote: If you’re considering giving me any money that you hope me to faithfully deliver to your child when you’re on your death-bed, ignore everything I wrote, I deeeeeeeeefinitely would never violate the death bed promise :).
Sidenote 2: The reason I responded to Huemer’s objections out of order was because I got mixed up and was so invested in responding to this one that, by the time I realized I was out of order, I decided to finish this post.
Sidenote 3: The views expressed here are my own and do not reflect utilitarians broadly. I’m sure all of them would never in a million years break a promise made on a death bed to donate money.
Contra Huemer on utilitarianism part 4 (alternatively, a post to get me out of jury duty)
Why you should sometimes frame innocent people (though not usually).
(Something Michael Huemer is very against)
Michael Huemer has another objection to utilitarianism, thinking that it justifies framing innocent people to save lives, which he thinks is bad. Huemer writes “b. Framing the innocent
You’re the sheriff in a town where people are upset about a recent crime. If no one is punished, there will be riots. You can’t find the real criminal. Should you frame an innocent person, causing him to be unjustly punished, thus preventing the greater harm that would be caused by the riots?”
Yes!
One first point that can be made is that in most realistic situations, one ought not frame people. Thus, we have strong reason to say no to the question of whether innocent people ought to be framed, even if we can imagine rare situations in which it would maximize happiness to do so.
Second, we can explain away our revulsion sociologically: because framing an innocent person is almost always disastrous, society rightly instills a powerful aversion to it, and that aversion persists even in contrived cases where framing would maximize happiness.
Third, we can make modifications like the ones made in part five, by making the people family members. Surely you would rather have one family member framed than all your family members killed.
Fourth, suppose we could prevent either five murders from a lynch mob, or one innocent person from being framed and killed by a lynch mob. Surely we should prevent the former. One could appeal to the act omission distinction in this case. However, we can modify the case to avoid this.
Imagine a case in which a person (we’ll call him Tim) wrote an anonymous letter that would be delivered, which would frame an innocent person, who would then be killed by the mob. However, after writing and mailing the letter, Tim had a change of heart, and decided to prevent the letter from being delivered. When he uncovered the stash where his letter was stored, it turned out to be in an iron box with a robust security system, such that if two letters were taken out and destroyed, it would sound the alarm, and the letters would be returned to their original location. As Tim is about to take out his letter, he sees another letter, which has the names of five people on it, who Tim knows are being framed and will be killed by the mob. If Tim takes his letter out, he will not have framed an innocent person, and no one will be left worse off as a result of Tim’s actions. However, if Tim takes out the letter containing five names, he will prevent a mob from killing five innocent people. In this case, it seems very intuitive to take out the letter with five names, yet this is very much like the earlier case. When one is taking out the letters, it should not matter who wrote them. Additionally, suppose one wrote the letter in one’s sleep (sleep letter-forging is considerably rarer than sleepwalking, but not impossible). In that case, it seems even more bizarre to take out one’s own letter, merely because it was written by oneself, rather than the letter with five names, which would save more lives.
One could object that the cases are not parallel. However, the cases have been designed to be identical in morally relevant respects. In both cases, one is framing an innocent person to save five people. The only difference is that one framing is dragged out over a longer period of time, and is done over mail. Those, however, are not morally relevant differences.
Fifth, let’s imagine another parallel case. A mob is planning on killing a large number of people, because they think that they all conspired to do the crime. They believe that conspiring to do the crime is especially horrendous (they take a hardline stance against conspiracies), so if they believed that only one person committed the crime they would beat that person severely, but not kill that person. However, that person would otherwise be killed by the mob, along with everyone else. Thus, it would be better for literally everyone if that person is framed. In that case it seems intuitive that you should frame the person to reduce the severity of harm to them, and save everyone else’s life.
Sixth, we can consider a case similar to the case I appealed to when arguing against magic rights. Consider the following case.
There are 100 concentric circles, the first with one person, the second with five, the third with 25, the fourth with 125, etc. Each person has 5 corresponding people in the circle outside of them. (E.g., person 1 in circle one corresponds to all five people in circle 2; Jim in circle 2 corresponds to Mary, Sue, Edith, Rampampam, and Gulac in circle 3; etc.) The first 99 circles are comprised of perfectly moral people. Each person has two options.
They can either
1 Frame an innocent person
2 Give the five people corresponding to you the options you were given.
However, the people in the 100th circle will certainly frame the innocent person. If we hold that it’s not worth it to frame an innocent person to prevent 5 murders, presumably we’d hold that it’s not worth it to frame an innocent person to prevent five framings. However, if it’s not moral to frame an innocent person to prevent five framings, then it seems clearly immoral to frame an innocent person merely to give five perfectly moral people two options, one of which is framing an innocent person. Giving perfectly moral people extra options can’t be bad, because if the extra option is worse than framing an innocent person, they won’t take it. They’ll only take it if it’s better than framing an innocent person. Thus, we have the following conclusion.
If you shouldn’t frame an innocent person to prevent 5 murders, you shouldn’t frame an innocent person to prevent 5 framings. And if you shouldn’t frame an innocent person to prevent 5 framings, then you certainly shouldn’t frame an innocent person to prevent 5 perfectly moral people from being given two options, one of which is framing an innocent person. If this is true, then one ought always to take option 2. Unfortunately, if everyone takes option 2, a ring of perfectly moral people acting rightly would result in 5^99 innocent people getting framed, when, had they acted differently, only one person would have been framed. This is far more unintuitive than the utilitarian conclusion.
Morality doesn’t care about who takes actions—it instead cares about what actions are taken. Our intuitions are wrong. This conclusion may seem repugnant. Fortunately, Huemer already accepts the repugnant conclusion.
Non utilitarians will have to do better than this rights nonsense in order to demonstrate a decisive counter-example.
Contra Huemer on utilitarianism part 5 (Alternatively, a post to get out of being forced to be an electrician during a sports game)
Why sports are maybe better than torture is bad
Note: This argument only applies to non-Yankees games—Huemer is obviously right in the context of Yankees games.
“I hate torture as much as the next guy.
That much is truer
(than other false things) But I shut up and multiply
Unlike Mike Huemer”
—Mark Twain
Michael Huemer has another objection to utilitarianism, based on it subordinating torture avoidance to sports games. He writes “d. Sports match
A sports match is being televised to a very large number of people. You’ve discovered that a person has somehow gotten caught in some machine used for broadcasting, which is torturing him. To release him requires interrupting the broadcast, which will decrease the entertainment of a very large number of people, thus overall decreasing the total pleasure in the universe. Should you leave the person there until the match is over?”
Say it with me now, yes!! Let’s call this person Jones, as Scanlon did.
In this case, we can apply a similar method to the one applied to the repugnant conclusion, or to torture versus dust specks. Suppose that we compare Jones’ situation to two people experiencing painful electric shocks that are only 90% as painful as Jones’ shocks. Surely it would be better to prevent the shocks to the two people. Now compare each of those two shocks to two more shocks, which are 60% as painful as Jones’ original one. Surely the 4 shocks are worse than the two. We can keep doing this process until the situation is reduced to a very large number of barely painful shocks. Surely a large number of people enjoying football can outweigh the badness of a large number of barely painful shocks. A similar point has been made by Norcross (2002).
Additionally, as Norcross points out, we regularly make similar trade-offs. When we decline to lower the speed limit, we accept that some number of people will die in order to increase the speed at which people can reach their destinations.
If every bad thing is just as bad as a few things that are a little bit less bad than it, then we can keep swapping out bad things for more slightly less bad things, until we get to a point of lots of barely bad things. However, lots of barely bad things can obviously be outweighed by lots of barely good things.
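Here is a minimal sketch of that swapping process in Python. The 90% and 60% figures from the text are generalized to a single per-swap ratio r; r = 0.75 and 50 swaps are illustrative values I have chosen (any r strictly between 1/2 and 1 works):

```python
# Spectrum argument: repeatedly trade each shock for two shocks that
# are r times as painful. Since 2*r > 1, every swap makes things worse
# in aggregate, yet each individual shock keeps getting milder.
r, steps = 0.75, 50
count, intensity = 1, 1.0    # start: one shock at Jones-level pain
for _ in range(steps):
    count *= 2               # each shock becomes two...
    intensity *= r           # ...each only r times as painful
print(count)                 # 2**50, about 1.1e15 shocks
print(intensity)             # about 5.7e-07: each shock is barely painful
print(count * intensity)     # (2*r)**50, about 6.4e+08: the naive total grows
```

The punchline matches the argument above: after enough swaps you have an enormous number of barely painful shocks whose aggregate badness exceeds the original torture, and barely painful shocks are exactly the kind of thing that mass entertainment can plausibly outweigh.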
Huemer could argue that, while there is some number of football-minutes viewed that would be worth one torture, utilitarianism sets the number way too low. But once we grant that there is some number of football viewings that can outweigh the badness of a torture, the objection ceases to be persuasive. Maybe Huemer says the number required would actually be 3 million, but utilitarianism says it would be 1.1 million. However, if we know there’s some amount, it’s hard to maintain that our ethical intuitions are at all reliable about the exact number of inconveniences whose badness equals that of one torture.
Aaaaaaaaaaaand now, for an overwrought dialogue, as must be included in every post.
Huemer: “You utilitarians think that some number of football games being seen is good enough to outweigh a torture.”
Much cooler utilitarian Winston Churchill: “As per the above principle, surely you’d agree that it would be worth it to allow one torture to continue to prevent all football from being canceled in one hundred billion worlds.”
Huemer: “I suppose so.”
Churchill: “Would you do it for 300,000 football games?”
Huemer: “Of course not. What kind of philosopher do you think I am?”
Churchill: “We’ve already established what kind of philosopher you are, now we’re just haggling over the price.”
Huemer: “There’s a big difference between 300,000 and 300 quadrillion.”
Churchill: “Would you do it for 300 million?”
Huemer: “Hmm—not sure.”
Churchill: “It seems strange that you claim that our ethical system is incoherent when you don’t even have an order of magnitude estimation of the precise amount.”
Huemer: “Well, whatever the amount is, intuitively it seems like yours is waaaaay too much.”
Churchill: “Well, given that we make tradeoffs like this all the time, it seems to reflect people’s revealed preferences. Additionally, if your intuitions were wrong about whether there was any such amount, why trust that they’d be right about the precise amount?”
Huemer: “I think intuition favors opposition to torture for reasons beyond the mere pain it causes.”
Churchill: “Sure, but presumably you think that based on other cases, cases which will be discussed in future articles. This was just showing that continuing torture for a football game is sometimes worth it. Once you’re tolerating some torture for the sake of continuing football games, your view seems intuitively insane to the average person. It’s like a schizophrenic person who thinks he’s Jesus making fun of another person who thinks he’s Gandhi.”
Contra Huemer on Utilitarianism Part 6
Against cookies for desert.
(Let’s all take a moment to be amused by the cleverness of the pun).
Like most of you, I frequently have the opportunity to distribute cookies between Ted Bundy and Mother Teresa. Huemer worries that utilitarianism gets the wrong result, writing “e. Cookie
You have a tasty cookie that will produce harmless pleasure with no other effects. You can give it to either serial killer Ted Bundy, or the saintly Mother Teresa. Bundy enjoys cookies slightly more than Teresa. Should you therefore give it to Bundy?”
Those who have the intuition that the cookie should be given to Mother Teresa presumably think so on the basis of the idea that rewards should be proportional to how much one deserves them. However, the concept of desert runs into thorny ethical questions.
1 If it is good to punish bad people, then we should trade off a certain amount of pleasure for the punishing of bad people. To give an example of a metric, imagine we say that a year of punishment for a bad person is good enough to offset an amount of suffering equivalent to one punch in the face. If this is true, a world with a googolplex of bad people each being punished for a year, combined with benevolent people suffering as much as in a googol holocausts, would be better than a world where everyone, including the bad people, is relatively happy. Given the absurdity of this, we have reason to reject this view.
The retributivist may reply by positing some declining value of retribution, such that punishing one bad person is worth a punch in the face, but repeated punches in the face outweigh any amount of punishment of bad people. However, this is implausible, given the mere addition paradox. It seems clear that one torture can be offset by several slightly less unpleasant tortures, each of which can be offset by several even less unpleasant tortures. This process can continue until a large number of “tortures,” each no more painful than a punch in the face, is collectively worse than one real torture. If the number of bad people punished is large enough, the good of punishing them could thus outweigh the badness of horrifically torturing galaxies full of people.
They might hold the view that happiness for bad people is neither good nor bad. However, if they hold this view then they’d have to think that the moral relevance of one’s happiness doesn’t scale in proportion to how moral they are. If Ted Bundy’s happiness is morally neutral, then he would be equally deserving of happiness if he’d killed 100 extra people. This is implausible.
They might think that no one deserves to suffer but people deserve well-being inversely proportional to how moral they are, such that as a person tends towards being as immoral as Bundy, the moral relevance of their happiness tends towards zero. This runs into a few problems.
1 It would say that a great enough differential in cookie taste quality is worth giving to Bundy over Teresa, which is about equally counterintuitive.
2 It would say that the more bad things one does, the less marginal effect bad things have on their moral character.
They could bite the bullet; however, this view is so unintuitive that we have decisive reason to reject it.
Here are some more objections.
2 Kraaijeveld (2020) has argued for an evolutionary debunking of retributivism. It’s extremely plausible that we have an evolutionary reason to want to prevent people from doing bad things, so it’s unsurprising that we feel angry at bad people and want to harm them. Since this impulse would have evolved whether or not anyone truly deserved to suffer, it provides no evidence of genuine desert.
3 There’s an open question of how exactly we determine whom to punish. Do we punish people for doing bad things? If so, should we punish politicians who do horrific things as a result of bad ideas? Would an idealistic communist leader who brings their country into peril be worthy of harm? If it’s based on motives, then should we punish egoists, who only do what makes them happy, even if they help other people for selfish reasons? If we only punish those who possess both characteristics, would we not punish Nazis who truly believed they were acting for the greater good? Additionally, should we punish people who think meat is immoral but eat it anyway? If so, we’d punish a large percentage of people.
4 Our immorality is largely a byproduct of chance. Many serial killers would likely not have been serial killers had they been raised in a different family. Additionally, many violent criminals would not have been violent criminals had there not been lead in the water. Is it truly just to punish people for things outside their control that are causally responsible for their crimes? As history suggests, if we’d been in Nazi Germany, we’d likely have been Nazis. In scenarios similar to the Stanford prison experiment, we’d do horrible things. It seems arbitrary that one deserves to suffer because of particular events, such that in the absence of those events they wouldn’t intrinsically deserve to suffer. Desert should not be held captive to real-world contingencies. Mother Teresa’s happiness shouldn’t matter hundreds of times more than the happiness of the average person.
5 This view runs into some other issues. Suppose that a person has thirty seconds every day when they turn evil. Would they deserve to suffer, but only for those thirty seconds? If they killed someone during those thirty seconds, would they deserve to suffer even outside those thirty seconds? What if Ted Bundy had a change of heart? Would he still not deserve cookies? Suppose a person takes ADHD medication, which incidentally makes them more moral. Should they try to eat cookies only when they’re on ADHD medication, because during that time they’re more moral? What if a person took a drug that made them temporarily a psychopath? Should they avoid eating tasty food during that time, so as to redistribute happiness towards the times when they’re more moral? Either horn of the dilemma poses problems.
If they say that one’s desert waxes and wanes over time based on how moral they are, this has several counterintuitive implications.
1 It would say that people’s happiness doesn’t matter during their dreams, to the extent that they do immoral things in dreams. At one point I had a very strange dream in which I killed many people (it occurred in a way that made no logical sense, as dreams often do; I think it had to do with a button that caused people to cease existing). In that dream, I clearly had no moral qualms about anything. (Note, this doesn’t reflect anything about me broadly; just as dreams make some nonsense ideas seem to make sense, this one made moral constraints not even factor into my consideration.) It seems like my happiness in that dream is no less morally relevant than my happiness in any other dream.
2 It would say that people should redistribute the rewards that they’ll receive to occur when they think they’ll be maximally moral. For example, if one is planning on taking an ethics class, and thinks it’s more likely that they’ll be extra moral after the ethics class, this would say that they should save cookies for after the ethics class is done, even if they’d enjoy them less.
3 It would also say that a person who takes a drug that turns them into a psychopath who wants to kill people for a half hour deserves to suffer during that time (even if they’re locked up so that there’s no risk that they kill anyone).
4 It would say that the happiness of babies and small children matters proportionately less, because they’re less moral.
However, if they take the other horn of the dilemma, and argue that one’s desert remains unchanged over time, this runs into problems of its own.
1 It would say that Ted Bundy would become no more deserving of happiness if he reformed completely.
2 It would say that if Ted Bundy and Mother Teresa switched morals, such that Ted Bundy became compassionate and Mother Teresa became interested in killing lots of people, Teresa would be less worthy of cookies.
3 It gives no way of determining how moral someone is overall. If one starts out moral but becomes immoral, this metric doesn’t tell us which state determines what they deserve.
Perhaps you think that what matters is someone’s average moral character. Well, in that case, Bundy would be no more worthy of happiness prior to reforming than after reforming. Additionally, this view runs into a problem with mostly benevolent beings with very long lives. Suppose that a person is a horrendous serial killer for 1,000 years, but for the following million years is perfectly moral. Given that not even Mother Teresa is perfectly moral, on this account, this person would be more worthy of a cookie than Mother Teresa, even while they’re going around murdering people.
Utilitarianism gives a perfectly adequate account of why punishing bad people is justified. Punishing bad people deters bad things. We have developed an emotional attraction to punishing bad people, so punishment feels good, as if we’re giving people what they deserve. Yet the concept of desert is, at the end of the day, not what matters. Dessert, on the other hand, does matter.
Contra Huemer on Utilitarianism Part 7
For torture: a non-rights-based defense
Subtitle is based on this.
“Then the Nazis tortured…”
—A story that never ends well
“What if (really bad thing) maximized utility? Utilitarianism says it would be good in that case.”
—A lot of arguments against utilitarianism.
I, like Michael Huemer, don’t like Nazis. Huemer’s next objection to utilitarianism is the following.
“f. Sadistic pleasure
There is a large number of Nazis who would enjoy seeing an innocent Jewish person tortured – so many that their total pleasure would be greater than the victim’s suffering. Should you torture an innocent Jewish person so you can give pleasure to all these Nazis?”
A few things are worth noting about this case.
1 Making them sadistic Nazis makes it seem like a larger bullet to bite. Defending sadistic Nazis is never a good look.
2 In any plausible real-world case, torturing an innocent Jewish person will not make things go best. Even if the pleasure the Nazis experience when an innocent Jewish person is tortured is prima facie good, the torture is instrumentally bad. It would plausibly result in more torture of Jewish people, and it would take millions of Nazis taking immense pleasure in the victim’s suffering for the pleasure to outweigh that suffering.
That said, thought experiments don’t have to be realistic, so it’s perfectly fair game for Huemer to stipulate an unlikely situation. However, intuitions about farcical situations, where the net utility is precisely the opposite of what it is in real-world cases, are likely to be unreliable. Much as our intuitions about physics become less reliable when we stipulate very different situations, so too do our intuitions about ethics.
If the Nazi pleasure is good, then enough Nazi pleasure outweighs the pain of Jews being tortured. Thus, the person who holds Huemer’s view would seemingly have to accept that the sadistic pleasure of bad people is not morally good.
Maybe you think that sadistic pleasure is necessarily bad. Well, this principle has compelling counterexamples. Consider a man called Tim. Tim derives immense satisfaction from watching scenes that appear to depict torture. Tim would never torture anyone and abhors violence. In fact, he sometimes feels guilty about his strange desires and donates vast amounts of money to charity. Tim also makes sure that the content he watches does not actually involve anyone being tortured; he spends hours searching for content that merely looks like people being tortured. Additionally, we can suppose that this is the highlight of Tim’s life. He enjoys it so much that, without it, his life would be miserable. Despite suffering from clinical depression, Tim finds the experiences so enjoyable that he regards his life as generally good. It seems that, in this case, Tim is truly made better off by the joy he derives from this sadistic content.
However, suppose additionally that, despite Tim’s incredibly careful selection process, Tim is deceived by an evil demon, who manipulates the laws of physics to make people actually be brutally tortured, despite any reasonable observer concluding that no one was truly being tortured. It seems that in this case, while the person who is tortured is made worse off by the torture, Tim is made better off by it. All else equal, it seems that making Tim enjoy viewing the torture less (assuming he’d view the same amount of torture) is bad.
Imagine another case of an alien civilization that views the suffering of humans. This alien civilization starts in a state of vast agony, yet becomes less miserable each time they watch a human suffer. If they view all human suffering, their overall hedonic state rises to zero, having started off significantly worse than being boiled alive. Again, it seems like the aliens’ sadism is not a bad thing.
If we think that enjoying the suffering of others is actively bad, independently of the suffering itself, then it would be morally good to make the aliens unable to see suffering. This is deeply implausible.
One might object that deriving pleasure from sadism is morally neutral, neither good nor bad. However, in both of the scenarios posited, it seems obvious that the world is better because the aliens enjoy suffering enough not to be as miserable as beings being boiled alive. If the only way for the aliens to relieve their unfathomable agony were to torture one person, this seems justified.
We can imagine another case of a person, Wyatt, who takes immense satisfaction in eating meat because he knows that the animal suffered. He feels deeply guilty about this fact, but cannot enjoy eating meat unless he knows that the animal suffered. Wyatt continues to eat meat, but donates to charities that help animals because he feels guilty. In this case, it seems that Wyatt enjoying the meat, assuming it won’t cause him to eat any more meat, is not a bad thing. Given that Wyatt enjoys meat because he knows about the suffering, while others enjoy meat that causes enormous suffering but don’t care whether they know about it, it’s hard to see how Wyatt’s meat enjoyment is any worse than ours. Much as there seems to be no morally relevant difference between a person who tortures others because they like when others suffer and one who likes the taste of people after they’ve been tortured, there’s no difference between one who enjoys the suffering of the animals they eat and one who merely enjoys the taste of the animals.
If Wyatt is morally no different from the rest of us when he eats meat, then either Wyatt’s sadistic meat eating is morally good or the joy that most people get from eating meat is morally neutral. However, the second option is deeply implausible: if meat tasted less good, but people ate the same amount of meat, that would be a worse world. So sadistic pleasure can be good, and if sadistic pleasure can be good, then enough sadistic pleasure can outweigh the badness of torture.
Additionally, there are many cases where people enjoy the suffering of others, which are not found objectionable. If the parents of a murder victim derive satisfaction from knowing that the murderer is rotting in prison, it wouldn’t be desirable to deprive them of that satisfaction.
Additionally, we can imagine a world exactly like our own, except humans would never be happy in their lives unless they, upon their sixth birthday, slap someone in the face and revel in the enjoyment. In this case, all of their enjoyment is coming from the fact that they slapped someone, but it wouldn't be good to condemn everyone to never being happy.
Additionally, we can imagine a scenario where every person will be given an extra year of happy life by torturing one person. In this case, their happy lives exist only because of the torture, but the torture seems clearly justified.
Our anti-sadism intuition comes from the fact that sadism is not conducive to happiness. However, if it were conducive to happiness, it would be justified. In cases like schadenfreude, where we derive joy from the suffering of others in ways that make things go best, we don’t find it counterintuitive.
Maybe you think that what matters is avoiding sadistic pleasure from bad people. However, as the last article argued, attaching moral relevance to desert cannot be justified.
Perhaps you think that it’s actively bad when people derive sadistic pleasure from torture. If this were true, then in Huemer’s case most of the badness wouldn’t come from the torture itself; most of it would come from the Nazis getting pleasure. On this account, enough Nazis getting pleasure from the Holocaust would be worse than the Holocaust itself. This is deeply implausible.
Maybe you think that sadistic Nazi pleasure is morally neutral. This view runs into issues. Consider the above case about the aliens. This alien would be horrendously miserable absent deriving pleasure from suffering. Let’s also stipulate that the alien is a Nazi and the only way to relieve their suffering is to torture one Jewish person. Absent one Jewish person being tortured, the alien’s hedonic state is more negative than the sum total of all suffering experienced by Jews during the Holocaust. There is just one alien. In this case, it seems intuitively like the torture is justified.
Now we can switch up the case. Instead of one Nazi alien who experiences 100^100 times as much suffering as an individual, there are now 10^10000000 Nazis who start at a state of moderate unpleasantness which turns positive if a Jewish person gets tortured. In this case, there are so many Nazis that there’s more total Nazi utility. It doesn’t seem like lots of Nazis each getting a little bit of pleasure is morally different from one Nazi getting an enormous amount of pleasure. Thus, if we accept that the Nazi alien should torture the Jewish person, we should also accept that the cluster of aliens should collectively torture the Jewish person.
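The aggregation behind this move is simple; in symbols (where G and g are hypothetical utility amounts and N is the number of aliens):

$$N \cdot g \;>\; G \quad \text{for sufficiently large } N,$$

so a vast number of small gains can always match or exceed one enormous gain.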
This case may still seem a little bit counterintuitive. However, remember a few things.
1 If you had an equal chance of being each of the affected parties, you would favor the torture.
2 If you lived the lives of all of the people affected, you’d be in favor of torturing the person. If we think of the bad things in the real world, these two characteristics are not present. If one were equally likely to be Ted Bundy’s victim as to be Ted Bundy, they would oppose Ted Bundy killing people. Likewise if you lived the lives of both Bundy and his victims.
Maybe you’re still not convinced, and you think the verdict is just too counterintuitive. I think your strong intuition is caused by a few separate intuitions that all combine in this case.
1 Retributivist intuitions, which say that bad people deserve to suffer.
2 Anti-sadism intuitions, which say that it’s bad when people derive pleasure from the suffering of others.
3 General opposition to Nazis and to torture.
However, I think I’ve argued persuasively against the first two, and the third is perfectly compatible with this view. Nazis and torture are bad because they cause lots of suffering.
A lot of people eat meat. When they eat meat, particularly from factory farms, they are getting pleasure from the suffering of others. Maybe not directly from the suffering, but that’s not morally relevant. If Nazis got pleasure from Jews being tortured because they believed it would deter others from doing bad things, rather than from pure sadism, that would presumably not change the moral status of the torture. So getting pleasure from a byproduct of torture seems morally equivalent to getting pleasure from torture. Thus, this view would likely commit you to thinking that torturing and killing one animal to give the entire world meat would be unjustified. This is deeply counterintuitive.
Maybe you think that there’s some fundamental difference between humans and animals, such that it’s fine to revel in a byproduct of the torture of animals. Then the question arises—in virtue of what does that difference arise (lots of arising in this sentence)?
I offer the following lengthy dialogue (signature of Bentham’s Bulldog posts).
Jim: “There’s a morally relevant difference between humans and animals that justifies the difference.”
Michael Huemer appears from nowhere with a bang (Huemer, like me, is strongly ethically opposed to current meat eating practices).
Huemer: “What’s the cause of the morally relevant difference?”
Jim: “Well, they’re a different species.”
Huemer: “Would it be ethically permissible to revel in the suffering of aliens that look exactly like us, but are a different species from us?”
Jim: “I guess not. Well, they’re not as intelligent as us.”
Huemer: “Would it be ethically permissible to revel in the suffering of severely mentally disabled people?”
Jim: “It would depend on how mentally disabled.”
Huemer: “Animals like cows and pigs and chickens are certainly smarter than babies.”
Jim: “Well, babies become smart.”
Huemer: “What about terminally ill babies?”
Jim: “Hmm… well, but it’s natural to eat meat.”
Huemer: “If it were natural to eat terminally ill babies, it would still presumably be unjustified.”
Jim: …
Presumably most of you think that if every year only a single animal had to be factory farmed and slaughtered to feed everyone meat, killing that animal would be justified. Thus, deriving pleasure from a byproduct of torture is sometimes worth a torture.
Maybe at this point you think that I’m antisemitic. Well, I’m Jewish. I merely include everyone, even horrendous people and Nazis, in my moral circle.
Contra Huemer on Utilitarianism Part 8
The extreme ethical importance of donating
“We live in a very morally weird world.”
- William MacAskill
Michael Huemer thinks that utilitarianism, much like the stereotypical Jewish or Asian parent, cares far too much about you being a doctor or a lawyer. Huemer argues the following “Consider two people, A and B. A is a professor who gives away 50% of his modest income to charity each year, thereby saving several lives each year. However, A is highly intelligent and could have chosen to be a rich lawyer (assume he would not have to do anything very bad to do this), in which case he could have donated an additional $100,000 to highly effective charities each year. According to GiveWell, this would save about another 50 lives a year.
B, on the other hand, is an incompetent, poor janitor who could not have earned any more money than he is earning. Due to his incompetence, he could not have given any more money to charity than he is giving. Also, B is a serial murderer who kills around 20 people every year for fun.
Which person is morally worse? According to utilitarianism, A is behaving vastly worse than B, because failing to save lives is just as wrong as actively killing, and B is only killing 20 people each year, while A is failing to save 50 people.”
In terms of which person is morally worse, I’d comfortably say that person B is. Judging people is different from judging actions. Utilitarianism holds that what fundamentally matters is consequences, and that actions are only valuable insofar as they promote what fundamentally matters. Utilitarians, and society broadly, encouraging people not to be serial killers is morally more important than encouraging people to donate. Society would be worse if people could go on killing sprees but get off scot-free by donating money. Utilitarianism also holds that we shouldn’t judge people merely based on the consequences of their actions: Hitler’s grandmother shouldn’t be deemed a terrible person, because it’s impossible to deter actions with bad consequences if those consequences are unpredictable. However, in terms of which action is morally worse, the professor’s is worse.
One caveat is that being the professor may be conducive to overall utility if professors publish important information. If the professor encourages students and other philosophers to donate to GiveWell, or produces important research, that could matter more than being a lawyer who donates to GiveWell.
This judgement seems very counterintuitive (that was the point of Huemer’s thought experiment). Let’s consider a parallel case. Suppose that a person is choosing whether or not to both assault 50 people who would otherwise have died of malaria and donate to GiveWell to save those 50 people. Maybe you think that the person should not take the conjunction of actions. This may seem intuitive at first, until you realize that all the people who would be assaulted are made better off on net by having their lives saved. An action that’s better for literally everyone, such that all the “victims” want it to happen, surely isn’t bad. Well, this shows that the badness of assaulting 50 people is less than the goodness of saving 50 lives by donating to GiveWell. Consider another case: for all of the people who would otherwise die of malaria, you can give them a drug that will kill 90% of them but save the other 10%. In this case you should obviously give them the drug. All of them are made better off in expectation, and none of them are made worse off in actuality.
From this we learn that failing to donate to GiveWell to save 50 people is morally worse than causing a 90% chance of death for 50 people. Giving them the drug has a 90% chance of killing them but a 10% chance of saving them; it is functionally identical to donating to GiveWell to save 50 people and then giving all of them a drug that kills 90% of them. So if giving them the medicine is good in the first case, failing to donate to GiveWell is seriously morally wrong. If it’s as morally wrong as killing 90% of 50 people, it’s certainly morally worse than killing 20 people, as the janitor does.
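To make the bookkeeping explicit, here’s a minimal sketch of the comparison; the numbers come straight from the case above, and the framing in code is mine:

```python
# Bookkeeping for the malaria/drug case above.
PEOPLE = 50

# Option 1: don't donate; all 50 die of malaria.
survivors_no_donation = 0

# Option 2: donate (all 50 saved), then give the drug that kills
# 90% of them and saves the other 10%.
survivors_donate_then_drug = int(PEOPLE * 0.10)  # 5

# The combined act leaves 5 people alive where doing nothing leaves 0,
# so it beats not donating. Failing to donate is therefore worse than
# a package that includes killing 45 people you just saved -- and a
# fortiori worse than the janitor's 20 murders.
print(survivors_no_donation, survivors_donate_then_drug)  # 0 5
```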
Maybe you think the relevant symmetry breaker is consent. Well, in the above case of saving people’s lives and then assaulting them, we can add a stipulation that the people are unconscious, so they don’t have the opportunity to consent. It still seems good to take the combined action of giving them the medicine and then assaulting them.
Note, it would obviously be better to give them the medicine without assaulting them. We’re stipulating that, in this case, one can only do both or neither.
Another argument for the utilitarian conclusion can be made. Suppose that Tim has two job options: he could work as a lawyer or as a professor. However, he knows in advance that if he becomes a professor, Hannibal will fill in for him as a lawyer. Tim knows that Hannibal will become a serial killer and kill 50 people per year if he becomes a lawyer. Surely it would be very morally bad for Tim to become a professor and let 50 people be horrendously killed by Hannibal. Taking an action that you know will cause a serial killer to murder 50 people makes the world worse. You shouldn’t do things that make the world worse.
Maybe you think that there’s a disanalogy between taking a job to prevent a serial killer from killing people and taking a job to prevent people from dying of malaria. There doesn’t seem to be any consistent way of parsing out the difference. In both cases you’re taking a job to save 50 lives. Surely deaths caused by serial killers don’t automatically matter more than deaths from disease. If I died of cancer, I would be no worse off for my cancer having been caused by a serial killer administering lethal doses of radiation.
Maybe you think this still sounds counterintuitive. A few things are worth noting.
1 As we’ve already seen, rights run into lots of paradoxes and impossibilities.
2 Presumably most people would accept that a world where 50 people are saved per year but 20 extra are killed is a better total world. Fewer people die. Well, if we accept that, then agreeing with Huemer’s intuition requires thinking that ethics sometimes demands that we make the world worse. That’s deeply counterintuitive.
3 The world is very counterintuitive. Our moral faculties evolved for facilitating cooperation while we were foraging the plains. Having the ability to save hundreds of people on the other side of the world doesn’t factor neatly into our moral calculus. It’s no surprise that our intuitions about it are systematically off track.
People care much more about things nearer to them. Even if they know on an intellectual level that donating lots of money to GiveWell saves lots of lives, it’s hard to really grasp, on an emotional level, that your donations can save enough lives each year to more than offset Jeffrey Dahmer’s murders.
I once read through stories of children who died of disease at a young age, to try to understand on an emotional level just how much good can be done by effective charities. When you realize that the people being saved by GiveWell are not just statistics but real people with names, parents that love them, and grieving family members, it becomes clearer that donating to GiveWell is very morally important. Morally important enough to overturn Huemer’s judgement.
While we’re here, it’s worth plugging effective charities.
Click here to donate generally to the top charities
https://secure.givewell.org/
Here’s a link to donate specifically to the most effective health charity, the Malaria Consortium.
https://www.malariaconsortium.org/sup...
Here’s a link to donate to the most effective animal charities
https://animalcharityevaluators.org/blog/announcing-our-2021-charity-recommendations/
If you donate a few thousand dollars you will save a person’s life. Doing so would be great!
Contra Huemer on Utilitarianism Part 9
John joins Ted Bundy in the should-have-cookies club
Huemer is, once again, concerned about utilitarianism’s distribution of cookies. He writes “h. Excess altruism
John has a tasty cookie, which he can either eat or give to Sue. John knows that he likes cookies slightly more than Sue, so he would get slightly more pleasure out of it. Nevertheless, he altruistically gives the cookie to Sue. According to utilitarianism, this is immoral.”
Not necessarily. It’s conceivable, and true in the real world, that people are often made happy by other people helping them. Offering to hold the door open for someone is good not just because they’re less able to hold the door open themselves, but also because it signals a nice gesture. Giving Sue a cookie is similar. So for utilitarianism to yield the wrong verdict, Huemer would have to stipulate either that Sue gets no happiness from being given the cookie independent of its taste, or that John likes the cookie slightly more than Sue likes the combined experience of the taste of the cookie and of being given a gift by John.
Additionally, immorality usually refers to failing to care about others, so semantically it’s slightly strange to call the act immoral, but it’s certainly bad: John should have done otherwise. Presumably we’d all agree that it would be morally neutral to give Sue the cookie if John and Sue liked cookies equally. However, if Huemer thinks giving the cookie to Sue is morally neutral regardless of who enjoys it more, then the moral status of giving it to her doesn’t change at all with how much she likes cookies. This is implausible. Surely one’s reason to give someone a cookie is sensitive to how much they like cookies.
A plausible theory of good and bad action should not imply that everyone’s taking good actions makes everyone worse off. But suppose that Sue also has a tasty cookie, one that she likes slightly more than John would. If John gives Sue his cookie and Sue gives John her cookie, everyone is worse off. This is bad.
To raise the stakes, suppose that instead of cookies they’re offering each other agony-reduction devices. John can use each device either to reduce his own agony by 2 units or to reduce Sue’s by 1 unit. Sue can likewise use each device either to reduce her own agony by 2 units or John’s by 1 unit. They both start in a state of 100 billion units of pain, each with 50 billion agony-reduction devices. Additionally, suppose that being brutally tortured involves 10,000 units of agony. In this case, it seems clear that John and Sue should both use the devices on themselves. If they each use them on the other, they’d both end up in a state of 50 billion units of agony, which is 5 million times worse than horrific torture. So that seems, erm, not great.
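Here’s the arithmetic spelled out, using exactly the numbers stipulated above:

```python
# Arithmetic for the agony-reduction-device case above.
START_PAIN = 100_000_000_000  # 100 billion units of pain each
DEVICES = 50_000_000_000      # 50 billion devices each
SELF_REDUCTION = 2            # units removed per device used on oneself
OTHER_REDUCTION = 1           # units removed per device used on the other
TORTURE = 10_000              # units of agony in one brutal torture

# Both use every device on themselves.
pain_if_selfish = START_PAIN - DEVICES * SELF_REDUCTION      # 0

# Both "altruistically" use every device on the other person.
pain_if_altruistic = START_PAIN - DEVICES * OTHER_REDUCTION  # 50 billion

print(pain_if_selfish)               # 0
print(pain_if_altruistic / TORTURE)  # 5,000,000 tortures' worth of agony
```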
Huemer’s intuition here doesn’t seem very deep. The notion that it would be immoral to be excessively altruistic sounds semantically weird, but if we replace “immoral” with “bad,” the utilitarian verdict seems perfectly reasonable.
We can see this with another argument. If we accept the following:
1 Correct moral theories should inform agents of what they should do in all cases
2 In Huemer’s case, John can only give the cookie to himself or to Sue
3 Thus, correct moral theories should inform John of whether to give the cookie to himself or Sue
4 The correct moral theory would not inform John that he should give the cookie to Sue
Therefore, the correct moral theory would inform John that he should give the cookie to himself.
Well, that wraps up Huemer’s objections pretty well. The next post will conclude. Thanks everyone for reading this far.
Contra Huemer on Utilitarianism Part 10: Conclusions
Non consequentialist cookie distributions delenda est
Looking through Huemer’s objections to utilitarianism, we find that all of the alleged counterexamples are intuitively shallow. They only appear reasonable prior to careful reflection; upon careful reflection, they are revealed to be mistaken.
Huemer writes “c. Maybe there are specific problems with each of the above intuitions.
This is the only approach that I would accept as a reasonable defense of utilitarianism. I.e., you look at each of the cases from section 1, and in each case you show a way in which that intuition leads to some sort of incoherence or paradox (see, e.g., https://philpapers.org/archive/HUEAPF.pdf), or you find specific evidence that the intuition is caused by some factor that we independently take to be unreliable at producing true beliefs (where this factor doesn’t cause standard utilitarian intuitions), or you can argue that these intuitions are produced by some feature of the cases that everyone agrees is morally irrelevant.
So that leaves some room open for a rational utilitarianism, but this would require a lot more work, so we don’t have time to investigate that approach here. But until someone successfully carries out that rather large project, we should default to deontology.”
I have undertaken that project, and I think the defenses of most of the cases were pretty decisive. Thanks for reading everyone.
Utilitarianism wins outright part 1
Non utilitarianism delenda est
“All life is only a set of pictures in the brain, among which there is no difference betwixt those born of real things and those born of inward dreamings, and no cause to value the one above the other.”
- H.P. Lovecraft
Many philosophical questions are difficult: one-boxing vs two-boxing in Newcomb’s problem, 1/3 vs 1/2 in the Sleeping Beauty problem, how we can be justified in believing in induction, and many others. However, many philosophical views that were once mainstream have been consigned to the dustbin of history, with logical positivism being the clearest example. Normative ethics is widely considered to be in the first category; people often say that there are no knockdown arguments in normative ethics. However, I think that normative ethics should be in the second category. It should be settled. Utilitarianism wins outright. Deontology and virtue ethics should join logical positivism in the dustbin of history.
Maybe you think such a bold proclamation is arrogant. How can a random blogger claim that an issue that’s bedeviled so many philosophers for so long should be clearly settled? Well, the fact that people smarter than I am are non-utilitarians does give you some reason to think utilitarianism is incorrect. The problem of peer disagreement should undermine our confidence in many of our views.
However, I hope to show in this series of articles that peer disagreement is the only good reason to doubt utilitarianism. The balance of considerations overwhelmingly favors utilitarianism. The objections to utilitarianism backfire and end up supporting it. Indeed, the considerations favoring and disfavoring utilitarianism are not best described as a balance, for they are quite lopsided.
2
“We may assume the superiority ceteris paribus of the demonstration which derives from fewer postulates or hypotheses.[1]”
-Aristotle
Theories have theoretical virtues: features that count for or against them. If given a choice between positing 30,000 laws to explain gravity and positing just a single one, we have decisive reason to posit just one law. Ethical theories have similar theoretical virtues, and utilitarianism does incredibly well in terms of them.
One virtue is simplicity. Simpler theories are more likely to be true because they posit fewer components: the odds of A being true are higher than the odds of A, B, C, D, E, F, and G all being true. Utilitarianism does quite well in terms of simplicity, because it posits that only one thing matters and thus should be maximized. I’ll argue that the one thing that should be maximized is happiness. Deontological theories have to posit an idiosyncratic jumble of rights, and virtue-ethics-based accounts have to posit a confusing jumble of virtues. Additionally, all plausible deontological or virtue-ethics-based accounts will agree that happiness is prima facie good and suffering is prima facie bad. Maybe other things matter, but certainly vast amounts of suffering are bad, all else equal. Likewise, the world is better when people are happier. That’s why depression, stubbing one’s toe, and reading continental philosophy are all bad, but reading this excellent blog, having sex, and eating tasty food are all good.
So utilitarianism posits that mental states can be good or bad. The bad ones should be avoided and the good ones should be pursued. And that’s it. No extra rights, no jumble of virtues, no confusing account of who deserves what, no weird act-omission distinctions are needed. Other theories need to posit vast numbers of confusing things, which sacrifices lots of simplicity.
Physics is simple: our best models of physics seem like they’ll be reducible to just a few equations. Mathematics is simple: it proceeds from just a few starting axioms. Economics likewise proceeds from simple axioms. What are the odds that ethics would be different, with dozens of unconnected, idiosyncratic things that allegedly matter?
How could we resist this? Well, maybe you think that simplicity isn’t really a virtue. Huemer has argued for this.
In my view, the basis for simplicity being a virtue rests on basic probability theory: the probability of A and B both being true is lower than the probability of A alone. Huemer gives an alleged counterexample: “Case 2: You go away on a sabbatical in Europe for a year. When you leave, for some reason, you leave your only two electrical devices on at home: a desktop computer and a lamp. You return a year later and, as you enter your house, see that both of them are now off. You consider two hypotheses:
S: There is a power failure occurring now.
C: The light bulb burned out, and the computer crashed.
Problem: Comparatively evaluate these, without collecting further evidence.
Answer: Again, S is simpler than C, for exactly the same reason as stated above: it cites a single cause for both data, whereas C cites two separate causes. Nevertheless, this time, C is more likely than S. (If you’re not sure of this, make the time period longer, e.g., ten years.)”
Well, simplicity is a virtue in this case. It’s just outweighed by other considerations.
The probability of a desktop computer crashing at some point during a year away is pretty high, maybe 0.8. The probability of the lamp burning out is also pretty high, maybe 0.4. So the odds of both happening are 0.8 × 0.4 = 0.32. The odds of a power failure occurring at the exact moment you return are much less than 32%, so that hypothesis is waaaaaaaaaaay less plausible. However, consider another case. Suppose we have two computers connected by a cable, such that if the connection is hacked they’ll both shut down; if they’re both individually hacked, they’ll also both shut down. If we see them both shut off, then assuming the odds of a hack on the connection are equal to the odds of a hack on either of them individually, it’s rational to conclude the connection was probably hacked, since one hack of probability p is far more likely than two independent hacks of probability p each (p > p²).
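To make the two comparisons concrete, here’s a minimal sketch; the 0.8 and 0.4 are the illustrative guesses above, while the power-failure and hack probabilities are assumptions of mine:

```python
# The lamp/computer case: the "complex" two-cause hypothesis wins
# because each of its components is individually very likely.
p_crash = 0.8                        # computer crashes sometime that year
p_burnout = 0.4                      # bulb burns out sometime that year
p_two_causes = p_crash * p_burnout   # 0.32
p_power_failure_now = 0.01           # assumed: an outage at this exact moment

# The hacked-computers case: one hack of the shared connection vs two
# independent hacks, each with the same probability p. The simpler
# single-cause hypothesis wins by a factor of 1/p.
p = 0.001
p_connection_hacked = p      # single cause
p_both_hacked = p * p        # two independent causes: 0.000001

print(p_two_causes, p_power_failure_now)   # 0.32 beats 0.01
print(p_connection_hacked, p_both_hacked)  # 0.001 beats 0.000001
```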
The explanation above was a bit of an oversimplification. The question is not the odds of A vs the odds of A and B; it’s the odds of A and not-B vs the odds of A and B. Thus, the relevant question is whether B is prima facie unlikely. However, things that confer value do seem to be rare. It’s hard to provide an explanation of why anything matters in an objective sense, so for any particular thing, it seems unlikely that it matters. Thus, happiness mattering but rights not mattering is prima facie more likely than happiness and rights both mattering. If you disagree, consider whether you think it’s more likely that happiness matters and lettuce matters, or that happiness matters but lettuce doesn’t. Obviously the latter is more likely. In physics, it would be foolish to posit extra laws that we have no need for.
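Put formally (a small sketch of the point, with A = “happiness matters” and B = “rights matter”): the simpler theory beats the conjunctive one exactly when, conditional on A, B is less than 50% likely.

$$P(A \land \lnot B) > P(A \land B) \iff P(\lnot B \mid A) > P(B \mid A) \iff P(B \mid A) < \tfrac{1}{2}$$

And since, for any particular candidate value-maker, its mattering seems antecedently unlikely, that condition plausibly holds.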
3
“Phlogiston was the eighteenth century’s answer to the Elemental Fire of the Greek alchemists. Ignite wood, and let it burn. What is the orangey-bright “fire” stuff? Why does the wood transform into ash? To both questions, the eighteenth-century chemists answered, “phlogiston.”
. . . and that was it, you see, that was their answer: “Phlogiston.”
Phlogiston escaped from burning substances as visible fire. As the phlogiston escaped, the burning substances lost phlogiston and so became ash, the “true material.” Flames in enclosed containers went out because the air became saturated with phlogiston, and so could not hold any more. Charcoal left little residue upon burning because it was nearly pure phlogiston.
Of course, one didn’t use phlogiston theory to predict the outcome of a chemical transformation. You looked at the result first, then you used phlogiston theory to explain it. It’s not that phlogiston theorists predicted a flame would extinguish in a closed container; rather they lit a flame in a container, watched it go out, and then said, “The air must have become saturated with phlogiston.” You couldn’t even use phlogiston theory to say what you ought not to see; it could explain everything.”
“With Derrida, you can hardly misread him, because he’s so obscure. Every time you say, “He says so and so,” he always says, “You misunderstood me.” But if you try to figure out the correct interpretation, then that’s not so easy. I once said this to Michel Foucault, who was more hostile to Derrida even than I am, and Foucault said that Derrida practiced the method of obscurantisme terroriste (terrorism of obscurantism). We were speaking in French. And I said, “What the hell do you mean by that?” And he said, “He writes so obscurely you can’t tell what he’s saying. That’s the obscurantism part. And then when you criticize him, he can always say, ‘You didn’t understand me; you’re an idiot.’ That’s the terrorism part.” And I like that. So I wrote an article about Derrida. I asked Michel if it was OK if I quoted that passage, and he said yes.”
Ambiguity and unclarity let nonsense fly under the radar. Unless a theory has precise results, it’s hard to evaluate it adequately. Unclear theories can be sufficiently amorphous to be consistent with any prediction: after the result comes in, they can claim that their ill-defined moral architecture lays the foundation for justifying the ethical result that was just discovered. Consider the following two principles.
A) “Rights are justified based on being well-being maximizing heuristics.”
—Me
B) “Throughout the domain of man’s often fraught conception, one principle reigns supreme within the domain of rights. This principle, oft ill conceived renders unambiguous the necessary maxims underlying any account of rights. That principle is, roughly, that one’s conception of themselves as a rational agent demands that they respect intrinsic extensions of rationality, inasmuch as they are present in others. For when a man kills or maims another, this barbaric act is clearly not a fulfillment of the rational will properly conceived, for the rational will properly conceived shies away from acts of barbaric violence, who necessarily inhibit its flourishing.”
—Me making fun of the way Kant writes
Both of these could be used to explain why all of the things we think of as rights are rights. Consider the following explanations of rights.
A)
“We have the right to life because life is required for having good experience so respecting the right to life results in the best outcomes. That’s also why we have the right to not have other people take our stuff—that doesn’t produce good outcomes, and it’s also why we have the right to not let other people into our house but not the right to let other people look at our house because trespassing can cause violence, undermine people’s feelings of ownership, and overall reduce happiness. However, a society that banned looking at houses would not have the most happiness.”
B)
“As has been demonstrated as the supreme maxim of morality, a necessary and sufficient condition for anything mattering is that which is conducive to the rational exercise of the will. Obviously the right to life is a necessary constraint on the rationality of the will—dead men can’t rationally exercise will. Yet clearly, the rational exercise of the will requires that we have rights to do with what we have as we wish, rights which include, as is conclusively an extension of the rational will, the right to avoid letting other persons enter your house but not the right not to let other persons look at yourself. For it is clear that the gaze is a non rational instrument that follows based on the necessary workings of eyes, and is not limited to rational beings, while entering a house must be an extension of rationality as it exerts its workings over one’s limbs. The rational will exerts a penumbra over and above one’s property, one which can’t be violated by the exertion of another rational will on pain of contradiction—of two rational wills both with full primacy over those which they control.”
Both of these explanations do explain why one of them is a right and the other one is not. But only one of them is clear. Ambiguity allows a theory to appear to explain the data while being sufficiently malleable to make no concrete predictions. So being clear and unambiguous is clearly a virtue. Compare utilitarianism to a theory like deontology. Deontology posits a vast, unclear, unconnected stream of rights. Most plausible accounts of rights don’t hold that those rights may never be violated; most would agree you should steal a car to save the world. So the rights have no unifying factor and an undefined moral significance that seems to vary depending on the situation. This renders the theory much less clear. Thus, utilitarianism does much better in terms of clarity. A theory with unclear verdicts has no clear predictions, so conclusions about particular cases can’t affect its plausibility.
4
“Me being omnipotent explains ALL ten plagues, you being omnipotent can only explain one of the plagues.”
—The god of the Old Testament, talking to an allegedly omnipotent frog.
Theories are better if they can explain more things. If phenomena A through F are more likely on a theory than on its negation, then the existence of A through F supports the theory. A friend has recently objected to explanatory power being a virtue, on the grounds that what matters is not explanatory power per se: a theory that posits an explanation for A through F is not intrinsically more likely than a theory that doesn’t. It’s only more likely if there’s a non-ad-hoc reason to suppose that A through F are more likely to occur on the theory than on its negation.
To illustrate this with an example, two murders are not prima facie more likely to have been committed by one serial killer than by two separate killers merely because the serial killer hypothesis explains both. It depends on the odds of each murder on the hypothesis of a serial killer versus on the hypothesis of two separate killers.
However, this worry doesn’t plague utilitarianism’s explanation of ethics. Utilitarianism is able to explain all of ethics from a very simple starting point in non-ad-hoc ways. The data of torture, failing to donate, and eating meat being wrong are all more strongly predicted by utilitarianism being true than by it being false, because they follow straightforwardly from utilitarianism.
The other virtues don’t take as long to explain, so I’ll include them in this section. A good theory should make novel predictions about which things are good or bad. As I’ve argued previously, the correct ethical theory would diverge from our moral intuitions in many cases. Our moral intuitions throughout history have very often been wrong. The true ethical theory should be able to make predictions about what will turn out to be justified on reflection, predictions that differ from our starting intuitions but turn out to be correct. Previous articles have argued, and future articles will argue, that this is true.
Good theories should also be internally consistent. Utilitarianism is—something can’t both maximize and not maximize happiness.
Additionally, theories should not be ad hoc. If a theory has no plausible theoretical basis, that gives us decisive reason to reject it. Utilitarianism is not ad hoc. As we’ll see, it can be deduced from a variety of different plausible moral principles.
These considerations strongly favor utilitarianism: it blows other theories out of the water in terms of theoretical virtues. A theory that avoids dozens of ad hoc stipulations, unclarity, and internal inconsistency, and that also makes good predictions, should be strongly favored.
We could make up lots of extra stipulations to save Newtonian mechanics. However, we shouldn’t do that. Even though quantum mechanics is super weird and unintuitive, we shouldn’t reject it to save a version of Newtonian mechanics with dozens of extra stipulations.
Ethics, unlike physics, is true across all possible worlds. We also can’t run very good experiments on ethics. So it’s not surprising that our moral intuitions, the best but deeply flawed way we come to ethical conclusions, would sometimes yield results that contradict utilitarianism. However, careful reflection will bear out that utilitarianism gets the right answer every time. As we’ll see in due time, dear reader.
Utilitarianism wins outright part 2
History favors utilitarianism
1
The inquisitor brought forth the utilitarian.
“You are charged with expressing hideous moral views,” he said in a low voice.
“Yes,” was the curt reply.
“Why shouldn’t we execute you?” demanded the inquisitor.
“Because I was right. Look a few centuries from now and you’ll see, everything I think is mainstream. Your morality is lost in the dark ages. I have a time machine. Let’s go forth 300 years and see if my views about racial egalitarianism are mainstream.”
The inquisitor nodded.
“Okay, but if you’re not then you will be executed. Understood?”
The utilitarian nodded.
“Wait, though, before we do: would you be willing to place some bets on whether or not I’ll be right?”
The inquisitor looked baffled.
“Bet with a dead man?” he asked incredulously.
“I think you’re selling me a little short,” said the utilitarian. “If you’re probably right, you’ll get lots of money. I’ll bet you fifty bushels of grain.”
The inquisitor nodded and accepted the deal. The time machine whirred. They popped into the future.
“Excuse me,” said the inquisitor to a random street dweller. “Do you think black people matter just as much as white people?”
“Of course,” replied the man. “I’m not a bigot.”
The inquisitor handed over the 50 bushels of grain and returned to the past.
“There, I was right,” said the utilitarian. “Want to make the same bet about my view that women should be allowed to vote?”
“Sure,” said the inquisitor, but more sullenly this time.
They popped into the future, back to the past, and he handed in 50 more bushels of grain.
“How about for gay rights?” asked the utilitarian.
“Sure,” he replied. 50 more bushels of grain were given to the utilitarian after they entered and exited the time machine.
“How about the moral importance of animals?” he asked.
50 more bushels of grain went over to the utilitarian.
“Okay, how about for the repugnant conclusion?” asked the utilitarian.
The inquisitor wisely didn’t bet.
2
What do all of the examples of moral progress have in common?
Well for one, they’ve had to do with expanding the moral circle. The abolition of slavery involved expanding the circle of moral concern to include black people. Women’s suffrage involved expanding it to include women. The gay rights movement involved expanding it to include gay people.
When we see acts of gratuitous evil, like torturing other people for fun, they’re marked by the exclusion of the victims from the perpetrator’s circle of moral concern. The actions of Bundy, Dahmer, and the Manson family showed no regard for the interests of their victims.
The Nazis’ actions were evil because they totally ignored the interests of their victims. Likewise with the actions of Vlad the Impaler and all of history’s other despots. Utilitarianism has the theoretical framework to address all of these evils. If history’s greatest crimes have come from an overly narrow moral circle, then expanding the moral circle to include all sentient beings is plausibly required by morality. Numerous times throughout history, people have thought that the particular group of sentient beings they harmed was not worth including in their moral circle, while being horrified at previous atrocities caused by too narrow a moral circle. The moral lesson of history seems to be that if you’re unsure whether or not to include a new being in your moral circle, you probably should. All of these atrocities would have been avoided by following utilitarianism and expanding the moral circle to include all sentient beings. This trend gives us a very good reason to be utilitarians.
The second thing that the atrocities have in common is that the utilitarians at the time were against them.
As Utilitarianism.net writes, utilitarian philosophers were often on the right side of history. Bentham favored decriminalizing homosexuality, the abolition of slavery, and protections for non-human animals. Mill was the second member of parliament to advocate for women’s suffrage and argued for gender equality. Sidgwick advocated for religious freedom. In contrast, philosophers like Kant harbored far less progressive views. As Utilitarianism.net says, “However, Kant also defended many ideas that would be unacceptable to express today:18 He called homosexuality an “unmentionable vice” so wrong that “there are no limitations or exceptions whatsoever that can save [it] from being repudiated completely”.19 He believed masturbation to be so wrong it “exceed[s] even murdering oneself”.20 He argued that organ donation is impermissible and that even “cutting one’s hair in order to sell it is not altogether free from blame.”21 Kant stated that women, servants and children “lack civil personality” and that they are “mere underlings” that “have to be under the direction and protection of other individuals”;22 thus he believed they should not be permitted to vote or take an active role in the affairs of state.23 Further, he wrote about the killing of bastards that “a child that comes into the world apart from marriage is born outside the law” and that society “can ignore its existence (...) and can therefore also ignore its annihilation”.24 Finally, Kant argued for the idea of racial superiority, claiming that “humanity exists in its greatest perfection in the white race”.25.”
Maybe you think this isn’t very good evidence because utilitarians were often on the wrong side of history too, as with Mill’s support for colonialism. This is true; however, if we compare utilitarians to other philosophers, they seem empirically to have been far more progressive. The odds are extremely low that the wrong moral theory would conclude that homosexuality is permissible hundreds of years before it became acceptable even to suggest as much. Most people throughout history have harbored dreadful moral views that clash with our modern sensibilities. The fact that utilitarians were far less accepting of these barbaric practices counts strongly in favor of utilitarianism.
One misguided objection is that this does not count in favor of utilitarianism because it requires a meta-morality to decide which moral principles are true. On this objection, we can’t say utilitarianism is best at getting the correct answers to moral questions without a mechanism for identifying the correct moral conclusions. If the mechanism for identifying the moral conclusion is utilitarianism itself, then the argument would be circular: it would merely show that utilitarianism identifies the conclusions that utilitarianism suggests.
This, however, is false. We don’t need to settle on a precise mechanism for identifying the moral conclusion in all cases to conclude that slavery is immoral, gay people shouldn’t be killed, and women should have the right to vote. For those who accept the immorality of the aforementioned actions, this argument for utilitarianism should hold weight. If we can agree upon certain moral principles, then the moral theory that reached those conclusions first is more likely to be true.
3
The inquisitor brought forth the heretic.
“You claimed the universe is larger than 100 times the size of the earth.”
“Yes,” replied the heretic. “Your model of the size of the universe is waaaaaaaay too small. In terms of things that are possible in this world, your modal circle is way too small. I have a magic universe-size measuring device—want to place bets on whether it will be more than 100 times the size of the earth?”
“Betting goes against my sacred values,” replied the inquisitor.
“Oh come on—surely you’ve read Robin Hanson,” the heretic replied. “Vote values, bet beliefs. You know it’s super accurate.”
The inquisitor hadn’t read Robin Hanson but thought that wasn’t worth pointing out in a public inquisition. He was supposed to be the one calling out the heretic’s ignorance.
“We can’t just bet on anything.”
“Look,” replied the heretic. “I know that being an inquisitor doesn’t pay super well. Remember the suggestion to fund investigative reporting through prediction markets? I think that’s how we should fund inquisitors. That would encourage killing real heretics with demonstrably false views. Surely you want good incentives for inquisitors—unless…you’re the one working for the devil.”
The Monty Python-esque mob began muttering.
“Yeah—good incentives and prediction markets,” they shouted. They had all obviously read Robin Hanson.
The inquisitor caved. The button was pressed and it delivered the verdict that the world was larger than 100 earths. The inquisitor paid up.
“How do we know that this truth generator is really finding true things?” the inquisitor asked suspiciously.
“Look,” replied the heretic. “It is possibly necessary that it is finding true things—therefore, it is finding true things in the real world.”
The inquisitor caved.
“Want to bet that it’s more than 1000 times the size of the earth? And also 10000 times the size of the earth, and also 1 million times, and also 1 billion times?”
The inquisitor took the bets and paid after the verdict was revealed. He wisely made no more bets.
4
If you weren’t already convinced of utilitarianism, you’re likely still not convinced :(. However, the arguments we’ve already established provide a strong case—and we’re just getting started with the overwhelmingly lopsided case for utilitarianism. But just consider for now how unlikely it is that an incorrect theory would have the three following features.
A) Totally blowing other theories out of the water in terms of theoretical virtues.
B) Its proponents were consistently very far ahead of their time, discovering important moral truths hundreds of years before they were widely accepted.
C) The theory is the only one that broadly extends the process common to all examples of moral progress.
Let’s say our prior in utilitarianism was 1/20—so 1 to 19 odds. The odds of A being the case if utilitarianism were false seem very low—call it a Bayes factor of 5 to 1 in favor of utilitarianism. So we update to 5 to 19. B seems similarly unlikely—maybe 3 to 1—so we update to 15 to 19. C also seems pretty unlikely, maybe 4 to 1. So we update to 60 to 19. And we’re just scratching the surface of the terrifying case for utilitarianism, but we’re already in territory pretty close to justified belief (the arithmetic is spelled out below). Part three will take things up a notch though.
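To make the update explicit, here is the arithmetic in odds form (a rough sketch; the Bayes factors of 5, 3, and 4 are just the loose estimates above, not precise measurements):

$$1:19 \;\xrightarrow{\times 5}\; 5:19 \;\xrightarrow{\times 3}\; 15:19 \;\xrightarrow{\times 4}\; 60:19, \qquad P(\text{utilitarianism}) = \frac{60}{60+19} = \frac{60}{79} \approx 0.76$$

So even starting from a skeptical 5% prior, these three considerations alone would push the posterior to roughly three in four.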
Utilitarianism wins outright part 3
This is where we take things up a notch
0
Sidenote: I wrote this argument when I was younger and more foolish, so the syllogism has some unnecessary parts to it.
“I do not attack you for if I were you I wouldn’t want me to. I do not let you attack me, for if I were me (As I am), I wouldn’t let you attack me. I don’t pay you five thousand dollars for no reason, for if I were both of us I wouldn’t want to.”
—Words of wisdom
The third argument that shall be presented for utilitarianism is based on universalizing egoism. It has 12 premises and is relatively technical. However, before discussing and defending the premises I shall provide a brief overview of the argument.
Many intuitions motivate utilitarianism. However, perhaps the most fundamental is the intuition that there are certain things that determine how well our lives go, which we should maximize in the absence of impacts on others. From this starting point, we imagine morality being similar to what we do when we act in our own best interest, except taking the interests of everyone equally. The problem with one who does only what’s best for themselves is the “for themselves” part. Morality is the same as what we would do if we acted in our own self-interest but possessed all possible interests. The reason why torturing another is wrong is that one who experienced the lives of both the torturer and the tortured would prefer not to be tortured, even if it increases their quality of life when they are the torturer. The reason why racism is wrong is that one who experienced the lives of both racists and victims of racism would not treat their interests as any less valuable based on the color of their skin. If one had a skin condition that caused their skin color to change over time, they would have no rational reason to privilege their interests when their skin was a particular color.
Now for the premises.
1 A rational egoist is defined as someone who does only what produces the most good for themselves
2 A rational egoist would do only what produces the most happiness for themselves
3 Therefore only happiness is good (for selves who are rational egoists)
4 Only the types of things that are good for selves who are rational egoists are good for selves who are not rational egoists, unless they have unique benefits that only apply to rational egoists or unique benefits that only apply to non-rational egoists.
5 Happiness does not have unique benefits that only apply to rational egoists
6 Therefore only happiness is good for selves who are or are not rational egoists
7 All selves either are or are not rational egoists
8 Therefore, only happiness is good for selves
9 Something is good if and only if it is good for selves
10 Therefore only happiness is good
11 We should maximize good
12 Therefore, we should maximize only happiness
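Charting how the premises chain together (a sketch; the numbering above doesn’t mark the sub-inferences explicitly):

$$\{1,2\} \vdash 3, \quad \{3,4,5\} \vdash 6, \quad \{6,7\} \vdash 8, \quad \{8,9\} \vdash 10, \quad \{10,11\} \vdash 12$$

Each intermediate conclusion (3, 6, 8, 10) feeds the next step, terminating in 12.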
This argument is valid, so the conclusion follows logically and inescapably from the premises—as William Lane Craig is fond of saying. So now let’s argue for the premises.
1
The first premise says that a rational egoist is defined as someone who does only what produces the most good for themselves. If you reject this premise you are confused. This is a definition.
2
The second premise says that a rational egoist would do only what produces the most happiness for themselves. So roughly, if someone was only doing what benefitted themselves, they would maximize their own happiness.
This has several supporting arguments.
1 This seems to be common sense. When we see someone being selfish, it seems to roughly track doing what makes them happy, even if it doesn’t make others happy. Other criteria for morality include rights and virtue, but it seems strange to imagine an egoist who would maximize their virtue, or minimize the risk of violations of their rights. If they did, they would spend all their time indoors, in a bunker, to minimize the risk of their rights being violated, or immediately commit suicide, to prevent the possibility of anyone violating their rights. It also seems strange to imagine an egoist who merely tries to be as virtuous as possible, despite the personal suffering caused. When we analyze the concept of virtue, it seems clear that the motivation for virtue ethics is based on it being good for others, or good in some other abstract sense, rather than good for the virtuous agent. If a person gets shot in the face, they have no reason to care about whether the shot was the result of a person intentionally violating their rights. Covid deaths wouldn’t matter any more morally if covid were made in a lab.
Interlude: A brief dialogue
(The man took out the sharp objects and began to torture the utilitarian.
“Why are you doing this?” asked the utilitarian.
“Well,” said the man. “A genie told me that if I did it he’d increase my virtue such that my overall virtue would increase.”
“But surely it’s unvirtuous to torture me?”
“Oh of course,” replied the torturer. “But it’s stipulated that my overall virtue would increase, and that’s what I care about being a virtue ethicist egoist.”
The utilitarian growled.
“Look,” said the strange virtue ethicist egoist. “I don’t like this any more than you do. In fact, I’m an empath, so I’m actually experiencing all of the agony that you’re experiencing and I find it super unpleasant. But I only care about maximizing my own virtue so…sacrifices must be made.” )
—A monologue from a pretty bad egoist.
“Safety first,” said the anti-Bryan Caplan. He jumped off the building, dying immediately, to ensure no one violated his rights.
—Another pretty bad egoist, albeit better than the first one because of the avoidance of torture.
2 When combined with the other arguments, we conclude that what a self-interested person would pursue is what should be maximized generally. However, it would be extremely strange to maximize the other criteria for morality. Virtually no one is a consequentialist about virtue or rights—one who cares about maximizing total virtue or total rights protection.
“I tortured the little child to prevent those two torturers from doing torture because it increased net virtue.”
—Not a good virtue ethicist.
3 We can imagine a situation with an agent who has little more than mental states, and it seems like things can be good or bad for them. Imagine a sentient plant who feels immense agony. It seems reasonable to say that the agony it experiences is bad for it, and that if it were rational and self-interested it would try to end its agony. However, in this imagined case, the plant has no ability to move, and all its agony is the byproduct of natural features. Given that its misery is the result of its own genetic formation, it seems strange to say its rights are being violated, and given its causal impotence, it is unclear how it could have virtue. Yet despite that, it seems it could still have interests.
We can consider a parallel case of a robot that does not experience happiness or suffering. Even though this robot acts exactly like us, it would not matter absent the ability to feel happiness or suffering. These two intuitions combine to form the view that beings matter if and only if they can experience happiness. This serves as strong evidence for utilitarianism.
One could object that rights, virtue, or other non-hedonistic goods are an emergent property of happiness, such that one only gains them when one can experience happiness. However, this is deeply implausible, because it would require strong emergence. As Chalmers explains, weakly emergent properties are reducible to the interactions of lower-level properties. For example, chairs are reducible to atoms, given that we need nothing more to explain the properties of a chair than knowing the ways that atoms function; chairs are purely the result of atoms functioning. Strongly emergent properties, by contrast, are not reducible to lower-level properties. Chalmers argues that there is only one thing in the universe that is strongly emergent: consciousness. Whether or not this is true, it illustrates the broader principle that strong emergence is prima facie unlikely. Rights are clearly not reducible to happiness; no amount of happiness magically turns into a right. If you have to posit dozens of instances of a bizarre phenomenon that can’t be conclusively established to apply to anything, your theory is probably wrong.
4 Hedonism seems to unify the things that we care about for ourselves. If someone takes an action to benefit themselves, we generally take them to be acting rationally if that action brings about happiness for them. When deciding what food to eat, we act based on what food will bring us the most happiness. It seems strange to imagine any other criterion for deciding upon food, hobbies, or relationships. We generally think someone is acting reasonably if they are in a romantic relationship, given the joy it brings them, but if someone spent their days picking grass, we would see them as likely making a mistake, particularly if it brought them no happiness. Additionally, the rights that we care about are generally conducive to utility: we care about the right not to be punched by strangers, but not the right not to be talked to by strangers, because only the first right is conducive to utility. Additionally, we care about beauty only if it’s experienced; a beautiful unobserved galaxy would not be desirable. Even respect for our wishes after our death is something we only care about if it increases utility. We don’t think that we should light a candle on the grave of a person who’s been dead for 2000 years, even if they had a desire during life for the candle on their grave to be lit.
“I ate that ice-cream because it increased my virtue slightly even though I hated it.”
—A very weird dude.
5 As Neil Sinhababu argues, we reach the belief in pleasure’s goodness via the process of phenomenal introspection, whereby we think about our experiences and determine what they’re like. This process is widely regarded as reliable: organisms that were able to form accurate beliefs about their mental states were more likely to reproduce, and psychologists and biologists generally regard it as reliable.
6 Hedonism provides the only plausible account of moral ontology. While any attempt at accounting for moral ontology will be deeply speculative, hedonism provides a plausible account. It seems at least logically possible that there would be certain mental states that are objectively worth pursuing. Much like it’s possible to have vivid mental states involving colors that we as humans will never see, it seems equally possible that there could be mental states that provide us with reasons to pursue them. It’s not clear why desirable experiences are any stranger than experiences of confusion, dizziness, new imaginary colors, echolocation, or the wild experiences people have when they consume psychedelics. Given this, there seems to be a possible way of having desirable mental states. Additionally, there seems to be a plausible evolutionary account of why we would have desirable mental states: if those mental states are apprehended as desirable, then organisms will find them motivating. This serves as a mechanism to get organisms to do things that increase fitness and avoid things that decrease fitness. On the other hand, there is no plausible evolutionary account of how rights could arise. How would beings evolve rights? What evolutionary benefit would accrue from evolving rights? Only the desirability of happiness can be explained by the fine-tuning process of evolution.
7 An additional argument can be made for the goodness of happiness. A definition along the lines of “good experiences” is the only way to explain what happiness is. It’s not merely a desired experience, given that we can desire bad experiences; for example, one who thinks they deserve to suffer could desire experiencing suffering. Additionally, if we are hungry but never think about it or wish it would stop, it seems that our hunger is still causing us to suffer, even though we never actually desire for it to stop. The non-hedonist could accept this, but argue that it’s not sufficient to prove that happiness is desirable: it merely proves that we perceive happiness as a desirable mental state, not that the mental state is in fact desirable. However, if the only way we can explain what happiness truly is is by reference to its desirability, this counts in favor of hedonism. Other accounts require us to be systematically deluded about the contents of our mental experiences.
8 Only happiness seems to possess desire-independent relevance. In the cases that will be discussed regarding preferential accounts, it seems clear that agents are acting irrationally even if they’re acting in accordance with their desires. A person who doesn’t care about their suffering on future Tuesdays is being irrational. However, this does not apply to rights. A person who waives their right to their house and gives their house to someone is not being irrational. A person who gives a friend a house key, allowing them to come into their house without their consent, is similarly not acting irrationally. Rights seem to be waivable; happiness does not seem to be.
Non-hedonists could argue that hedonism doesn’t adequately explain the things that make people well off. For example, if a person is molested while unconscious without their knowledge, this seems bad for them, even if they don’t find out about it.
However, this intuition can also be explained as a case of caring about happiness-maximizing heuristics. Generally, people being molested causes harm, so we have a strong intuitive reason to think it makes people worse off, even when we remove the hedonistic components.
We can consider another case that has fewer real-world components to get a clearer picture. Suppose that every second, aliens molest every human a googol times. No human ever finds out about it. Additionally, suppose that the aliens start out experiencing enormous amounts of agony, which is diminished every time they molest a human, such that if they molested no humans, their suffering would be greater than the sum total of suffering in human history. However, if they molest humans a googol times, they are able to have enjoyable mental states. In this case, it seems that what the aliens are doing is justified. No human ever finds out about it or is made hedonically worse off. However, if this is bad for people, we must stipulate that not only are the aliens’ actions bad, they’re by far the worst act in the history of the world.
If we believed the aliens’ molestation was one one-thousandth as immoral as a traditional molestation, and stipulate that there are as many molestations done by humans as there are humans, we’d have to conclude that the aliens’ molestation, which makes no one’s experiences worse, is 10^97 times worse than all sexual assaults done by humans. Thus, if one could either prevent all sexual assaults done by humans or have a 1 in 10^97 chance of preventing the aliens’ actions, the two options would come out equally good in expectation.
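Spelling out the arithmetic behind the 10^97 figure (under the stipulations above: a googol alien molestations per human, each one one-thousandth as bad as an ordinary assault, and one human-committed assault per human):

$$\frac{10^{100}}{10^{3}} = 10^{97} \text{ ordinary-assault equivalents per person}$$

Since human-committed assaults number only one per person, the aliens’ conduct comes out 10^97 times worse in total, which is why a 1-in-10^97 chance of preventing it matches preventing every human assault in expected value.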
One might hold the intuition that the aliens’ actions are still wrong. However, it seems hard to hold that the aliens’ actions are the worst thing in the world by orders of magnitude—such that the holocaust is functionally a rounding error.
If one holds the intuition that the aliens’ actions are wrong, we can change the situation to make it even more counterintuitive. Suppose the aliens are composed entirely of air. Thus, when they molest people, they just swap out with the air in front of people. Surely the air deriving sexual pleasure from entering us would not be the worst thing in the world.
One could object that what it is rational for one to pursue is the satisfaction of their own desires, rather than their own happiness. However, this view seems mistaken. It seems clear that there are some types of desires that are worth pursuing and others that are not. I shall argue that only desires for happiness are worth pursuing.
1 Imagine several cases of pursuers of their own desires.
A A person who is a consistent anorexic, who has a preference for a thin figure, even if it results in them starving.
B A person who desires spending their days picking grass, and would prefer that to having more happiness.
C A person who has a strong preference for being in an abusive relationship, even if it minimizes their happiness.
D A person who is indifferent to suffering that occurs on the left side of their body. They experience suffering on both sides exactly the same way and apprehend it as undesirable, BUT they have an arbitrary aversion to right-side suffering infinitely greater than their aversion to left-side suffering
It seems intuitively like these preference maximizers are being irrational. These intuitions seem to be decisive.
2 What should we do about a person with infinitely strong preferences? Suppose someone has a strange view: that being within 50 feet of other humans is unfathomably morally wrong. They would endure infinite torture rather than be within 50 feet of other humans. It seems like having them be around other humans would still be less bad than inflicting infinite torture upon them. Infinite preferences seem to pose a problem for preference utilitarianism.
3 What about circular preferences? I might prefer apples to bananas, bananas to oranges, but oranges to apples.
4 If preferences are not linked to mental states, then we can imagine strange cases where things that seem not to be good or bad for an agent are good or bad for the agent according to preference utilitarianism. For example, imagine a person who has a strong preference for their country winning the war, despite being trapped in an inescapable cave. It seems strange that their side losing the war would be bad for them, despite them never finding out about it. It also seems strange to imagine that Marx is made worse off by communism not being implemented after his death, or that Milton Friedman is made worse off by regulations that pass after his death. It seems equally strange to imagine that slave owners were made worse off by racial equality occurring long after their death.
5 Imagine a dead alien civilization that had a desire for there being no other wide-scale civilizations. If the civilization had enough people, preference views would imply that humanity would have an obligation to go extinct to fulfill the preference of the alien civilization, despite none of them ever knowing that their preference was fulfilled.
One might object that we should look at people’s rational preferences, rather than the preferences that they in fact have. However, this is circular: when analyzing what a rational self-interested person would desire, merely appealing to their rational self-interest is not helpful. Using happiness as the criterion, by contrast, explains why a consistent anorexic, a grass picker, or one who desires being in an abusive relationship seems to harbor irrational preferences. Similarly, preferences for states of the world after one dies seem irrational, for one cannot benefit from them if one is dead. It’s hard to imagine a rational preference that does not link back to happiness. For one to hold this view, they’d have to give a non-hedonistic account of rational preferences.
One might additionally argue that what matters is people’s mental experience of having their preferences fulfilled. Yet this seems not to be the case. Suppose a person got addicted to a drug that was free, had no negative side effects, and was easy to access. Each day they would have a preference for consuming the drug rather than not consuming it, yet it seems hard to imagine that it would be good for them to consume the drug, assuming it does not make them happy at all.
Additionally, imagine a case where, for most of a person’s life, they have had a strong desire to die a Democrat. However, on their deathbed they convert to conservatism, knowing that their desire to be a registered Democrat upon death will not be fulfilled. It seems it would be good for them to register as a Republican if it made them happy, even if it reduced their overall preference fulfillment.
One might object that this is a negative preference, rather than a positive one: the person has no positive preference for the drug, merely a preference against missing the drug. Yet this seems hard to justify. In the case of the drug, they would not be harmed by not consuming it; their desire would merely go unfulfilled. It seems nonetheless like the drug would not benefit them.
Conversely, it seems clear that other types of preferences are good to create. Creating a preference for reading books that brings one immense joy is a good thing. Preferences should be created if and only if they improve the happiness of the subject. This refutes the negative preference objection. If one had a strong preference for not going through a day without reading a book, and the books that they read brought them great joy, it would be good to create the preference, even if the preference were a negative preference.
One might adopt objective list theory, according to which what matters is fulfillment of a list of objectively good things, including happiness, friendship, honor, health, and others. This view runs into several issues.
First, it’s difficult to give an adequate account of what makes something part of the objectively good list. Hedonism is monist, saying that there’s only one type of thing that is good. Objective list theories problematically say that there is a series of unconnected things that are good. This is less parsimonious and fails to provide an adequate account of how things are good. It seems a priori more plausible that there would be some good experiences than that there would be an unrelated bundle of good things that aren’t tied to experience.
Second, objective list theories can’t account for why things are only good for sentient beings. It seems conceivable that, on objective list theories, non-sentient beings could fulfill things on their objective list. Objective list theories just say that things are good in virtue of being part of an objective list; however, there’s no necessary connection between a being’s ability to experience happiness or suffering and things being able to be part of its objective list.
Third, objective list theories can’t account for why all the things that are on the objective list are generally conducive to happiness. Virtue, friendship, love, and decency are generally conducive to happiness.
Fourth, objective list theories are elitist (Crisp 2001), holding that things can be good for people even if they neither want them nor derive any positive experience from them. It’s counterintuitive that an unenjoyable experience that one doesn’t want can be good for them.
Fifth, all of the things on the objective list only seem good if they’re generally conducive to happiness. We might hold that knowledge is good, but it would be strange to suppose that arbitrary facts that benefit no one are good. The world would not be a better place if we all had the information about whether the number of particles in the universe were even or odd. Friendship might be good, but only if the friends are made mutually better off.
Sixth, objective list theories would have trouble accounting for strange counterfactual alien scenarios. We can imagine an alien civilization that derives its primary satisfaction from producing helium. This alien civilization reproduces asexually when helium is produced and only cares about knowledge to the extent that it maximizes helium. This alien civilization finds friendship and love to be deeply unpleasant—strange, deviant things that only bizarre subsets of the population care about. These alien philosophers adopt an objective list theory where the only member of the objective list is helium maximization, and they see it as an absurd implication of hedonic views that they tolerate bizarre deviations from the good like love, friendship, and (non-helium-maximization-related) knowledge.
The aliens do not actively detest love or friendship; they merely find them far less good than helium maximization. They view love and friendship the way one might view watching paint dry. Alien Hume famously said that it is not contrary to reason to prefer the destruction of all helium to the scratching of one’s finger.
It seems clear that, for these aliens, helium maximization is the thing that they should do. They should not pursue friendship if they don’t want it. Nor should they pursue love. If we could convince the aliens that they had an obligation to maximize some non-helium related thing, it would be bad to do so, for they’d be far less happy.
Perhaps you don’t share the helium intuition, thinking that this practice is so strange that it can’t be actually good. However, if we think broadly, many of the things that we think are good are no less arbitrary than helium maximization. Sex is good, yet deriving pleasure from sex is no stranger than deriving pleasure from helium. Music is good, despite it being arbitrarily tuned noises. Presumably listening to loud clanging would not be good, to the extent that it is not enjoyable. It seems nearly impossible to give an account of music according to which it’s part of the objective list, but listening to frying pans clanging together is not.
We might adopt the view that what matters morally is that one derives happiness from things that are truly good. This would avoid the counterintuitive conclusion of utilitarianism that we should plug into the experience machine, where we would have a simulated life, which we believe is real, with more happiness; and it avoids Sidgwick’s objection that all the things that seem to be rights or virtues are happiness-maximizing heuristics. However, this view has a variety of issues.
The previous objections to objective list theories still apply. Additionally, there are compelling counterexamples. Suppose one gained infinite joy from picking grass. Surely picking grass would make them better off. Additionally, suppose that a person was in a simulated torture chamber. Surely that would be bad for them. Unless there’s some fundamental asymmetry between happiness and suffering, the same principle would apply to happiness: a simulated experience of happiness would still be a good thing. Additionally, it’s unclear how this view would account for composite experiences of happiness from two different things. Suppose that someone gains happiness from the combination of gaining deep knowledge and engaging in some strange sexual act. Would the happiness they got from that act be morally relevant? If so, then any “impure” pleasure could be made morally good by attaching to it some genuinely good element, such as knowledge acquisition or careful introspection. Additionally, suppose that one was in the experience machine but exercised wisdom and virtue. Would their happiness then be good for them? This shows the logistical troubles with such a view.
If we say that happiness is not at all good, absent virtue, then it would be morally neutral to make people already in the experience machine achieve far less happiness than they would otherwise. This is a very counterintuitive view. Additionally, if we accept this view, we would have to accept one of two other conclusions. If the suffering of the people in the experience machine is morally bad, but their happiness is not morally good, giving them five million units of happiness and one unit of suffering would be morally bad, because it brings about something bad, but nothing good. This is a very difficult pill to swallow. However, if neither their happiness nor their suffering is morally relevant, then it would be morally neutral to cause them to suffer immensely (aside from issues of consent).
This broader point can be illustrated with an example. Imagine a twin earth, identical to our own, but where no one had a preference for not being in the experience machine. To them, only their mental experiences of things matter. It makes no difference, to them, whether they are in the experience machine or not. It seems that in this world, there would be nothing wrong with plugging into the experience machine. The only reason plugging into the experience machine seems objectionable is because most people have a preference for not plugging into the experience machine, and find the thought distressing.
Additional objections can be given to the experience machine (Lazari-Radek, 2014). Several factors count against our intuitions about the experience machine. First, there is widespread status quo bias. As Singer explains “Felipe De Brigard decided to test whether the status quo bias does make a difference to our willingness to enter the experience machine. He asked people to imagine that they are already connected to an experience machine, and now face the choice of remaining connected, or going back to live in reality. Participants in the experiment were randomly offered one of three different vignettes: in the neutral vignette, you are simply told that you can go back to reality, but not given any information about what reality will be like for you. In the negative vignette, you are told that in reality you are a prisoner in a maximum-security prison, and in the positive vignette you are told that in reality you are a multi-millionaire artist living in Monaco. Of participants given the neutral vignette, almost half (46 per cent) said that they would prefer to stay plugged into the experience machine. Among those given the negative vignette, that figure rose to 87 per cent. Most remarkably, of those given the positive vignette, exactly half preferred to stay connected to the machine, rather than return to reality as a multi-millionaire artist living in Monaco.23”. This strongly counts against the conclusion that we have an intrinsic preference for reality. Additionally, there is a strong evolutionary reason for organisms to have a preference for actually doing things in the real world, rather than wireheading.
A final point that can be made is that the preference for the real can be explained on a utilitarian account; preferring the real tends to maximize happiness. In cases where it does not, this intuition seems to fade. It is counterintuitive to think that there would be something wrong about plugging into a virtual reality game, for a short time, because that is something we have familiarity with, and tends to maximize happiness.
The objective list theorist could argue against hedonism based on bad sources of pleasure. For example, even if a crowd of jeering spectators derived immense pleasure from one person being tortured, it would still be bad to torture the person.
However, hedonism is able to accommodate the intuition against bad pleasures. Every pleasure that we think of as a bad pleasure is not conducive to happiness generally. Sadistically torturing people does not generally maximize happiness.
Additionally, this principle has compelling counterexamples. We can consider a man called Tim. Tim derives immense satisfaction from watching scenes that appear to depict torture. Tim would never torture anyone and abhors violence. In fact, he sometimes feels guilty about his strange desires and donates vast amounts of money to charities. Tim also makes sure that the content he watches that appears to depict people being tortured does not actually involve people being tortured. Tim spends hours searching for content that looks like people being tortured but involves no actual torture. Additionally, we can suppose that this is the highlight of Tim’s life. He enjoys it so much that without it his life would be miserable. Despite suffering from clinical depression, Tim finds the experiences so enjoyable that he regards his life as generally good. It seems in this case that Tim is truly made better off by the joy he derives from this sadistic content.
However, suppose additionally that, despite Tim’s incredibly careful selection process, Tim is deceived by an evil demon, who manipulates the laws of physics to make people actually be brutally tortured, despite any reasonable observer concluding that no one was truly being tortured. It seems that in this case, while the person who is tortured is made worse off by the torture, Tim is made better off by it. All else equal, it seems that making Tim enjoy viewing the torture less (assuming he’d view the same amount of torture) is bad.
Imagine another case of an alien civilization that views the suffering of humans. This alien civilization starts in a state of vast agony, yet becomes less miserable each time they watch a human suffer. If they view all human suffering, their overall hedonic state rises to zero, from a starting point significantly worse than being boiled alive. Again, it seems like the aliens’ sadism is not a bad thing.
If we think that enjoying the suffering of others is actively bad, independent of the suffering of others, then it would be morally good to make the aliens unable to see suffering. This is deeply implausible.
One might object that deriving pleasure from sadism is morally neutral, neither good nor bad. However, in both scenarios posited, it seems obvious that the world is better because the aliens enjoy suffering enough not to be as miserable as beings being boiled alive. If the only way for the aliens to relieve their unfathomable agony were to torture one person, this seems justified.
We can imagine another case of a person, Wyatt, who takes immense satisfaction in eating meat because he knows that the animal suffered. He feels deeply guilty about this fact, but cannot enjoy eating meat unless he knows that the animal suffered. Wyatt continues to eat meat, but donates to charities that help animals because he feels guilty. In this case, it seems that Wyatt enjoying the meat, assuming it won’t cause him to eat any more meat, is not a bad thing. To the extent that Wyatt enjoys meat because he knows about the suffering, while others enjoy meat that causes enormous suffering but don’t care whether or not they know about it, it’s hard to see how Wyatt’s meat enjoyment is any worse than any of ours. Much like there seems to be no morally relevant difference between a person who tortures others because they like when others suffer and one who likes the taste of people after they’ve been tortured, there’s no difference between one who enjoys the suffering of the animals that they eat and one who merely enjoys the taste of the animals.
If Wyatt is morally no different from the rest of people when he eats meat, then either Wyatt’s sadistic meat eating is morally good or the joy that most people get from eating meat is morally neutral. However, this is deeply implausible. If meat tasted less good, but people ate the same amount of meat, that would be a worse world. If sadistic pleasure can be good, then enough sadistic pleasure can outweigh the badness of torture.
Additionally, there are many cases where people enjoy the suffering of others, which are not found objectionable. If the parents of a murder victim derive satisfaction from knowing that the murderer is rotting in prison, it wouldn’t be desirable to deprive them of that satisfaction.
Additionally, we can imagine a world exactly like our own, except humans would never be happy in their lives unless they, upon their sixth birthday, slap someone in the face and revel in the enjoyment. In this case, all of their enjoyment is coming from the fact that they slapped someone, but it wouldn't be good to condemn everyone to never being happy.
Additionally, we can imagine a scenario where every person will be given an extra year of happy life by torturing one person. In this case, their happy life is only existing because of the torture, but this case seems clearly justified.
Our anti-sadism intuition comes from the fact that sadism is not generally conducive to happiness. However, if it were conducive to happiness, it would be justified. In cases like schadenfreude, where we derive joy from the suffering of others in ways that make things go best, we don’t find it counterintuitive.
The objective list theorist could argue that what matters is deriving pleasure from appreciating the good. Thus, in the case of deriving unrelated pleasure from good things, this wouldn’t increase well-being, because the pleasure is not related to appreciation of the good. On this account, the formula for determining how much one is benefited by an event is the following.
Well-being from event Q (where well-being is that which is non-instrumentally good for an individual) = P + (P × O), where P is the pleasure of the experience and O is the objective-list value of Q. Thus, the value of pleasure from being in love or from friendship is greater than the value of pleasure from things like eating ice cream. To illustrate: eating ice cream is not part of the objective list, so it produces well-being W = P + P × 0 = P, the raw pleasure of the experience. This view, however, runs into issues.
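For contrast with the ice cream case, consider a pleasure that is on the list. If being in love has an objective-list value of O = 1, the formula doubles the raw pleasure:

$$W_{\text{ice cream}} = P(1 + 0) = P, \qquad W_{\text{love}} = P(1 + 1) = 2P$$

This is the sense in which, in the utility monster case below, half the benefit of being in love comes from the happiness and half from the intrinsic value of love.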
1 Suppose that having a deep conversation with someone has an objective-list value of 1, and that mulling over ideas with someone brings a great deal of pleasure, culminating in an epiphany. If this were true, then having the epiphany while mulling over the ideas with someone would be twice as valuable as having the same epiphany alone. It seems strange to think that whether the epiphany was a result of talking with the other person is relevant to how good the epiphany is.
2 There seem to be troubles with disentangling what types of pleasures are generated by appreciating someone else. Suppose that every time a person interacts with another, they begin laughing uncontrollably. In this case, is their happiness fulfilling the objective list?
3 It seems to result in equal pleasures and pains not offsetting. Suppose that every time I see someone I get a headache, such that the overall quality of interacting with them is neutral: the pleasure precisely cancels out the badness of the headache. In this case, it seems strange to say that it’s actively good to interact with them. However, on an objective list account, it seems it would be, because the pain’s badness is left unchanged by the interaction, but the pleasure’s goodness increases because I’m interacting with another person.
4 This seems to run into strange conclusions relating to a modified utility monster. Suppose that there’s a being who derives a trillion times as much pleasure from all experiences as the average human. Additionally, suppose that the modified utility monster has the property of universal experiencing: after it dies, it will have all experiences that it did not have in life. For example, if the modified utility monster eats chocolate ice cream and then immediately dies, after death it would have every possible experience except that of eating the precise type of ice cream. However, if the modified utility monster dies without ever being in love, then in the afterlife it will derive pleasure from the experience of being in love, but will check “being in love” off its objective list. Suppose we think that the objective-list value of being in love is 1, such that half of the benefits of being in love come from the happiness of it, and half come from the intrinsic value of love. Now, we have the option of sacrificing a billion people to allow the utility monster to find love. In this case, being in love will not change the composition of the utility monster’s experiences: every positive experience that the utility monster has from being in love will be an experience that it will be deprived of when it dies. However, on objective list theories, the value of the utility monster being truly in love with an actual person is twice as great as the value of having the experience of being in love without actually being in love with a real person. Thus, the objective list theory would entail that it would be good to sacrifice a billion people to cause the utility monster to find love, even though the utility monster’s subjective experience will be fully unchanged. This is deeply implausible.
5 We can create a newly modified utility monster. This utility monster experiences far more happiness than all humans combined. This utility monster does not have all experiences after its death. However, this utility monster is a hedonist. Its view is that love is only instrumentally valuable. Thus, when it is in love, it is not appreciating the love for its own sake—it’s only appreciating the pleasure it gets from being in love. Thus, on objective list theories, the happiness it gets from being in love is no more valuable than the pleasure it gets from eating ice cream, because it is not getting pleasure from appreciating things that are truly good.
Suppose additionally that one has the option to sacrifice a billion people to convince the utility monster to be an objective list theorist. The utility monster is currently in love, and the happiness it gets from being in love is greater than the sum total of happiness for the aforementioned billion people. On objective list theories, it seems that sacrificing a billion people to change the mind of the utility monster would be good, for it would raise the value of the pleasure by causing it to be experienced as a result of appreciation of love, even though the total amount of happiness doesn’t change. In fact, we can even suppose that becoming convinced of the objective list theory causes the utility monster to become distraught, bringing about half as much suffering as the pleasure they get from being in love. Nonetheless, objective list theories seem to hold that sacrificing a billion people to make its hedonic state worse, but to convince it of objective list theory would be good, because the extra value of the appreciation of love outweighs both the deaths of a billion people and the decrease in happiness from the utility monster becoming distraught over being convinced of objective list theory. This is similarly deeply implausible.
6 Suppose that Jim and Mary are married. Jim is out of town, but is going to watch a movie. Mary is also going to watch a movie. However, the next day Jim shall return, and they shall watch a movie together. Consider the following happiness distributions of movies.
Movie A brings Jim 10 units of happiness and Mary 10
Movie B brings Jim 150 units of happiness and Mary 1
Movie C brings Jim 1 unit of happiness and Mary 150
While watching the movies, both Jim and Mary appreciate the presence of the other and bond over the movies, such that watching a movie with the other increases each of their enjoyment of the movie by 5 units. It seems intuitive that the optimal arrangement would be for Jim to watch movie B while out of town, Mary to watch movie C while Jim is out of town, and then for them to watch movie A together. However, objective list theorists cannot accommodate this intuition.
Suppose that an objective list theorist says that, in this case, watching the movie with one’s spouse has an objective-list value of 0.5, such that two thirds of the value of watching the movie comes from the pleasure it brings, and one third comes from the value of watching it with one’s spouse. On this account, the total value of the intuitive arrangement, where Mary watches movie C by herself, Jim watches movie B by himself, and they watch movie A together, is 150 + 150 + 1.5 × ((10 + 5) × 2) = 345. However, if Jim watches B by himself, Mary watches movie A by herself, and they watch movie C together, the total value would be 150 + 10 + 1.5 × (155 + 6) = 401.5. Thus, on this account the world is better if Jim and Mary watch together a movie that only one of them enjoys very much, rather than each watching the movie they individually enjoy independently and watching the movie that they both enjoy together. This is deeply implausible.
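Working both arrangements out explicitly with O = 0.5 (so shared viewing multiplies each person’s pleasure by 1.5):

$$\text{Intuitive arrangement: } 150 + 150 + 1.5 \times (15 + 15) = 345$$
$$\text{Alternative arrangement: } 150 + 10 + 1.5 \times (155 + 6) = 401.5$$

The objective-list bonus rewards sharing whichever movie has the largest combined pleasure, even though sharing movie A and splitting B and C produces more total happiness.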
One might object that the well-being they get from watching the movie is not truly a result of appreciating the other. However, it is hard to draw a distinction between this and other cases. Suppose that Jim and Mary hold hands during the movie and take great comfort in the presence of the other. The extra value they get from watching the movie seems clearly to be derived from the appreciation of the other. Additionally, suppose that when they watch the movie together, the only enjoyment they get relates to focusing on the other: they ignore the movie and simply focus on each other, deriving precisely the same amount of pleasure from this as they would have from watching the movie. In this case, it still seems like it would be better to watch A together, and for each to separately enjoy their preferred movie between B and C.
7 Suppose that Tim and Sue are planning to have a date. Tim is a big fan of the philosopher Derek Parfit, who advocates for objective list theories. Parfit is planning to give a talk on Thursday arguing for objective list theory. Tim is currently a hedonist but thinks it’s very likely that Parfit will convince him that hedonism is false, and objective list theory is correct. Tim and Sue can either have their date on Wednesday or Friday. Suppose additionally that being convinced of objective list theory would not increase Tim’s happiness at all. However, if he became convinced of objective list theory, the happiness he’d get from his interactions with Sue would come from appreciating his relationship with her as being part of the objective list, rather than merely as a tool for increasing mutual happiness. It seems an objective list theorist would think it rather important that the date is scheduled for Friday, because it increases the value of the date. If Tim becomes an objective list theorist, the value of the date would increase dramatically (its value would approximately double). In fact, on objective list theorists' accounts, it seems that convincing hedonists of objective list theory would be extremely important, and dramatically increase the value of their lives. This is deeply implausible. Peter Singer’s marriage is no less valuable for him being a hedonist about value.
8 Suppose we adopt a particular account of consciousness, according to which one’s experience of an object or event is wholly produced by their brain, and does not truly observe the external world. Instead, the brain receives inputs from the external world and hallucinates a conscious experience that best makes sense of the inputs. Regardless of the plausibility of this account of consciousness, it seems that objective list theory would hold that it would have significant normative implications. If one’s experience of being in love with their wife is truly a mental image generated entirely by their brain in a way that makes sense of the signals it’s receiving from the external world, then when one appreciates their loved one, they’re truly only appreciating the joy they get from the mental hallucination. On objective list theories, this would seem to dramatically undercut the value of all things, for being in love is truly just a mental hallucination that roughly tracks what’s going on in the external world. However, this is implausible. Whether or not we are directly observing things as they are, or our brains are making up the world that we see, in a way that roughly mirrors the real world but is understandable to us should not undercut the value of being in love.
To clarify the case, suppose that when one sees someone with whom they’re in love, the only inputs from the person that they’re truly seeing as they actually are in the real world are a series of photons. The rest of the person is hallucinated by their mind, in a way that roughly tracks what the other person is actually doing, but is not totally identical. If this is true, then consciousness is very much like the experience machine, with our mind creating an experience machine-esque experience in ways that roughly track what’s happening in the real world.
One might object that the well-being we’d experience if this were the case would still fulfill our objective list because it roughly tracks the real world. Thus, it is close enough to actual perception of the good things for it to count as appreciating the good. However, this reply runs into several problems.
First, the objective list theory holds that even if events roughly track what is going on in the real world, so long as they’re not in the real world, they don’t possess the relevant value. Objective list theorists would presumably object to replacing one’s actual self with a robot version of themself, which acts exactly as they would but is not conscious, while placing their actual self in an experience machine whose experiences are identical to what they would be in the real world. Even if the experience machine caused one to experience a life exactly like the one they were living prior to entering it, and replaced them with a robot twin who acted like them, such that the world they hallucinated perfectly tracked the life they were previously living, an objective list theorist would presumably find that objectionable.
Additionally, we can modify the case slightly. Suppose that the version of the world that they hallucinated was subtly different from the actual world. For example, in the actual world their wife is relatively physically unattractive, but their hallucination of the actual world perceives their wife as being very attractive. Additionally, the version of the world that they hallucinate often fails to track sounds’ true loudness. Suppose their wife speaks at a very high volume, such that direct perception of the world would hear their wife as very loud, but the version of reality that their brain hallucinates has their wife appear quieter than she actually is. In this case, the reality that they’re experiencing does not truly track the external world. However, this still does not seem to have any significant normative implications. The objective list theorist, if they hold that one still gains non-hedonic value from true love under this theory of consciousness but doesn’t gain non-hedonic value from true love in the experience machine, would have to suppose that there’s some threshold at which one’s hallucinations become enough like reality that one gains value from one’s objective list. This, however, seems implausible. A schizophrenic who believes their neighbor is an angel sent by god whom they’re in love with does not gain value in proportion to how realistic their view is.
To illustrate this point more, suppose that my true self is a million miles away. I am a brain in a vat, but I experience everything exactly as it would affect me if I were actually generated by the brain inside the body that I experience being part of, and the choices I make have exactly the same effects as they would if I were actually generated by the brain inside the body I experience being a part of. This case is exactly like the case of my actual self, except the thing generating my subjective experience is merely located out of my body and far away. This doesn’t seem to possess any significant normative implications.
Now suppose we accept epiphenomenalism—the view that consciousness is causally inert. Thus, while I’m still the brain in the faraway vat, my thoughts don’t cause events. Instead, it’s the brain waves in the body that I feel like I’m part of that generate both my thoughts and the actions that the body takes. Surely epiphenomenalism wouldn’t undercut the value of objective list fulfillment. However, in conjunction these make it so that I’m a discombobulated brain millions of miles away that’s merely experiencing what life would be like inside my body, but that is causally unconnected from my body. The objective list theory holds that being a causally inert consciousness is not enough to undercut the value of love, but that being a causally inert consciousness a galaxy away, whose experiences are projections of information sent by observing brain waves, does undercut the value of true love. This is deeply implausible.
It seems like if we accept
1 Having the true source of our consciousness be far away is morally irrelevant
2 Having the true source of our consciousness be causally inert is morally irrelevant
We’d also accept that having the true source of our consciousness be far away and causally inert is morally irrelevant. However, this undercuts the view that it is morally bad to switch a person out for a robot copy of themself in a world comprised entirely of robots, where they don’t realize they’re interacting with non-sentient robots, their loved ones in the real world don’t realize they’ve been replaced, and the total composition of their experiences does not change. This case is fundamentally identical to the aforementioned case. In both cases their consciousness does not causally interact with the world, but they receive information as if it did. Additionally, in both cases the source of their consciousness is very far away and receives inputs identical to the information they’d receive in the real world, without truly receiving information from the real world. The only difference between the two cases is that in the robot world the information is being transferred by robots, while in the other world it’s being transferred by faster-than-light technology that projects information to their distant brain.
9 It seems that objective list theories would have to hold that, if one had the ability to cause the imaginary friends of four-year-olds to exist, it would be extremely good to do so, even if the overall composition of the children’s mental states did not change, because it would make the relationships the four-year-olds had with them morally good. This holds even though the stipulated imaginary friends would disappear the moment the small children grow out of them. However, this is implausible—it does not seem like a tragedy that the imaginary friends small children interact with do not exist.
3
This premise took a very long time to defend—future posts will argue for the other premises. Can’t have all of the defenses of the premises be in this post. We’ll also see in future posts that there are many independent plausible ways of deriving utilitarianism. This serves as very good evidence that utilitarianism is true. One rarely finds several different arguments for a false conclusion, each with super plausible premises. When arguments converge on a conclusion, that conclusion is likely to be true. This was a long one, but thanks everyone for sticking it out. The road is long and twisted, but it leads inescapably to utilitarianism.
Utilitarianism wins outright part 4
Continuing to defend the syllogism laid out in part 3
“really penetrating criticism, especially in ethics, requires a patient effort of sympathy which Mr Bradley has never learned to make, and a tranquillity of temper which he seems incapable of maintaining.”
“[The book] seems smashing, but he loses by being over-controversial. There should be at least an affectation of fairness in a damaging attack of this kind.”
—Henry Sidgwick
Welcome back. Where we left off, I was defending the premises of the following syllogism.
1 A rational egoist is defined as someone who does only what produces the most good for themselves
2 A rational egoist would do only what produces the most happiness for themselves
3 Therefore only happiness is good (for selves who are rational egoists)
4 The types of things that are good for selves who are rational egoists are also good for selves who are not rational egoists unless they have unique benefits that only apply to rational egoists
5 Happiness does not have unique benefits that only apply to rational egoists
6 Therefore only happiness is good for selves who are or are not rational egoists
7 All selves either are or are not rational egoists
8 Therefore, only happiness is good for selves
9 Something is good, if and only if it is good for selves
10 Therefore only happiness is good
11 We should maximize good
12 Therefore, we should maximize only happiness
Premises one and two have been defended. Onward.
3 (I obviously know that I skipped one and two in this article—it’s a defense of premise three so I have it labeled 3)
Premise 3 says “Therefore only happiness is good (for selves who are rational egoists).” This follows deductively from 1 and 2.
4
Premise 4 says the types of things that are good for selves who are rational egoists are also good for selves who are not rational egoists, unless they have unique benefits that only apply to rational egoists. This is trivial. “Better” is a synonym of “more good,” so the premise essentially says that the types of things that are good for rational egoists are also good for non rational egoists unless they are more good for rational egoists. We merely stipulate that rational egoists have a particular motive—being a non rational egoist does not confer intrinsic value on other, non hedonic things.
5
Premise five says happiness does not have unique benefits that only apply to rational egoists. This premise is deeply intuitive. It makes no difference to our judgement of the joy of friendship, soup, or enlightenment whether we are rational egoists.
6
Premise six says therefore only happiness is good for selves who are or are not rational egoists. This follows from the previous premises.
7
Premise seven says all selves either are or are not rational egoists. This premise is trivial.
8
Premise eight says therefore, only happiness is good for selves. This follows from the previous premises.
9
Premise nine says something is good if and only if it is good for selves.
This claim is hard to deny. It seems hard to imagine something being good, but being good for literally no one. If things can be good while being good for no one, one would have to accept several difficult entailments.
1 A universe with no life could have moral value, given that things can be good or bad while being good or bad for no one. The person who denies this could claim that things that are good must relate to people in some way, despite not being directly good for people, yet this would be ad hoc.
2 If something could be bad while being bad for no one, then a world where everyone is happy and prosperous, but which contains vast quantities of things that are bad for no one yet bad nonetheless, could be worse than galaxies full of people experiencing horrific suffering for no one’s benefit. For example, suppose we take the violation of rights to be bad even if it’s bad for no one. A world where everyone violated everyone else’s rights unfathomable numbers of times, in ways that harm literally no one, but where everyone prospers, could then be morally worse than a world in which everyone endures the most horrific forms of agony imaginable.
3 Those who deny this principle usually do so not on the basis of the principle sounding implausible, but on the basis of the principle denying other things they think matter—generally either desert or rights. I’ve argued those things don’t matter. If you’ve read my full series responding to Huemer then you don’t need to read this part, because the objections were addressed in parts 2, 6, and 7.
10
Premise 10 says Therefore only happiness is good. It follows from the previous premises.
11
Premise 11 says we should maximize good. First, this can be supported through the following argument:
1 If something is good this gives us a reason to pursue it
2 The most good thing gives us the most reason to pursue it
3 We should pursue what we have the most reason to pursue
Therefore, we should pursue the most good thing.
Second, this is deeply intuitive. When considering two options, it is better to make two people happy than one, because it is more good than merely making one person happy. “Better” is a synonym of “more good,” so if an action produces more good things, it is better that it is done.
If there were other considerations that counted against doing things that were good, those considerations would be bad, and thus would still relate to considerations of goodness.
Third, as Parfit argues in On What Matters, the thing that makes things go best is the same as the thing that everyone could rationally consent to and that no person could reasonably reject. Parfit made this argument in the context of rules, but it applies equally to acts.
Fourth, an impartial observer should hope for the best state of the world to come into being. However, it seems clear that an impartial observer should not hope for people to act wrongly. Therefore, the right action should bring about the best world.
Fifth, as Yetter Chappell has argued, agency should be a force for good. Giving a perfectly moral agent control over whether some action happens shouldn’t make the world worse. In the trolley problem, for example, the world would be better if the switch flipped as a result of random chance, divorced from human action. However, if it is wrong to flip the switch, then giving a perfectly moral person control over whether the switch flips would make the world actively worse. Additionally, it would be better for a perfectly moral person to have a muscle spasm which results in the switch flipping than to have total control over their actions. It shouldn’t be better, from the point of view of the universe, for perfectly benevolent agents to have muscle spasms resulting in them taking actions that would have been wrong if they’d taken them voluntarily.
The other objections can be found in part 2 of my response to Huemer.
12
Premise 12 says therefore, we should maximize only happiness. This follows from the previous premises.
Conclusion
Each of these premises is very hard to deny. In conjunction, they justify utilitarianism. As we saw in the last post, our odds on utilitarianism were 60 to 19. I think the existence of arguments this persuasive for utilitarianism, reasoning purely from first principles, is more likely if utilitarianism is correct by a factor of 5 to 1—so we move our odds to 300 to 19. This will obviously depend on the plausibility of the premises. That is up to you to decide, dear reader. However, there are plenty more plausible arguments for utilitarianism to come, so stay tuned. I’m not the first one to derive utilitarianism from plausible first principles, and this is not the only way I’ve been able to derive it.
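For readers who want the update spelled out, here is the odds-form Bayes calculation behind those numbers (a minimal sketch: the 5-to-1 Bayes factor is just my estimate from above, with U for utilitarianism and E for the existence of these arguments):

```latex
\underbrace{\frac{P(U)}{P(\neg U)}}_{\text{prior odds}}
\times
\underbrace{\frac{P(E \mid U)}{P(E \mid \neg U)}}_{\text{Bayes factor}}
= \frac{60}{19} \times \frac{5}{1} = \frac{300}{19},
\qquad \text{so} \qquad
P(U \mid E) = \frac{300}{300 + 19} \approx 0.94.
```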
But unlike other ethical systems that people attempt to derive a priori, utilitarianism has plausible principles and doesn’t rely on equivocation. Additionally, reflection about specific cases verifies the general principles. Let these principles together settle the debate once and for all.
I’ve read timid utilitarians describe utilitarianism as a plausible contender—just as good as other theories, yet too hastily dismissed. No! Utilitarianism is not just as good as other theories. It is better. It is better like disbelief in homeopathy is better than belief in homeopathy. It is better like atheism is better than theism.
Those timid utilitarians often suggest that their papers are intending to level the playing field—establish utilitarianism as on par with other theories. This is far too modest. The leveling of the playing field should not be the kind that produces parity, but rather the type that produces utter destruction, the type that produces a razing of the opposing theories. Alternative theories should be leveled, reduced to rubble, placed upon the dustbin of history, considered as ludicrous as the theory before quantum mechanics that said that atoms are just bouncing billiard balls. Let the playing field be leveled the way Magnus Carlsen would level a 1300.
Yetter Chappell is right to be baffled by the number of non-consequentialists. Other views don’t make any sense. They’re just fundamentally mistaken about what matters. Naming 5 unintuitive results of utilitarianism is not sufficient to justify dismissing a theory with this much support, especially when there’s always a powerful argument for the utilitarian conclusion in those thought experiments. The arc of moral theorizing is long, but it bends towards utilitarianism.
Utilitarianism wins outright part 5
Giving more arguments for utilitarianism
“And God feels intellectually inadequate compared to John von Neumann.”
—Scott Alexander
John was famous for his thought experiment in which one is equally likely to be any member of society. He argued that ethics has to be impartial—it’s about what we’d do if we were equally likely to be everyone, while acting as a rational agent. As a result of this and other work, he was one of the most revered thinkers of his time, with numerous citations.
I’m of course referring to John Harsanyi. I know, not Rawls, who had his seminal idea long after Harsanyi did, before becoming famous by reasoning badly about what one would do from behind the veil of ignorance. Harsanyi’s argument is as follows.
Ethics should be impartial—it should be a realm of rational choice that would be undertaken if one was making decisions for a group, but was equally likely to be any member of the group. This seems to capture what we mean by ethics. If a person does what benefits themselves merely because they don’t care about others, that wouldn’t be an ethical view, for it wouldn’t be impartial.
So, when making ethical decisions one should act as they would if they had an equal chance of being any of the affected parties. Additionally, every member of the group should be VNM (von Neumann–Morgenstern) rational. This means that their preferences should have the following four features.
Completeness. When choosing between A and B they should either prefer A, prefer B, or be indifferent between A and B. They have to be able to compare any two situations and decide whether one is better, or whether they’re equal.
Transitivity. If they prefer A to B and B to C, they should prefer A to C. Likewise, if they’re indifferent between A and B and indifferent between B and C, they should be indifferent between A and C.
Continuity. If A≥B≥C, then there’s some probability P such that a gamble giving C with probability P and A with probability (1−P) is exactly as good as B. To illustrate, if I prefer bananas to apples and apples to oranges, then there’s some probability, let’s say .5, for which I’d judge certainty of apples to be just as good as a .5 probability of bananas and a .5 probability of oranges.
Independence. For any probability P strictly between 0 and 1, B≥A if and only if P(B) + (1−P)(C) ≥ P(A) + (1−P)(C). Basically, if B is better than or equal to A, then some probability of B combined with some probability of C will be better than or equal to the same probability of A combined with the same probability of C. To illustrate once more, if I prefer bananas to apples, then (using the probability of .5) I’d prefer a .5 chance of bananas and a .5 chance of a car to a .5 chance of apples and a .5 chance of a car.
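For those who like the notation compact, here is the standard textbook formalization of the four axioms (this is generic VNM notation, not anything specific to Harsanyi: ≽ is weak preference and ∼ is indifference, where the text above writes ≥ for both):

```latex
\textbf{Completeness: } A \succeq B \;\text{ or }\; B \succeq A \\
\textbf{Transitivity: } (A \succeq B \,\wedge\, B \succeq C) \Rightarrow A \succeq C \\
\textbf{Continuity: } A \succeq B \succeq C \Rightarrow \exists\, p \in [0,1] :\; pA + (1-p)C \sim B \\
\textbf{Independence: } B \succeq A \Leftrightarrow pB + (1-p)C \succeq pA + (1-p)C \;\;\text{for all } p \in (0,1]
```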
These combine to form a utility function, which represents the choice worthiness of states of affairs. For this utility function, it has to be the case that a one half chance of 2 utility is equally good to certainty of 1 utility. 2 utility is just defined as the amount of utility that’s sufficiently good for a 50% chance of it to be just as good as certainty of 1 utility.
All of these axioms pose major problems if they’re denied—problems involving Dutch book arguments. It would be pretty complex to explain why, so let’s just accept these axioms of rational decision making for now. Well, if all the individuals in the group should be VNM rational when making decisions, the group as a whole should similarly be VNM rational. If everyone in the group prefers apples to bananas and bananas to oranges, then the group as a whole should prefer apples to bananas and bananas to oranges.
So now as a rational decision maker you’re trying to make decisions for the group, knowing that you’re equally likely to be each member of the group. What decision making procedure should you use to satisfy the axioms? Harsanyi showed that only utilitarianism can satisfy the axioms.
Let’s illustrate this with an example. Suppose you’re deciding whether to take an action that gives 1 person 2 utility or 2 people 1 utility. The above axioms show that you should be indifferent between them. You’re just as likely to be each of the two people, so from your perspective it’s equivalent to a choice between a 1/2 chance of 2 utility and certainty of 1 utility. We saw before that those are equally valuable: a 1/2 chance of 2 utility is by definition equally good to certainty of 1 utility, since 2 utility is just the amount of utility for which a 1/2 chance of it is as good as certainty of 1 utility. So we can’t just go the Rawlsian route and try to privilege those who are worst off. That is bad math!! The probability theory is crystal clear.
Now let’s say that you’re deciding whether to kill one to save five, and assume that each of the 6 people will have 5 utility. Well, from the perspective of everyone, all of whom have to be impartial, the choice is obvious. A 5/6 chance of 5 utility is better than a 1/6 chance of 5 utility. It is better by a factor of five. These axioms combined with impartiality leave no room for rights, virtue, or anything else that’s not utility function based.
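Here is a minimal sketch of the expected-value bookkeeping behind both examples. The utility numbers are the ones from the text; everything else (the function name, the uniform lottery over positions) is just my illustration of the setup, not Harsanyi’s notation:

```python
# Behind the veil: you are equally likely to occupy each person's position,
# so an option's value is the mean of the utilities it assigns.
def veil_value(utilities):
    return sum(utilities) / len(utilities)

# Example 1: give one of two people 2 utility, or give both people 1 utility.
option_a = [2, 0]   # one person gets 2, the other gets nothing
option_b = [1, 1]   # both get 1
assert veil_value(option_a) == veil_value(option_b) == 1.0  # indifferent

# Example 2: kill one to save five; each of the 6 people gets 5 utility if alive.
do_nothing  = [0, 0, 0, 0, 0, 5]   # the five on the track die
flip_switch = [5, 5, 5, 5, 5, 0]   # the one on the side track dies
print(veil_value(flip_switch) / veil_value(do_nothing))  # 5.0: better by a factor of five
```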
This argument leaves unspecified what things should factor into an individual’s utility function. It could be the case that a person’s utility function corresponds with how many marbles they collect (though the person who invented that utility function would have clearly lost their marbles). This just shows that morality must be the same as universal egoism—it must represent what one would do if they lived everyone’s life and maximized the good things experienced throughout all of the lives. You cannot discount certain people, nor can you care about agent centered side constraints. You cannot care about retributivism—if you had an equal chance of being the murderer, you wouldn’t support decreasing expected utility by n/p, where n is the amount of utility the murderer would experience absent retribution and p is the total number of people. Additionally, I’ve provided compelling arguments as to what the utility function should be in other articles.
Those who are impartial and rational must be utilitarians. We now have two independent derivations of this principle. And there are many more to come. For those still on the fence: how likely is it that an incorrect theory would get the right answers historically, win on theoretical virtues, and be derivable from two independent sets of plausible axioms, one deduced by a Nobel laureate?
It’s certainly possible, but I wouldn’t bet on it.
Utilitarianism wins outright part 6
A few more ways of deriving utilitarianism
0 Why we modern utilitarians don’t use Bentham’s argument for utilitarianism
Answer: Because it was garbage—here it is.
1
(I have not read Hare, so this might be a bit off. He was also an accursed non-cognitivist, so…)
Hare’s basic argument was something like this. Morality must be universalizable by definition. If a person makes a moral claim that only benefits themselves in an unprincipled way, that wouldn’t count as morality. So morality has to apply generally. Well, if morality applies generally, then we have to weight everyone’s interests equally and maximize generally what selfish people maximize for themselves. Sort of like the earlier argument I presented. Thus, Hare thinks that morality is about acting as one would if they experienced all possible experiences. This seems to adequately fit the concept of both universalizability and morality. If Jeffrey Dahmer lived the lives of his victims, he wouldn’t kill them. Hare argued that this resulted in preference maximization. I think Hare was wrong—I’ve argued previously that we have reason to maximize happiness. However, Hare was on the right track about universalizability.
2
I’m just going to rip off Katarzyna de Lazari-Radek and Peter Singer:
“Sidgwick finds three principles that meet these requirements.
• Justice requires us to treat similar cases alike, or as Sidgwick puts it: ‘… whatever action any of us judges to be right for himself, he implicitly judges to be right for all similar persons in similar circumstances’.
• Prudence tells us that we ought to have ‘impartial concern for all parts of our conscious life’, which means giving equal consideration to all moments of our own existence. We may discount the future because it is uncertain, but ‘Hereafter as such is to be regarded neither less nor more than Now.’
• Benevolence, like prudence, considers the good of the whole, rather than of a mere part, but in this case it is not our own good, but universal good. Hence, Sidgwick says, the principle of benevolence requires us to treat ‘the good of any other individual as much as his own, except in so far as he judges it to be less, when impartially viewed, or less certainly knowable or attainable by him’. This principle of benevolence is, for Sidgwick, the basis for utilitarianism, although for the principle to lead to hedonistic utilitarianism, we still need an argument saying that pleasure or happiness, and nothing else, is intrinsically good.”
Each of these principles seems very plausible and is argued for extensively here.
3 Yet another way of deriving utilitarianism
Suppose you accept
1) Hedonism, which says that the happiness that one experiences during their life determines how well their life goes for them.
2) Anti egalitarianism, which says that the distribution of happiness is irrelevant.
3) Pareto Optimality, which says that we should take actions that are better for some, but worse for none, regardless of consent.
These axioms are sufficient to derive utilitarianism. The action that maximizes happiness could be made Pareto Optimal by redistributing the gains. Anti egalitarianism says that redistributing the gains has no morally significant effect on the situation. If Pareto improvements should be taken, and the utilitarian action is morally indistinct from Pareto improvements, utilitarian actions should be taken.
This can be illustrated with the trolley problem. In the trolley problem it would be possible to make flipping the switch Pareto optimal by redistributing the gains. If all of the people on the track gave half of the happiness they’d experience over the course of their lives to the person on the other side of the track, flipping the switch would be Pareto optimal. In that case, everyone would be better off. The person on the other side of the track would have 2.5 times the good experience that they would otherwise have had, and the other people would each have half a life’s worth of good experience more than they would otherwise have had. Thus, if all the axioms are defensible, we must be utilitarians.
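A quick sanity check on those numbers. This is a toy model on my part: I measure everyone in “lifetimes of happiness,” with 1 for living out a normal life and 0 for dying now; nothing here beyond the 2.5 and 0.5 figures comes from the argument itself:

```python
# Baseline: don't flip. The five on the main track die (0 each);
# the one on the side track lives a normal life (1).
dont_flip = {"side_track_person": 1.0,
             **{f"track_person_{i}": 0.0 for i in range(1, 6)}}

# Flip and redistribute: the five survivors each transfer half a lifetime
# of happiness to the one who dies, who therefore ends up with 5 * 0.5 = 2.5.
flip = {"side_track_person": 5 * 0.5,
        **{f"track_person_{i}": 1.0 - 0.5 for i in range(1, 6)}}

# Pareto check: every single person is better off after flipping + redistributing.
assert all(flip[p] > dont_flip[p] for p in dont_flip)
print(flip)  # side track person: 2.5 (vs 1.0); each of the five: 0.5 (vs 0.0)
```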
Hedonism was defended here.
Anti egalitarianism can be defended in a few ways. The first supporting argument (Huemer 2003) can be paraphrased (and modified slightly) in the following way.
Consider two worlds. In world 1, one person has 100 units of utility for 50 years and then 50 units of utility for the following 50 years; a second person has 50 units of utility for the first 50 years but 100 units of utility for the next 50 years. In world 2, both people have 75 units of utility for all of their lives. These two worlds are clearly equally good: everyone has the same total amount of utility. Morally, in world 1, the first 50 years is just as good as the last 50 years—in both, one person has 100 units of utility and the other person has 50. Thus the value of world 1 equals two times the value of the first 50 years of world 1. World 1 is just as good as world 2, so the first 50 years of world 1 are half as good as world 2. The first 50 years of world 2 are likewise half as good as the total value of world 2. Thus the first half of world 1, with greater inequality of utility, is just as good as the first half of world 2, with perfect equality but the same total utility. This proves that the distribution of utility doesn’t matter. This argument is decisive and is defended at great length by Huemer.
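Written out as equations (this is just my restatement of the step structure above, with V(·) for value and per-person totals of 100×50 + 50×50 = 7,500 in each world):

```latex
V(W_1) = 2\,V(\text{first half of } W_1)
\quad \text{(the two halves of } W_1 \text{ are symmetric)} \\
V(W_1) = V(W_2)
\quad \text{(equal totals: } 7{,}500 \text{ per person in each world)} \\
\Rightarrow\; V(\text{first half of } W_1) = \tfrac{1}{2}\,V(W_2) = V(\text{first half of } W_2)
```

So the unequal half-world (100 vs 50) comes out exactly as good as the equal half-world (75 vs 75) with the same total.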
Another argument can be deployed for non egalitarianism based on the difficulty of finding a viable method for valuing equality. If the value of one’s utility depends on equality, this runs into the spirits objection: it implies that if there were many causally inert spirits living awful lives, this would affect the relative value of giving people happiness. If there were one non-spirit person alive, this would imply that the value of granting them a desirable experience was diminished by the existence of spirits they could not affect. This is not plausible; causally inert entities have no relevance to the value of desirable mental states.
This also runs into the Pareto objection: to the extent that inequality is bad by itself, a world with 1 million people with utility of six could plausibly be better than a world with 999,999 people with a utility of six and one person with a utility of 28, given the vast inequality of the latter—even though the latter is better for one person and worse for no one.
Rawls’ formulation doesn’t work either; if we are only supposed to do what benefits the worst off, then we should neglect everyone’s interests except those who are horrific victims of the worst forms of torture imaginable. This would imply that we should bring to zero the quality of life of all people who live pretty good lives if that would marginally improve the quality of life of the worst off human.
Rawls’ defense of this rule doesn’t work. As Harsanyi showed, we would be utilitarian from behind the veil of ignorance. This is because the level of utility referred to as 2 utility is, by definition, the amount of utility which is just as good as a 50% chance of 4 utility. Thus, from behind the veil of ignorance we would necessarily value a ½ chance of 4 utility as equal to 2 utility, and always prefer it to certainty of 1.999 utility.
Rawls attempts to avoid this problem by supposing that we don’t know how many people are part of each class. However, it is not clear why we would add this assumption. The point of the veil is to make us impartial, but provide us with all other relevant information. To the extent that we are not provided information about how many people are part of each social class that is because Rawls is trying to stack the deck in favor of his principle. Simple mathematics dictates that we don’t do that.
An additional objection can be given to the egalitarian view. If an action makes people who are well off much better off and people who are not well off slightly better off, on egalitarian accounts this action is in some sense bad. Because it increases inequality, it does something bad (though this might be outweighed by other good things). However, this is implausible. Making everyone better off by differing amounts is not partially bad.
We have several reasons to distrust our egalitarian intuitions.
First, egalitarianism relates to politics, and politics makes us irrational. As Kahan et al. (2017) showed, greater knowledge of math made people less likely to solve politicized math problems.
Second, equality is instrumentally valuable according to utilitarianism; money given to poor people produces greater utility given declining marginal utility. It is thus easy to explain our support for equality as a utilitarian heuristic, and heuristics often make our moral judgments unreliable. It is not surprising that we would come to care intrinsically about something that is instrumentally valuable and whose pursuit is a good heuristic. We have similar reactions in similar cases.
Third, given the difficulty of calculating utility we might have our judgement clouded by our inability to precisely quantify utility.
Fourth, equality is very often valuable (an egalitarian distribution of cookies, money, or homes produces greater utility than an inegalitarian one), so our judgement may be clouded by our comparison of utility to other things. Most things have declining marginal utility. Utility, however, does not.
Fifth, we may have irrational risk aversion that leads us to prefer a more equal distribution.
Sixth, we may be subject to anchoring bias, with the egalitarian starting point as the anchor.
Several more arguments can be provided against egalitarianism. First is the iteration objection. Egalitarianism says that the importance of further increases in one’s happiness depends on how much happiness one has had previously. So if we found out that half of all people had once had a dream giving them unfathomable amounts of happiness that they have no memory of, their happiness would thereby become less important, even though the dream left no trace on their waking lives.
The egalitarian could object that the only thing that matters is happiness one remembers. However, this runs into a problem. Presumably, to an egalitarian what matters is total happiness that one experiences rather than average happiness; it would be strange to say that a person dying of cancer with months to live is less entitled to happiness than a person with greater average happiness but less lifetime total happiness. But if this is true, then increasing the happiness of one’s dream self is dramatically more important than increasing the happiness of one’s waking self. To the extent that they’ll forget their dream self, their dream self is a very badly off entity, very deserving of happiness. It would be similarly strange to prioritize helping dementia patients with no memory of most of their lives based on how well off they were during the periods of their life which they can no longer recall.
A second argument can be called the torture argument. Suppose that a person has been brutally tortured in ways more brutal than any other human such that they are the worst off human in history by orders of magnitude. From an egalitarian perspective, their happiness would be dramatically more important than that of others given how poorly off they are. If this is true, then if we set their suffering to be great enough, it would be justified for them to torture others for fun.
A third argument can be called the non-prioritization objection. Surely any view which says that the happiness of poorly off people matters infinitely more than the happiness of well off people is false; if it were true, it would imply that sufficiently well off people could be brutally tortured to make poorly off people only marginally better off. Thus, the egalitarian merely draws the line at a different point in terms of how much benefit to a poorly off person outweighs improving the happiness of a well-off person. If this is true, non egalitarianism ceases to have uniquely counterintuitive implications. The intuitive appeal of “bringing one person with utility one down to zero utility in order to bring another person with utility 1 to 2 is morally wrong” dissipates when alternative theories merely endorse “bringing one person with utility one down to zero utility in order to bring another person with utility 1 to 5 (it could be more than 5; 5 is just an example) is morally wrong.” At that point utilitarians and egalitarians are merely haggling over the degree of the tradeoff.
We can now turn to the Pareto optimality premise which says we should take actions if they increase the happiness of some but don’t decrease the happiness of any others. This principle is deeply intuitive and widely accepted. It’s hard to imagine something being bad while making some people better off and no people worse off.
One might object to the Pareto principle with the consent principle, which says that an act is wrong if it violates consent even if it is better for some and worse for none. However, this runs into questions of what constitutes consent. For example, a person throwing a surprise party violates the consent of the person for whom the party is being thrown. Yet a surprise party is obviously not morally wrong if it makes everyone better off.
Similarly, if a pollutant was being released into the air that went into people’s lungs without consent, it would seem that would be bad only if it were harmful. One might argue that we should only take Pareto optimal actions that don’t violate rights, yet views that privilege rights have already been discussed. Additionally, the areas where consent seems to matter are precisely those where one who does not consent can be seriously harmed. Consent to marriage is valuable because nonconsensual marriage would obviously be harmful. Yet to the extent that one is not harmed, it’s hard to imagine why their consent matters.
An additional objection can be given to rights based views. Suppose that someone is unconscious and needs to be rushed to the hospital. It seems clear that they should be rushed to the hospital. Ordinarily, transporting someone without consent is seen to be wrong. However, in a case like the one just stipulated, it increases happiness and thus is morally permissible.
Lots and lots of plausible arguments can be made for utilitarianism. This would be surprising if it were false. If you were purely judging different philosophical theories prior to case specific reflection, utilitarianism would blow other theories out of the water. Utilitarianism, unlike other theories, is derived from plausible first principles. It doesn’t require rapid patching up to accommodate for specific intuitions about cases.
I think these arguments are super decisive. Not only does utilitarianism seem very intuitive at first—it’s supported by 5 independent sets of plausible principles. Prior to going into the specific thought experiments, our credence in utilitarianism should be super high. Soon we’ll see, however, that the case only gets stronger when we consider specific thought experiments.
This is supported by all of the theoretical virtues, overwhelming historical evidence, proofs based on axioms of rational reasoning combined with impartiality from a Nobel prize winning economist, and three other fully independent sets of axioms. This is already enough evidence to trump most moral intuitions—even a large conjunction of intuitions pointing the other way.
However, as we’ll see, every single time that our intuitions diverge from utilitarianism they can be independently proven wrong. We already did that with Huemer’s arguments. Dozens more thought experiments will fall to the blade of utilitarian reasoning1. Yog-Sothoth swims slowly, but he swims towards Bentham2.
It’s a blade, okay. I had to try to make the tedious process of writing hundreds of pages about particular random thought experiments sound impressive, and blades are impressive. Also, like, pretend the blade is held by a dragon or something to make it extra impressive.
This is a modified version of a quote by a pretty odious individual, who’s not worth mentioning.
Utilitarianism wins outright part 8
Parfit's folly
0
“Even if Adam and Eve were leading fantastic lives in the Garden of Eden, the world was not perfect.”
We know that language used to describe moral questions affects how people view them. However, the relationship also goes the other way—a conclusion is rarely dubbed repugnant, or labeled with some other term of abuse if it’s not counterintuitive. Yet one conclusion in particular is given the title “the repugnant conclusion,” because people find it so deeply counterintuitive.
To his credit, Michael Huemer accepts the repugnant conclusion. Being an intuitionist, Huemer attempts to weave together all his many idiosyncratic intuitive judgements into one grand web of correct beliefs. This process has led Huemer to throw out the rejection of the repugnant conclusion, on the grounds that it can’t be reconciled with other reasonable principles. He is right to do so.
The so called repugnant conclusion has a long and storied history. Parfit thought of it originally, concluding that it’s an absurd implication of total utilitarian views. However, Parfit was troubled by this, because he realized that rejecting it required rejecting some other very plausible moral principles.
This article will show that the so called repugnant conclusion isn’t repugnant at all. Accepting it is a demand of rationality. The repugnant conclusion is the rational extension of reasoning properly about tradeoffs. The ethical reasoning of the average person is full of nonsense, polluting accurate judgements. Judgements like belief in the act-omission distinction, rights, and the prioritization of torture avoidance over dust specks spread like a cancer throughout the edifice of moral beliefs. The rejection of the repugnant conclusion is by itself bad enough, a totally wrong ethical judgement.
Yet it’s far more pernicious. Those who reject the rc, when confronted with the smorgasbord of plausible ethical principles that lead to the rc, start holding other insane moral beliefs. Not content with merely causing one bad judgement, the rejection of the repugnant conclusion begins splitting rapidly, spreading throughout the body, affecting the lungs, heart, and skin. Some allow their rejection of the rc to reach stage 3 status—risking the total collapse of the ethical system.
This article is intended to provide the cure to the rapidly spreading disease of rejecting the repugnant conclusion. Like chemo, this cure may produce some negative side effects, leaving people deeply shaken. Yet it is necessary if we want to survive as a species and avoid a truly popular and pernicious moral failure mode. The repugnant conclusion, once given as a knockdown argument against utilitarian reasoning, is now widely conceived as a difficult puzzle to solve. Avoiding it is impossible if we want to retain other reasonable ethical beliefs about the world.
My proof
So what is this so called repugnant conclusion? It says that, by the lights of utilitarianism, there is necessarily some number of people with lives barely worth living (10^40, let’s say) whose existence would make the world better than trillions of people living great lives. There is a number of people whose lives consist of getting a backrub and then disappearing who possess more moral worth than quadrillions of people living unimaginably good lives. Many people find this counterintuitive.
How can we argue for accepting the repugnant conclusion? Well, we can take a similar approach to the one taken in the previous section. Suppose that we have one person who is extremely happy all of the time, and suppose that they live 1000 years. Surely it would be better to create 100 people with great lives who live 999 years than one person who lives 1000 years. We can now repeat the process: 100,000 people living 998 years would surely be better than 100 living 999 blissful years. Once we get down to one day, and some ungodly large number of people (10^100, for example), we can go down to hours and minutes. In order to deny the repugnant conclusion one would have to argue for something even more counterintuitive, namely that there’s some firm cutoff. Suppose we say that the firm cutoff is at 1 hour of enjoyment. One would then have to say that infinitely many people having 59 minutes of enjoyment matters far less morally than one person having a 1 in a billion chance of having an hour of enjoyment.
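To see how quickly the iteration runs away, here is a toy model of the sequence. The modeling choices are mine, not part of the original argument: I multiply the population by 1000 at each step while shaving a year off each life, and score each world by total happy life-years:

```python
# Toy model of the iterated trade: each step trades a year of lifespan
# for a 1000x larger population of equally happy people.
population, years = 1, 1000
for step in range(5):
    print(f"step {step}: {population:>16,} people x {years} years "
          f"= {population * years:,} happy life-years")
    population *= 1000   # vastly more people...
    years -= 1           # ...each living a slightly shorter life

# Each step multiplies total happy life-years by roughly 999x, so if every
# step is an improvement, the vast short-lived populations quickly dominate.
```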
Another argument
Another argument can be made for the conclusion. Most of us would agree that one very happy person existing would be worse than 7 billion barely happy people existing. If we just compare those states of the universe iterated 1 trillion times, we conclude that 7×10^21 people with barely happy lives matter more morally than 1 trillion people with great lives. To deny this, one could claim that there is some moral significance to the population size, such that the moral picture changes when we iterate it a trillion times. Yet this seems extremely counterintuitive. Suppose we were to discover that there were large numbers of happy aliens that we can’t interact with. It would be strange for that to change our conclusions about population ethics. The morality of bringing about new people with varying levels of happiness should not be contingent on causally inert aliens.
Huemer’s proof
Huemer has given another argument for the repugnant conclusion. To quote him, suppose we accept
“The Benign Addition Principle: If worlds x and y are so related that x would be the result of increasing the well-being of everyone in y by some amount and adding some new people with worthwhile lives, then x is better than y with respect to utility.
Non-anti-egalitarianism: If x and y have the same population, but x has a higher average utility, a higher total utility, and a more equal distribution of utility than y, then x is better than y with respect to utility.
Transitivity: If x is better than y with respect to utility and y is better than z with respect to utility, then x is better than z with respect to utility.”
we must accept the repugnant conclusion. Huemer goes on to explain why these necessitate RC, writing “To see how these principles necessitate the Repugnant Conclusion, consider three possible worlds (figure 1):
World A: One million very happy people (welfare level 100).
World A+: The same one million people, slightly happier (welfare level 101), plus 99 million new people with lives barely worth living (welfare level 1).
World Z: The same 100 million people as in A+, but all with lives slightly better than the worse-off group in A+ (welfare level 3).
A+ is better than A by the Benign Addition Principle, since A+ could be produced by adding one unit to the utility of everyone in A and adding some more lives that are (slightly) worthwhile. Z is better than A+ by Non-anti-egalitarianism, since Z could be produced by equalising the welfare levels of everyone in A+ and then adding one unit to everyone's utility. Therefore, by Transitivity, Z is better than A. Analogous arguments can be constructed in which world Z has arbitrarily small advantages in total utility; as long as Z has even slightly greater total utility than A, we can construct an appropriate version of A+ that can be used to show that Z is better than A. This suggests that we should embrace not only (RC), but the logically stronger Total Utility Principle: For any possible worlds x and y, x is better than y with respect to utility if and only if the total utility of x is greater than the total utility of y.”
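It’s worth checking Huemer’s numbers directly. A quick sketch (the populations and welfare levels are his; the code is mine):

```python
# Huemer's three worlds as (population, welfare level) groups.
A      = [(1_000_000, 100)]
A_plus = [(1_000_000, 101), (99_000_000, 1)]
Z      = [(100_000_000, 3)]

def total(world):
    return sum(n * w for n, w in world)

def average(world):
    return total(world) / sum(n for n, _ in world)

print(total(A), total(A_plus), total(Z))   # 100M, 200M, 300M
print(average(A_plus), average(Z))         # 2.0 vs 3.0
# A+ beats A by Benign Addition (everyone in A is better off, plus extra
# worthwhile lives); Z beats A+ by Non-anti-egalitarianism (same population,
# higher total, higher average, perfect equality); so Z beats A by Transitivity.
```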
Huemer goes on to defend each premise at length. Each premise is supported by extremely compelling arguments, which I won’t rehash.
Arrhenius’ proof
Arrhenius has his own proof that the repugnant conclusion must be accepted if we accept certain other very reasonable axioms. Let’s look at the axioms.
A) The Dominance Principle: If population A contains the same number of people as population B, and every person in A has higher welfare than any person in B, then A is better than B
This is obviously correct. If we have two worlds and everyone is better off in world A than world B then A is better than B. Duh.
B) The Addition Principle: If it is bad to add a number of people, all with welfare lower than the original people, then it is at least as bad to add a greater number of people, all with even lower welfare than the original people.
This is also obvious. If it’s bad to bring a person with negative 4 utility into existence then it’s more bad to bring ten people with negative 8 utility into existence. Also duh.
C) The Non-Anti-Egalitarianism Principle: A population with perfect equality is better than a population with the same number of people, inequality, and lower average (and thus lower total) welfare.
This is also obvious. A population of everyone with 10 utility is better than a population with more inequality and average utility of 5.
D) The Minimal Non-Extreme Priority Principle: There is a number n such that an addition of n people with very high welfare and a single person with slightly negative welfare is at least as good as an addition of the same number of people but with very low positive welfare.
E) The rejection of the sadistic conclusion. When adding people without affecting the original people's welfare, it can’t be better to add people with negative welfare rather than positive welfare.
These axioms, in conjunction, require that we accept the repugnant conclusion.
Other views just suck
Huemer gives his response to six rival views. I’m going to give my own response to those, while drawing heavily on Huemer. What are the rival views?
A) The average utility view which says that we should maximize average utility
This one has lots of problems.
1 If we think that there’s even a .001% chance that we’re a brain in a vat, we should basically be egoists, because in expectation increases in our welfare increase average utility by a lot. If there’s a .001% chance that I’m a brain in a vat, then increasing my utility from 50 to 100 would be better than increasing the well-being of tens of thousands of people (see the sketch after this list).
2 It’s bizarre and ad hoc. It has the strange implication that, if two people in different possible worlds are both in caves, totally isolated from the rest of the world, but one of them exists in a world with a trillion people and the other exists in a world with a billion people, it’s much more important to increase the utility of the second person, because doing so raises average utility more. It also implies that, if there were lots of causally inert spirits floating around all the time with utility of zero, that would make it much less important to make people’s lives better.
3 It would say that if everyone in the world has positive utility of 10!!!, it would be actively bad to bring people into the world with positive utility of 10!!. This is not plausible. Bringing people into existence whose positive experience dwarfs all good experiences in the history of the world is a good thing, even if there are lots of other even happier people.
4 It would say that if the average utility was negative 10!!! it would be morally good to bring lots of people with utility negative 10!! into existence. This is deeply implausible. Bringing people into existence with no good experiences who experience more misery than the sum total of all suffering that has been experienced in the history of the world is bad, even if other people have worse lives.
5 It violates the independence of non-interactionist agents criterion, which says that agents who can’t interact with each other or causally affect each other at all should not affect the goodness of each other’s actions. Even if there were lots of aliens with positive utility a lightyear away—that shouldn’t affect the desirability of making people’s lives better.
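Here is the sketch promised in point 1, a toy expected-value calculation. The .001% figure is from the text; the world population of 8 billion and the “+1 utility for 10,000 people” benchmark are filler numbers of my own:

```python
# Averagism plus a tiny brain-in-a-vat credence makes egoism win in expectation.
p_vat = 0.00001            # .001% chance I'm the only mind (a brain in a vat)
world_pop = 8_000_000_000  # assumed population if I'm not in a vat

# Option 1: boost my own utility by 50.
# In the vat world I am the whole population, so the average rises by 50.
gain_self = p_vat * 50 + (1 - p_vat) * (50 / world_pop)

# Option 2: boost 10,000 other people's utility by 1 each.
# If I'm in a vat they don't exist, so the gain only occurs otherwise.
gain_others = (1 - p_vat) * (10_000 / world_pop)

print(gain_self, gain_others, gain_self / gain_others)
# ~5e-4 vs ~1.25e-6: helping myself raises expected average utility ~400x more.
```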
B) Critical level views say that happiness below a certain threshold is not morally good. For example, if one has utility of below 10, that is not morally good.
This view has problems of its own. The critical level view holder can think either that bringing into existence people with utility below the threshold (with utility 8, for example, below a threshold of 10) is morally neutral or that it is actively bad. Each option runs into problems.
If they say that bringing people into existence with utility below the critical threshold is morally neutral, then if one could bring a million people into existence with either utility 9 or utility zero, they should flip a coin—both options are equally good. This is not plausible.
There’s also a problem relating to the sharp jump between the threshold and the area below the threshold. If the threshold is at 10, these views have to hold that bringing lots of people into existence with utility 9.999999999999999999999999999 is morally neutral, but bringing a person into existence with utility ten is good. This isn’t at all plausible. If each minute of existence for a hypothetical new being contains 1 unit of utility, then this view would hold that creating beings that are happy for 10 minutes before disappearing is good, but bringing beings into existence that are happy for 9 minutes and 59 seconds is morally neutral. This is not at all plausible.
They might also hold the view that bringing people with utility below the threshold into existence is actively bad. This view runs into several problems.
1 It would say that, if the threshold is 10 again, a world with lots of people with utility of 9.9999999999 would be worse than a world in which a small number of people are being brutally tortured. Enough bad things add up to be very bad.
2 It violates the natural presumption that bringing people into existence with good lives is good.
3 It would say that creating a world with googolplex people all of whom have utility 8 would be the worst act in history by a wide margin. This is implausible.
C) Narveson thinks that world A is only better than world B if it’s better for someone. This runs into lots of problems.
1 Suppose we can either bring 100,000 people into existence with mediocre lives or 100,000 different people into existence with great lives. Narveson’s view would imply those are equal, because neither is better for anyone.
2 On this view it would be fine to bring people into existence with miserable lives because they wouldn’t have otherwise existed. Also, as Huemer points out, it would hold that it would be good to bring millions of people with terrible lives into existence if it would make currently existing people’s lives better, because it would be better for some, but the miserable people wouldn’t have existed otherwise.
D) Variable-Value theories say that bringing a new life into existence has diminishing marginal utility. The first person with utility of 5 is more valuable than the 101st.
These views are also terribly flawed.
1 They imply that the value of bringing a person into existence is largely contingent on how many other people exist. This is not plausible. As Huemer notes, Parfit observed that “research in Egyptology cannot be relevant to our decision whether to have children.” Yet these views imply that the number of Egyptians determines how valuable it is to have children, because new people have declining marginal utility.
2 These views imply that if there were lots of people with really terrible lives, with utility of negative 10!!!, it would be good to bring new people into existence with utility negative 10!!. These views are intended to behave like total views (which say we should maximize total utility) for small populations, while behaving like average views for large populations. However, this means that for large populations, all the problems that afflict average utilitarian views still apply.
E) We could adopt perfectionist views, as Parfit did, according to which some things are so great that they are categorically more valuable than other, smaller goods. I recall Parfit saying that no number of lizards basking in the sun can be as good as the experience of true love.
This view runs into a similar objection to the one discussed in the previous post. Surely, true love is not infinitely more valuable than love that’s an iota less intense. Surely that’s not infinitely more valuable than love that’s an iota less intense than that. We can keep replacing one instance of immense love with lots of instances of slightly less valuable love, until we conclude that lots of 1 minute romantic flings are as valuable as one instance of true love. Surely, millions of lizards enjoying basking in the sun can be as good as a 1 minute romantic fling. This also runs afoul of non-anti-egalitarianism. Surely a world where one person was truly in love but millions of others lived mediocre lives would be worse than one where lots of people had slightly less true love, but higher average and total utility—and greater equality.
F) We could hold the view that some intense pleasures are lexically better than less intense pleasures. This runs into the same issues discussed regarding perfectionism. If each pleasure is not infinitely better than another pleasure that’s slightly less good than it, then some large number of trivial pleasures have more total goodness than a small number of very intense pleasures.
Biases
This demonstrates that our anti repugnant conclusion intuitions fall prey to a series of biases.
1 We are biased, having won the existence jackpot. A non-existing person who could have lived a marginally worthwhile life would perhaps take a different view.
2 We have a bias towards roughly similar numbers of people to the numbers who exist today.
3 Humans are bad at conceptualizing large numbers. We saw that in the previous post.
Conclusion
If you’re not convinced yet, read Huemer’s article. It’s one of the most compelling philosophy papers that I’ve ever read1. To avoid simply rehashing Huemer’s paper, I haven’t been able to go as deeply into the arguments as Huemer did. However, it should be clear by now that rejecting the repugnant conclusion has enormous costs. I’ve kept this post briefer than many of my posts, because the points that are worth making have either been made by the previous post or by Huemer.
So let’s rename the repugnant conclusion. A proper understanding of it reveals that it should be called the nice and pleasant conclusion.
So let’s recap the cumulative case for utilitarianism. Utilitarianism is supported by 5 independent sets of axiomatic principles, does better historically, blows other theories out of the water in terms of all of the theoretical virtues, and has conclusions that have been independently borne out in ten independent cases, all of which are used as arguments against utilitarianism. This case is already overwhelming. But the case for utilitarianism is just getting started. There is far more to come.
Huemer’s writing is at its best when he’s agreeing with utilitarianism.
Utilitarianism wins outright part 9
A brief methodological clarification and then some notes on DPT (dual process theory)
1
The bible says lots of things that seem revolting at first.
Leviticus 21:9 “And the daughter of any priest, if she profane herself by playing the whore, she profaneth her father: she shall be burnt with fire.”
Leviticus 24:16 (NIV) “Anyone who blasphemes the name of the Lord is to be put to death. The entire assembly must stone them.”
Second Kings 2:23-25 NIV Elisha went up to Bethel. As he was walking along the road, some boys came out of the town and jeered at him. "Get out of here, baldy!" they said. "Get out of here, baldy!" He turned around, looked at them and called down a curse on them in the name of the Lord. Then two bears came out of the woods and mauled forty-two of the boys.
Numbers 31:17-18 Now therefore, kill every male among the little ones, and kill every woman who has known man intimately. But all the girls who have not known man intimately, spare for yourselves.
It also has lots of things that seem contradictory.
“… I have seen God face to face, and my life is preserved.” — Genesis 32:30
“No man hath seen God at any time…”– John 1:18
“… the earth abideth for ever.” — Ecclesiastes 1:4
“… the elements shall melt with fervent heat, the earth also and the works that are therein shall be burned up.” — 2 Peter 3:10
However, Christians hold that the bible is divinely inspired, that the author of the bible is the greatest author in history, fully infallible and literally omniscient. If this were true, what would we expect? Well, we’d expect the bible to be brilliant, fully devoid of errors. We’d expect that every time the bible says some seemingly horrific thing, it turns out to be a piece of deep wisdom, that’s merely difficult to appreciate.
I have neither the time nor the desire to delve into the defenses given by Christians of each of the aforementioned passages. I tend not to think they’re compelling at all. However, for those who think that Christianity is correct, I’ve long believed that they should use this as an argument for Christianity. Christianity makes the prediction that the bible should be a brilliant document, either fully or nearly fully infallible. Every seeming error should be wise if properly conceived. So if it turned out that the bible had no errors, that all the contradictions relied on misunderstandings and the horrors on foolish readings or misunderstandings, that would be very good evidence for Christianity. Biblical brilliance is shocking on atheism, yet a straightforward prediction of theism.
However, Christians don’t tend to present this as a positive argument. The reason seems to be because the passages generally aren’t defensible. They’re certainly not sufficiently defensible to provide evidence of biblical brilliance.
I, unlike Christians, tend to think the moral judgements espoused by my preferred theory are defensible on independent grounds. If utilitarianism were the correct ethical system, we’d expect its injunctions to be independently justified, even when they don’t seem to be at first. This is a straightforward prediction of utilitarianism being correct. So to test utilitarianism, I see if its injunctions are, in fact, independently defensible. Every single time, they’re found to be defensible.
Much like in the case of Christianity, this provides strong evidence for my theory. Showing utilitarianism’s infallibility across a wide range of cases strongly supports utilitarianism.
This isn’t just intended to rebut the common arguments against utilitarianism. Instead, it argues that the failure of those counterarguments supports utilitarianism. Utilitarianism turning out to be right, even where it seems counterintuitive at first, would be very likely if utilitarianism were correct, but very unlikely if it weren’t correct.
Retrodictions are much easier than predictions. It’s far easier to claim that your theory best explains a phenomenon after that phenomenon has been discovered. Utilitarianism is able to, unlike all other ethical theories that don’t rely at least partially on utilitarianism’s judgements, make correct predictions about which ethical judgements will turn out to be justified upon reflection. So far, I’ve argued that 10 of utilitarianism’s judgements are all justified, largely on independent grounds. There are dozens of other cases where utilitarianism’s counterintuitive judgements turn out to be justified.
2
What else does utilitarianism accurately predict? Well, it accurately predicts the results of the dual process theory. The dual process theory predicts that more careful, reflective judgements will be more likely to lead to utilitarian conclusions. If utilitarianism were correct, we’d expect greater reflection by smarter people to point towards utilitarianism. This turns out to be the case. Let’s examine the evidence.
One 2012 study finds that asking people to think more makes them more utilitarian. When they have less time to think, they conversely become less utilitarian. If reasoning led to utilitarianism, this is what we’d expect: more time to reason would make people proportionately more utilitarian.
A 2021 study, compiling the largest available dataset, concluded across 8 different studies that greater reasoning ability is correlated with being more utilitarian. The dorsolateral prefrontal cortex’s length correlates with general reasoning ability. Its length also correlates with being more utilitarian. Coincidence? I think not1.
Yet another study finds that being under greater cognitive pressure makes people less utilitarian. This is exactly what we’d predict. Much like being under cognitive strain makes people less likely to solve math problems correctly, it also makes them less likely to solve moral questions correctly, where “correctly” means answering in the utilitarian way.
Yet the data doesn’t stop there. A 2014 study found a few interesting things. It looked at patients with damaged VMPCs—a brain region responsible for lots of emotional judgements. It concluded that they were far more utilitarian than the general population. This is exactly what we’d predict if utilitarian judgements were caused by good reasoning and careful reflection, and alternative theories were caused by emotions. Inducing positive emotions in people likewise makes them more utilitarian—which is what we’d expect if negative emotions were driving people not to accept utilitarian results.
This theory is reasonably contested—however, I’ve done about a semester’s worth of research on it for one of my classes. The evidence overwhelmingly favors the utilitarian results. The contrary evidence almost always serves merely to show that utilitarian judgements are correlated with lack of empathy and with sociopathy. This is true, but doesn’t undercut the DPT. Utilitarian judgements can be caused either by lack of empathy or by greater reflection.
This study argues that drinking lots of alcohol makes people more utilitarian. This is true (probably), however, alcohol reduces emotional inhibition, which explains the results.
This study says that thinking about math makes people more utilitarian, but reasoning about other things can make people more utilitarian or more deontological. This study is weird and is some evidence against the theory, but is not decisive. It’s also consistent with utilitarianism being correct. Reasoning can make people more consistent, but it can make their views more wrong if they attempt to form a coherent web with their false beliefs.
The last piece of data that’s worth discussing is this study, which compared the utilitarianism of people with ADHD to that of people without it. It concluded that there wasn’t substantial variance, which is surprising on the DPT. If the DPT were true, it seems it would predict that more impulsive people would be less utilitarian. A few things are worth noting.
1 This study found that people with ADHD were slightly less utilitarian, just not as much as the DPT would predict. It’s still some evidence.
2 The sample size was pretty small—only 50 people with ADHD and 134 people without ADHD.
3 People with ADHD differ in other respects from people without ADHD. As this makes clear, ADHD is correlated with lots of other things, which could undermine the reliability of the data.
People with ADHD are evil narcissists 2
As the header suggests, having ADHD is correlated with lots of things which serve as confounders. People with ADHD are more likely to have personality disorders. ADHD seems to maybe have similar effects to psychopathy and to correlate with reduced empathy. All of these serve as possible confounders, which make the data inadequate to establish that the conclusion is very likely.
Joshua Greene has written extensively about the DPT and is perhaps its most ardent defender. Overall, the large amount of data favoring the DPT serves as good evidence for it, and similarly for utilitarianism.
If we consider every other realm for which we can be reasonably certain of the results, the correct results seem correlated with greater reflection. Thinking more should be expected to correlate with getting the right answer.
The popular conservative quip that “facts don’t care about your feelings,” is surprisingly appropriate in the case of normative ethics. Emotions lead people to begin babbling, invoking strange, mysterious entities like rights. Entities that float around doing nothing, until sufficient conditions are met, before a rights violation appears from the ether, making the whole state of affairs bad. Better reasoning leads towards truth, away from deontology and virtue ethics.
Reason imposes ethical demands. Unsurprisingly, better reasoners heed them.
And he disappeared (Descartes joke)
(The header is dark humor; people with ADHD are usually wonderful people. I happen to have ADHD and I’m pretty dang awesome).
Utilitarianism wins outright part 10
Solving a lot of allegedly difficult moral quandaries
“There may only be a small number of laws, which are self-consistent and which lead to complicated beings like ourselves. … And even if there is only one unique set of possible laws, it is only a set of equations. What is it that breathes fire into the equations and makes a universe for them to govern? Is the ultimate unified theory so compelling that it brings about its own existence?”
— Stephen Hawking
There are many allegedly unsolved problems in ethics—ones which the correct theory ought to be able to solve, much the way that a unified theory of the economy ought to be able to predict recessions, and a unified theory of physics to predict quantum mechanical oddities. Easily solving all of them would be a nice benefit to a theory, for the correct one should be able to solve them all.
It is said that rationality consists of telling you how, rather than what, to think. Well screw that. This post is telling you what to think on lots of ethical problems. It also tells you how to think a little bit, by serving as more support for utilitarianism.
Ethical questions have the lovely property of being solvable a priori, for those who invent the thought experiments do us all the courtesy of specifying the utility of each action. So here goes. Let’s solve all of ethics.
1 What should you do in the trolley problem? The answer is obvious: you should flip the switch. I’ve already argued that doctors should kill people and harvest their organs in hypothetical cases in which they have metaphysical certainty that doing so would save five lives and have no other negative effects. The problem is no different. In the future, I might write more defending this pro-switch-flipping view. But for now, I merely intend to show that utilitarianism easily solves all ethical questions, as long as there’s adequate information given.
2 Should you eat meat? No, animals probably can suffer and eating meat inflicts vast amounts of suffering on them.
3 Is it morally good to bring people with good lives into existence? Yes, they experience lots of happiness.
4 Is taxation bad because it’s theft? No, taxation may technically be theft in a semantic sense, but theft tends to have bad consequences. If taxation has good consequences, then the bad things about theft don’t apply to it. Incidentally, many years ago, it was thinking about the argument that taxation is theft that caused me to invent utilitarianism—which had been very rudely preemptively plagiarized by philosophers dating back thousands of years.
5 What determines if something is a right? Well, things should be conceived of as rights if doing so increases happiness. This seems to match all of the rights that we care about. The right to life makes things go best.
6 If we expect the future to be good, should reducing risks of extinction be our top priority? Yes, extinction prevents lots of good experience.
7 Should you donate lots to charity? Yes—you can save a life for only a few thousand dollars if you give to the Malaria Consortium or reduce existential threats.
8 What should you do in footbridge? Push the guy off—it saves the most people. Duh.
9 Is genetic engineering permissible? Yes, it makes people’s lives better.
10 Is abortion fine? Depends on difficult empirical questions.
11 Are human challenge trials fine? Yes, they make people’s lives better.
Utilitarianism wins outright part 11
Anti utilitarians say the darnedest things
0
Don’t be misled, dear reader, by the detour from going through thought experiments used against utilitarianism and blowing them to smithereens. That series of posts will continue soon. However, for this post, like the last few, I have a different aim. Here, I’ll see what objections people give to utilitarianism in popular articles and argue against them1.
1
Psychology Today has produced an argument against utilitarianism. They think it’s unjust because it would advocate framing an innocent person to save lots of lives. I’ve already responded to this argument here. Saying that utilitarianism ignores justice begs the question against a utilitarian account of justice, for which I’ve extensively argued. I’ve also argued against rights as a whole here.
2
The Markkula Center for Applied Ethics has an article arguing against utilitarianism, demonstrating the poor quality of their applied ethics. Based on the quality of the objections, I wouldn’t trust them to apply my ethics.
Their first objection is that it’s hard to calculate utility, to weigh a human life against art. That’s true, but tradeoffs are inevitable and we can make best guesses. Probably all the art in the world is collectively more valuable than a human life, but one human life has more value than the average piece of art. The fact that a moral theory is sometimes difficult to apply doesn’t count against it. Ethics is often difficult; we don’t know the utility of everything. It’s no different from epistemic rationality. No one objects to rationality based on it being hard. Also, absent precise expected values, one leaves oneself open to Dutch book arguments.
The next objection is the classic inane, tired objection that utilitarianism holds that if bad things were conducive to utility then they’d be good. BUT THOSE THINGS ARE BAD!!! The example given is apartheid. The author says that people said apartheid was good for overall quality of life and that if this is true then utilitarianism would justify it. Fair enough. Apartheid is bad because it makes people’s lives worse. Opposition to apartheid shouldn’t be caused by a strange aversion to sullying oneself—if it really made people’s lives better, then it would be good. The people who think apartheid had good outcomes are wrong. This is no more an objection to utilitarianism than it would be an objection to rationality that it would justify believing the earth was flat if the evidence pointed that way.
3
This blog presents 13 considerations favoring and disfavoring utilitarianism. All of the considerations, including the pro-utilitarianism ones, are wrong—an impressive achievement. They start out by defining utilitarianism as supporting the greatest happiness for the greatest number—which is a bogus definition. Utilitarianism tries to maximize overall happiness (or technically happiness minus suffering). If something produces slight benefits for two people but major losses for one, utilitarianism wouldn’t justify it.
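To make the aggregation concrete, here’s a minimal sketch in Python (the numbers are my own illustration, not anything from the blog under review):

```python
# Utilitarianism scores an act by summing happiness minus suffering across
# everyone affected, not by counting how many people come out ahead.
def net_utility(welfare_changes):
    # Each entry is one person's change in welfare; negative means suffering.
    return sum(welfare_changes)

# Hypothetical case from above: two people gain slightly, one loses a lot.
print(net_utility([+1, +1, -5]))  # -3: a majority benefits, yet the act is rejected
```

A “greatest number” definition would count two beneficiaries against one victim and approve the act; summing welfare instead yields the verdict the theory actually delivers.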
The first argument for utilitarianism is “1. We get to focus on happiness as a society.” This is just what utilitarianism is; it’s not much of an advantage to the theory unless you already accept utilitarianism.
They say “2. It teaches us that harming other people is wrong.”
Other theories tend to agree.
“3. Utilitarianism is an easy theory to implement.”
This is irrelevant to its correctness and is also false—utilitarianism is sometimes pretty difficult to apply.
“4. It is a secular system that focuses on humanity.”
Lots of other theories have the same focus. Utilitarianism focuses on all sentient beings, not just humans.
“5. Utilitarianism seeks to create the highest good.”
I agree here—it’s intuitive that one should do what is the most good.
“6. It focuses on the democratic process for forward movement.”
Lots of other systems do too—utilitarianism also does but only if democracy is optimific.
“7. We get to focus on an objective, universal solution.”
Other theories also have this; also, utilitarians don’t have to believe in objective morality.
Now they get to the cons of it. I’d urge you all to make sure to be far away from tables right now, lest you bash your head on a table in irritation at the incredibly low quality of these arguments.
“1. We do not consider any other element besides happiness.”
This is true, but it’s not a downside unless you think other things matter. The closest they come to arguing that other things do matter is the following.
“Love is something which offers tremendous value, but it also can cause extraordinary heartbreak. We eat foods like kale because we know that it is a healthier choice then eating Twinkies every day even if we preferred to do the latter because it makes us feel better.”
Well, love generally increases long term happiness as does kale. Twinkies probably don’t. If people are in love but aren’t happy, most would agree that it’s a bad relationship.
“2. It creates an unrealistic perspective for society.
Imagine the scenario: there are eight people right now who would benefit from having your organs. When looking at the principles of utilitarianism, the balance of happiness over harm supports the idea of putting you to death to improve the satisfaction of everyone else. Why?
Because you are not in that majority.
If there are eight other people and you to make nine, the chances are that everyone else will vote to save themselves as a group at the expense of your life. That is why the ends can never justify the means. It makes it too easy for the majority of people in a society to create harm using the balancing principle. The happiness of the minority must also come into consideration.”
I’ve previously argued that you should kill one to save 8, and similarly sacrifice one person to save 8. Anything else privileges those who happen to benefit from the status quo. I think most people agree that it would be moral to sacrifice yourself to save 8. The happiness of the minority does come into consideration under utilitarianism—it weights everyone equally. Surely people shouldn’t get extra consideration merely because they’re in the minority. It would be right to save 5 people rather than just 1.
“3. Utilitarianism can be unpredictable.”
So can rationality. Life is unpredictable sometimes.
“4. It also relies on people making consistent decisions.
If there is one thing that humans are good at doing, it is changing their mind. You cannot trust anyone to ask for the greater good if the majority decides to get rid of all of the other structures which support societal health and wellness. The average person will act selfishly whenever they are faced with a difficult decision, no matter what their upbringing or spirituality happens to be.
Why do people follow religions in the first place? The goal of “being saved” isn’t to initially help anyone else find some level of eternal salvation. It is to create a life insurance policy for an unpredictable future because that is what offers comfort to the soul. This process would happen immediately if society shifted to utilitarianism.”
What? What process? This is just incoherent. Utilitarianism says people should be consistent, not that they will be. It doesn’t deny the obvious fact of selfishness.
“5. Utilitarianism relies on multiple definitions of happiness.
Every person has a different definition of happiness. Although we can find common ground on specific things, it is virtually impossible to see two people with cloned perspectives about the world today. Humans are complex beings. What makes one person happy can make another individual feel bored or out of touch with their life. That means we are faced with two choices: we could either find common ground within our experiences to compromise on a definition of happiness or only allow the description of the majority to exist.”
Utilitarianism says we should maximize good experience (at least hedonic utilitarianism does, which is what I defend and what the utilitarians he criticizes do). There’s one definition that’s instantiated in different ways in different people. Some people get happiness from certain foods. Others don’t. This isn’t different definitions.
The last part is particularly baffling. Oh no, utilitarianism allows the description of the majority to exist? Utilitarianism says we should maximize overall happiness, not just maximize the things that make the majority happy.
“6. It creates the potential for the majority to rule through tyranny.
People who self-identify as being an evangelical in the United States do not support the idea of same-gender marriage at a level of 67%. Although younger evangelicals support the idea as a majority at 53%, the significant population of older adults skews the overall percentages. Now imagine that laws were being created based on the concept of utilitarianism in this population group. If you identified as an LGBTQIA+ individual in this society, you would be unable to get married. There would be nothing you could do about it either until enough people were swayed to come over to your position.
Just because the majority of a population believes something is right does not make it the ethical choice. That thought process kept women from voting for centuries, permitted slavery to exist, allows for child trafficking and exploitation still today, and many more activities that harm others in some way. That is why happiness cannot be permitted to be the foundation of societal pursuits. Sometimes the correct choice is not the popular choice.”
Tyranny of the majority doesn’t maximize overall happiness. Minorities are harmed more by oppression than majorities benefit by it. This is where the confusion at the beginning of the article sends the author astray. Utilitarianism doesn’t just benefit the majority—it maximizes overall benefit. Torturing one person to make two people slightly happy wouldn’t be justified because it wouldn’t increase the overall balance of experience.
4
Not to be surpassed in foolishness by the green garage blog, Gaynor writes an article talking about why utilitarianism is “an awful ethical system, with almost no merit.” The objections given, however, are awful ethical objections, with almost no merit.
Gaynor says “The first problem with utilitarianism is that it is conditional. Utilitarianism is a teleological system that says, “seek to maximize utility”, different thinkers have put for different answers to what that that utility is. This makes utilitarianism a conditional system, it only applies so long as the actor agrees with the identified activity, or property that provides utility. If one seeks to maximize pleasure, as Bentham suggests, that’s fine, except if I don’t want to maximize pleasure the entire system is useless to me. This is a major problem, as an ethical system shouldn’t be entirely contingent on an assumption, that happiness is the correct thing to attempt to maximize. David Hume calls this the is ought-problem.”
Ah yes, the notorious is-ought problem. The author seems unaware that the is-ought problem afflicts all ethical theories—it’s a problem of logical deduction, of getting a normative conclusion from purely descriptive premises. This isn’t an objection to utilitarianism any more than it is to literally any other ethical theory ever.
The author objects that utilitarianism only applies if the actor agrees with it. This is either false or trivial. If the claim is that utilitarianism makes no demands on non utilitarians—that would be false. It’s a theory of what things are good and what things are bad. It applies to non utilitarians just as much as others.
However, if the objection is that utilitarian arguments won’t motivate non utilitarians, that’s obvious but true of all ethical theories. Deontology won’t motivate non deontologists. Furthermore, I argue that rationality requires people to be utilitarians, so it’s binding as much as rational belief is. Sure, people with irrational beliefs won’t care about standards for rational beliefs. However, if they don’t care about them they’re being foolish.
Quick note on the is-ought problem: it’s not a very good objection. It does show that we can’t prove objective morality deductively. However, we also can’t deductively prove induction—you can’t straightforwardly derive a “will be” from an “is” any more than an “ought” from an “is.” Despite that, we still make predictions about things…
Moral non-naturalists would agree that you can’t cross the is-ought gap but just say that there are non-natural moral properties that make things matter. These are not trivial tautologies; rather, they’re controversial substantive claims. The is-ought gap just rules out trivial tautologous moralities.
Moral naturalists think that morality IS a natural property of some sort, so you can derive an ought from an is. Either way, Hume’s guillotine is not compelling, except against confused libertarians trying to derive all of ethics from definitions and first principles with logically necessary and uncontroversial premises.
Gaynor continues, saying, “The second, arguably larger issue, is that it’s impossible to apply for two reasons. Because utilitarianism attempts to maximize something we must have a way to quantify it, or at least compare two different items to see which is greater. Except how does one quantify pleasure or pain? Bentham proposes a “pleasure calculus” based on 7 attributes of pleasures or pains, but this is really just moving the goal post, how do you compare the intensity of pleasure, of the fecundity of pain? These are impossible, John Stuart Mill suggests there are two types of pleasures, higher and lower, but this is just a further attempt to both ignore the impossibility of comparing pleasures and pains as well create artificial distinctions, grounded not in reason, but in individual intuition. If we can’t tell which actions are better, we can’t actually make any decisions from our ethical system.
The other issue in the application of utilitarianism is that, even if we could compare pleasures and pains, they’re often impossible to predict in advance, or even years later. For example, was the accident at Three Mile Island good or bad? It obviously had devastating effects, but it also was a catalyst for changing nuclear power policy in the US, and even now, 30 years later, we probably can’t say whether the benefits in safety policy outweigh the obvious costs.
Because utilitarianism is both logically unsound (it relies on an unproven assumption) and impossibly to realistically implement it is a bad ethical system. I have no understanding of how people try to follow a utilitarian ideology in light of these indisputable flaws. In a future post I’ll cover my issues with some deontological ethical systems.”
The fact that we can’t quantify things precisely doesn’t mean that we’re totally in the dark about how to maximize them. Despite my inability to quantify happiness I’m pretty sure that if I banged my head against a table with great force it wouldn’t be conducive to happiness. Similarly, saving a life would be conducive to happiness. In chess, it’s impossible to precisely calculate exactly how the game will play out if you take a particular move, but it’s still possible to make good moves that you’re reasonably certain will turn out well.
We can’t predict things with perfect accuracy years in advance, but we can be reasonably confident. Three Mile Island was probably bad because it caused lots of damage and also resulted in bad nuclear policy. Saving lives is probably good, even if we’re slightly uncertain. One calculates pleasure and pain impersonally the same way one does in one’s own life: consider what each of the mental states is like, and see which of them would be preferred. Eating ice cream causes pleasure, but it would be outweighed by brutal torture. We can make similar judgements when deciding what things will be optimific.
Finally, they say that utilitarianism is logically unsound, and therefore false, because it relies on an unproven premise. Well, as per the Münchhausen trilemma, any chain of justification has to bottom out at some point. We have to rely on some axioms to form beliefs. Utilitarianism’s judgements are no different from literally all other domains in this respect. Mathematics itself relies on axioms. The fact that pain is bad is about as plausible as mathematical axioms.
The rest of this blog looks pretty interesting, but this article is just awful and misses the point on everything.
5
This well named blog has some objections which are fine—better than some of the other ones we’ve seen so far.
“(1) Perhaps pleasure is not the ultimate good: perhaps there are other goods, such as beauty, justice, truth, loyalty, duty, and so on, that can trump pleasure in some cases.”
All of those things are generally conducive to utility. Things that would otherwise seem just aren’t just if they don’t make lives better. It would be senseless to punish people if it didn’t produce benefits, given the arguments against retributivism. Beauty is only good if it’s appreciated; we wouldn’t care about a random beautiful unobserved galaxy.
Maybe you disagree? Well, if a beautiful unobserved galaxy is good, then enough beautiful unobserved galaxies would be good enough to outweigh any bad things. If this were the case, the destruction of a googol beautiful unobserved galaxies would be worse than the holocaust. Also, beauty is not discovered but invented. The reason we find humans beautiful but blobfish ugly is because we evolved to feel that way. Blobfish find each other very beautiful.
Truth, loyalty, and duty are also only good if they’re conducive to utility. If one was loyal to their friend and helped them cover up a murder, that would be bad. We value the truth about the optimal cure for malaria to a great degree. We don’t, however, value a cure for a fictional disease. This is because truths are only good if they’re generally conducive to utility.
The truth of the number of stars in the universe isn’t valuable. However, if knowing the number of stars in the universe allowed us to save the world, it would be. The value of truth is entirely contingent on the well-being it brings. This is the only way to explain why the aforementioned virtues wouldn’t matter if there were no sentient beings.
Duty is also only valuable if it’s optimific. The Nazi excuse that they were just following orders holds no weight, because their alleged duty was evil.
“(2) It does indeed seem wise to calculate consequences to some degree to discern if an action is right or wrong. But are we really prepared to accept that motives and acts have no moral value in themselves until we look at the consequences? Isn’t the act of rape intrinsically wrong…indeed instrincially evil?”
Rape is wrong, yes, but because it’s not conducive to utility. Acts and motives do have moral value because they’re the only way to infer things about what people will do in the future. Hitler’s grandmother was not blameworthy and shouldn’t have been punished, because there was no way to predict that her actions would turn out badly. It’s impossible to deter actions with unpredictable consequences. Bergman explains this principle super well.
“(3) Moreover, can we really calculate consequences to a sufficient degree of accuracy, especially in situations like war where there are an astonishing number of variables to take into consideration as we do the pleasure calculus?”
I already replied to this objection—no need to rehash it.
“(4) It appears that some harms are not evil. We can think of endless examples of how people harm each other in everyday life – certain inappropriate comments, lies, breakups, and so on – in ways that are bad not evil. Thus the utilitarian view that all harm is evil seems too inclusive.”
Utilitarianism wouldn’t label all bad things evil. Evil generally refers semantically to things done by conscious agents that are very bad. Things that are a little bit bad are bad but not evil.
“(5) Moreover, utilitarianism, in focusing only on consequences as far as moral analysis goes, fails to place any value on those doing evil. All harm is evil regardless of its source. But this seems wrong: we often judge one form of harm as worse than another not simply because of the harm it causes in others but because it was voluntarily willed by a culpable agent with certain sadistic, malicious, or defiant motives. So by adding certain motives and culpable agency to extreme forms of harm we can successfully distinguish evil actions, motives, and agents from those that are just bad.”
These considerations affect how acts should be treated for reasons I explained when responding to the third argument. However, the harms are not made worse by being caused by malice—malice just affects how we should treat it. If covid were made by a psychopath in a lab, that wouldn’t make it any worse. I’d rather take a 49% chance of death from being shot by a murderer than a 50% chance of dying by being shot by accident.
“(6) A utilitarian must be prepared to accept some evil as a means to satisfying the GHP. This would allow evil to be used instrumentally, that is, to be used knowing that it is evil but allowing it to occur as a means to a greater good being established. This of course means that, paradoxically, it can be right to do evil.”
Utilitarianism holds that things that would be prima facie bad are sometimes justified. In general it would be evil to kill one person. However, you should kill one person to save the world. Utilitarianism just disagrees about the conditions for being evil. For an act to be evil, it has to have bad consequences, so killing to save the world wouldn’t qualify.
6
The Free Radical thinks that utilitarianism is stupid for very bad reasons. The main objection seems to be that utilitarianism sometimes justifies taxing people to benefit others and paternalism. They write
“Even though this definition of utility as a measure of happiness/fulfillment/satisfaction etc. is not useful scientifically, it is appealing because it seems like it reflects reality. I’m not saying it doesn’t. I agree that, in a very loose, non-scientific sense, the happiness I would gain from having an additional loaf of bread would be much less than that of a starving man who received the same loaf. And what’s more, I act like a utilitarian to some degree in my personal life. Every time someone donates food to the hungry or gives the foul ball they caught at a baseball game to the kid in the seat next to him, they are acting out of some form of this sentiment.
However, in those cases, they are still acting in accordance with their own personal preferences, it is just that those preferences have what might be called a utilitarian dimension to them. They get more happiness out of giving the bread to the hungry than out of eating it or out of seeing a child smile and imagining him lying in bed clutching the foul ball and dreaming about being a major league ball-player than they would from throwing it in their closet and probably forgetting about it.
The problem with utilitarianism is when you try to apply it by force via the government. The government I think Sumner would prefer is one which leaves people alone to make decisions about how to live their lives in most cases but just does a few things to spread the wealth around to increase total utility. And I think Sumner’s ideal world would be a lot better than what we’ve got now. But the problem is that that is not a suitable moral/philosophical foundation for such a system because this notion of utility is entirely imaginary which means it is entirely subjective which means that it can be used to justify any breach of individual property rights.”
The fact that utilitarianism disagrees with their preferred policy seems like a better argument against their policies than against utilitarianism. Both egalitarians and libertarians agree that the state should enforce a just distribution of property. Libertarians would support the state using force to make a person who took my TV give it back. Thus, the relevant question is not whether force should be used to justly distribute goods, but what the just distribution of goods is. It’s no more an objection to utilitarianism that it sometimes endorses taxes than it is an objection to libertarianism that it sometimes endorses taking people’s stuff when they’ve stolen from other people.
There are further problems though. Lots of things that we do all the time infringe on others without their consent. Lighting a candle produces smoke which goes into others’ lungs without consent. Playing music shoots soundwaves all around which hit the eardrums of people without consent. Even talking to people shoots soundwaves at them without consent. Only utilitarianism can make sense of which of these count as rights violations.
The objection that generosity involves people acting in accordance with their preferences, while state theft doesn’t, only matters if you think one should never violate consent. There are obviously cases where it makes sense to violate consent. If a person is knocked unconscious, you should rush them to the hospital even though they didn’t consent in advance. Additionally, we’re able to do things all the time without people’s consent. Looking at someone’s house doesn’t violate rights, even if they don’t consent to it. Entering people’s houses without consent, however, does. Why? Well, one right is optimific, the other one isn’t.
Our friend the Free Radical continues, writing
“This way of thinking suggests that someone has a moral right to the possessions of others by virtue of being poorer than they are. So while it is functionally possible that we could have a specific “progressive” tax system along with an entirely libertarian everything else and that would work pretty well if we had it, it would never lead to that. The same notion of forced utility maximization across individuals, once accepted, would be (and for that matter is) used to justify all manner of other government interferences in the lives of individuals.
The land your house sits on might generate more “utility” with a highway there, or for that matter a shopping mall. You will have higher utility if you save more money for retirement but you are too stupid to realize it so the government will just take some out of your paycheck and use it to pay you back later if you survive long enough (and to pay other older people in the interim). Sure, maybe you could live another year or two if you got the million-dollar treatment for your cancer but would the utility you derived from that extra year really be more than that derived by all the poor children we could feed with that (“your”) money? Yes, we’re sending you, against your will, to fight and probably die in a foreign country but your sacrifice can’t be compared to the extra utility that will be secured for future generations. It’s not that we want to tell you what to put in your body, it’s just that when (certain) drugs are legal, crime increases and you can’t possible argue that the added utility you get from smoking dope outweighs the indirect harm that legalizing it does to your neighbors. Plus we know that drugs are actually hurting you anyway, you are just, again, too stupid or weak to realize it, so we’re actually increasing your utility by taking them away too. It’s win-win! I could go on but hopefully you get the picture.
Utilitarianism, at its core, is just a made-up method of collective reasoning. This type of collective reasoning is at the heart of every usurpation of individual liberty. It is the foundation of every form of socialism, communism, fascism, etc. The only alternative to collectivism is to elevate the rights of the individual above all such notions. This means we have to be willing to look at a rich guy and a poor guy and think that it would be better if the rich guy cut back on his yachts to buy a house for the poor guy without also thinking that we aught to force him to do it. Once you start down that path, forever will it dominate your destiny.”
This person correctly identifies that utilitarianism would support non libertarian things if they had good consequences. However, there are obviously cases where it’s okay to violate rights. One should steal a penny from Jeff Bezos to prevent infinite child rape. If a person fell out of an airplane and were hanging off the edge of a building, they wouldn’t be obligated to fall to their death just because the building’s owner announces that they don’t consent to the use of their building.
All of the bad things like communism and fascism had bad outcomes. I’m not sure what it means for utilitarianism to be a “made-up” method of collective reasoning. It’s certainly a method. I don’t know how its being made up is an objection, or even what it means for it to be made up. Any society will need to make decisions. Utilitarianism, like libertarianism, just has specific judgements about what those decisions should be.
7
The New Criterion thinks that there’s something wrong with utilitarianism, and believes that if utilitarianism were our old ethical criterion, we’d need a new one :). The first objection is that utilitarianism would prescribe that it would be good to bring about utopia even if doing so required the brutal torture of one child. This is true, but doesn’t count against utilitarianism. Previous articles have argued against the idea of inalienable rights. Additionally, most people would agree that you should kill one person to save the world. If this is true, then you should probably also torture a child to save the world. Bringing about literal utopia has as much value as saving the world, so it should be similarly justified.
Hearing about the child can induce scope neglect—causing us to ignore all the goodness of the society broadly. Nearly everyone is well off in that society.
If you think that a society where one person is brutally tortured but everyone else is well off is a bad society, then you should be against our current society, for more than one person is currently being tortured. The only differences are that in the hypothetical society there’s only one person being tortured (which is a good thing, better than lots of people being tortured) and that the torture is more systematized. That, however, shouldn’t be morally relevant.
Lots of small good things can add up to produce enough goodness to outweigh one very bad thing. On a regular basis we risk being tortured when we go out to do anything fun. We’d be less likely to be brutally tortured if we never went out. This, however, would be ill advised.
Every time a person has a child they’re risking their child being brutally tortured in exchange for their child probably having a good life. It’s only moralistic bs and a systematic lack of empathy for the millions of utopia dwellers that causes most people to oppose the utopia.
The author next argues that utilitarianism disregards liberty, which I’ve already responded to several times. Finally they say “To the extent that we see our selves as moral creatures-- creatures, that is to say, whose lives are bounded by an ideal of freedom--utilitarianism presents itself as a version of nihilism. “What is the use of use?” That is one question the thoroughgoing utilitarian refuses to ask himself. Entertained in earnest, that question reveals the limits of utilitarianism. The limit is reached where morality begins, which is why a utilitarian faced with our thought experiment can only endorse what it proposes or wring his hands in mute uneasiness.”
Utilitarianism isn’t nihilism… not sure why the heck Kimball thinks this. It holds that there are good things, namely, utility. Utilitarianism doesn’t say anything about the use of use—it just says happiness is good so there should be more of it. Reason allows us to figure out the solutions to ethical questions—much the same way other theories think it does.
Why is it that when the subject of utilitarianism comes up otherwise intelligent people begin babbling incoherently and producing awful objections?
8
Content warning: very, very, very low hanging fruit
“No, I'm not going to pull a B. Williams and say it "harms your integrity" or some other stupid shit right before I shut off my ears and go running off like bezoomny.
Utilitarianism is bullshit because there's nothing is an end in itself. You can't say "making humans happy is important because it makes humans happy". Why then, can't we give any real horrorshow drugs to people and expect them to experience the highest level of happiness? The utilitarian answer would be "because it would lead to suffering later on and will kill them so it wouldn't last long". This is bullshit. Remember, happiness is just a release of endorphins in your brain, it's a meaningless evolutionary mechanism. Why then, shouldn't all humanity stop whatever the hell it's doing and focus on altering our genes so that we would experience a healthy release of endorphins at all times? It would last forever. And then utilitarians will finally see how little difference does happiness make.”
Well, utilitarianism says making people happy is good. Much like there’s no deeper reason why 1+1 equals 2, there doesn’t have to be a deeper reason why happiness is good. It just is.
Drugging people without consent doesn’t increase net happiness. It shortens their lives. Altering our genes to make us happy is, in fact, good. Happiness is produced by endorphins in the brain, but happiness is a good mental state. It would be good even if it weren’t produced by endorphins. The state of being in love is produced by chemical states but it’s still good. Saying that because happiness is produced by things that are not by themselves valuable it can’t be valuable itself is the composition fallacy. If all of humanity stopped their activities and did cocaine, that would not bring about more overall happiness.
9
Well, Christians think about ethics sometimes and when they do they often say nonsense. This article argues that utilitarianism is grossly and grotesquely immoral. It disagrees with the allegedly very moral bible, which says the following.
Leviticus 21:9 “And the daughter of any priest, if she profane herself by playing the whore, she profaneth her father: she shall be burnt with fire.”
Leviticus 24:16 New International Version (NIV) 16 anyone who blasphemes the name of the Lord is to be put to death. The entire assembly must stone them.
Second Kings 2:23-25 NIV Elisha went up to Bethel. As he was walking along the road, some boys came out of the town and jeered at him. "Get out of here, baldy!" they said. "Get out of here, baldy!" He turned around, looked at them and called down a curse on them in the name of the Lord. Then two bears came out of the woods and mauled forty-two of the boys.
Numbers 31:17-18 Now therefore, kill every male among the little ones, and kill every woman who has known man intimately. But all the girls who have not known man intimately, spare for yourselves.
Nonetheless, people who think that the bible was a pretty good book and are big fans of Jesus are calling out utilitarians based on the alleged immorality of their ethical doctrine. The objection relates to the numerical proficiency of utilitarians. This Christian says “The heuristic “do what creates the greatest happiness for the greatest number of people” encourages treating people as nonhuman numbers only. Utilitarianism involves taking a calculating attitude where human beings are just pawns in the calculations. As such, utilitarianism has a corrupting effect; encouraging us to be immoral, not moral. It is a rejection of justice, and fairness, in favor of what is temporarily useful to the many.”
I have no idea why a morality that ignores calculating would be preferable to anyone who is able to do addition. Surely if we had the option of either saving 1 person or 5 people, we should save the 5 people. Utilitarianism doesn’t reduce people to mere numbers—it uses numbers to represent people’s interests for purposes of calculating. This is reasonable—giving people presents is nice but saving lives is better. If you could only do one of them you should save lives. Numbers are just used to represent the moral importance of each. The alleged immorality, depravity, and corruption of utilitarianism is obviously question begging—it won’t be convincing unless you’re already convinced of the immorality of calculating in ethics. Then they give the reductio of framing an innocent person, which I’ve already defended here. They then say that utilitarianism justifies society killing people if hostage-takers demand it as a requirement for releasing 5 people. This is obviously false—it would encourage more hostage taking and not be optimific.
The author continues “This indicates that human reason when decoupled from (moral) intuition is an unreliable and even demonic guide.”
It’s also worth pointing out that utilitarianism doesn’t reject moral intuitions.
Next they argue that utilitarianism justifies pushing a person off a bridge to save five people. The arguments for doctors killing people and harvesting their organs also apply here—but there are more reasons to push people off bridges.
1 Imagine a modification to the scenario in which all of the people on the bridge and on the tracks are in suitcases, so that no one knows whether they’re on the tracks or on the bridge. In this case every single person wants you to push the person off the bridge, because there’s a 5/6 chance it will save their life and only a 1/6 chance it will end it (see the sketch after this list). In that case, it seems obvious that you should push the person off the bridge. Making people impartial shouldn’t make this action worse—so these cases seem morally parallel.
2 Consider another modification of the situation in which there are 5 people on the tracks. The train will kill all of them unless you push one of them forward, in which case only that person will die. In this case, it seems obvious you should push them forward—they’ll die either way, it will just save people’s lives. However, pushing someone off a bridge is no worse than pushing them forward, and the person atop the bridge has no extra moral worth than the person being pushed forward, so the cases seem parallel once again.
3 It seems obvious that the world would be better if the person fell off the bridge on his own: one person would die rather than five. Thus, deontological theories have to hold that giving a perfectly moral person control over a moral situation makes the situation worse. A third party should hope for the best state of the world, so a benevolent third party should hope that you act wrongly.
4 As Greene argues, the key factor that determines whether or not people find it intuitively acceptable to cause one to die rather than five relates to whether the force was exerted with their muscles. That’s why we find it intuitive to flip the switch in trolley but unintuitive to push the person in bridge—one is done with our muscles, the other one is not. This shows that the intuitions don’t capture a morally relevant feature.
5 Suppose that the footbridge case is duplicated with one utilitarian and one deontologist—the deontologist doesn’t push the person, the utilitarian does. A moral third party can choose to stop one of the trains. It seems obvious that they should prevent the train from killing the five, which means that it’s much more important to negate the effects of the deontologist’s action than it is to negate the effects of the utilitarian’s action.
6 People are much more utilitarian if speaking a foreign language—which also reduces their emotional reaction to the situation.
7 Imagine that you would be in the situation yourself, either on the footbridge or on the tracks. It seems obvious in that case that you’d rather they push the person—it’s only when we consider it from the point of view of a detached third party observer that this verdict changes.
8 Imagine you’re swinging on a rope such that if you do nothing you’ll push the person off the bridge and save the five people. In order to avoid pushing the person off you’d have to move your legs up. In this case inaction will result in 4 fewer people dying—but it doesn’t seem intuitively like the morality of pushing the person changes if you’re swinging on a vine. Thus, the killing versus letting die distinction doesn’t work.
9 Suppose you push the person off the bridge and then can lift them back up, causing the five people to die. In this case it seems like you shouldn’t bring them back up—undoing your action. Yet bad actions should surely be undone, meaning that this action isn’t bad.
10 Imagine a situation in which you push someone off the bridge thinking that it will stop the train. However, you find out that it won’t stop the train, and the train will now kill all six people. You have a bit of time during which you can either save the one person or the five people. In this case, if you save the one person, no one will have been killed by your actions, while if you save the five you’ll have killed one and saved five. Yet it still seems obvious that you should save the five.
11 We also know that asking people to be impartial makes them more utilitarian. Impartiality is good, right?
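To make the veil-of-ignorance reasoning in point 1 concrete, here’s a minimal sketch (the six-person setup comes from the thought experiment; the code itself is just my illustration):

```python
# Six people hidden in suitcases: 5 positions on the tracks, 1 on the bridge.
# Compute each person's chance of surviving under the two policies.
from fractions import Fraction

p_tracks = Fraction(5, 6)  # chance any given person landed on the tracks
p_bridge = Fraction(1, 6)  # chance any given person landed on the bridge

survive_if_no_push = p_bridge  # only the bridge person survives: 1/6
survive_if_push = p_tracks     # only the bridge person dies: 5/6

print(survive_if_no_push, survive_if_push)  # 1/6 vs 5/6
```

Behind the suitcase veil, each of the six maximizes their own survival odds by asking you to push, which is the sense in which impartiality favors the utilitarian verdict.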
Hic est ubi articulus concludit (this is where the article concludes)
Well friends, we’ve waded through the river of nonsense surrounding utilitarianism. All the arguments seemed to boil down to one of the following.
1 Utilitarianism has us calculate but that’s hard sometimes and also bad because… humans aren’t numbers. Also, like Hitler calculated sometimes so it must be bad.
2 Utilitarianism justifies x thing that sounds unintuitive until you reflect for like five minutes.
3 I know utilitarianism says x thing is bad but like in a counterfactual scenario where it had good consequences utilitarianism would endorse it.
Why did I make this post? Well, wading through serious phil papers gets tedious sometimes and it’s fun to blow off verbal steam by bashing terrible arguments. Also, these articles had lots of readers, so lots of people apparently find them persuasive… a fact which seems like at least as good an argument against democracy as a five minute conversation with the average voter.
Creationists will often say very stupid things about evolution, like that it holds that apes2 randomly give birth to humans sometimes. Evolutionists do not in fact believe this. There are some (slightly) sophisticated creationists who make arguments against evolution that are way better than these crappy misconceptions. However, it’s still worthwhile for popular science advocates to argue against this low hanging fruit, for it’s shockingly prevalent among creationists. I think a lot of people disagree with utilitarianism for similarly terrible reasons. So it’s similarly worth correcting idiotic misconceptions. It’s also fun for the whole family.
The rationale was also similar to Alexander’s rationale here. If utilitarianism is constantly dismissed with only a few sentences of analysis, it builds up negative affect. It becomes seen as trivially false, the type of thing not worth engaging with. If, when people think of utilitarianism, the first thing they think of is that it’s that pro framing innocent people philosophy which can’t be applied in the real world and requires literal omniscience on the part of its practitioners, that it’s the type of thing that can be dismissed in a few sentences by reasonable people, that would be bad for utilitarianism broadly.
Promoting good values is very important. Utilitarianism is the root of all of those good values that are worth promoting. So it would be great if there were more utilitarians. Hopefully this blog furthers that goal. Let human reason, which rejects some intuitions and is thus allegedly “an unreliable and even demonic guide,” spread to the ends of the earth.
Some are really funny so I’ll mock them mercilessly
Yes, I know humans are technically apes…I’m referring to the type of apes that people generally think about when people talk about apes. Stop being a smartass.
Utilitarianism wins outright part 12
Defending the future
0
“The question is not can they reason, nor can they talk, nor can they suffer, but will they be able to suffer and experience happiness.”
—Similar to something cool that Bentham said
I, like most of you, rarely encounter the trolley problem in real life. It’s been at least six months since I last had to decide whether or not to give a cookie to Ted Bundy over Mother Teresa. But alongside its many injunctions in hypothetical cases, utilitarianism has significant implications in real world cases. It is, unsurprisingly, correct in all of those cases.
But the number of cases where utilitarianism gets the right answer in the real world surpasses the number of grains of sand on a beach and the number of stars in the sky, which, contra the old testament, are different numbers. This article shall discuss just one class of cases—cases which both demonstrate the superiority of utilitarianism and show why the spread of utilitarianism is so important. It is no exaggeration to say the fate of the world hangs in the balance—and utilitarians are the most ardent advocates of pulling it back from the precipice.
The verdict here relates to considerations discussed in the repugnant conclusion post. It can be thought of as a real life version of the repugnant conclusion—but with a much more intuitive result.
Eliezer Yudkowsky once remarked that if people spent five hours a day bashing themselves with baseball bats, they’d come up with convenient excuses for why it gives value to life. He thinks death is similar—we only think crazy things like “death gives meaning to life” because death is already here. In the absence of death, no one would support bringing it into the world.
This is a partial explanation. However, people also say dumb things about phenomena that don’t exist yet—and they get praised as deep for doing so. When discussing the end of the world people start spouting crazy nonsense. If you google quotes about the end of the world, most of them are stupid.
“It's the end of the world every day, for someone.”
― Margaret Atwood, The Blind Assassin
What? The end of the world for someone? The end of the world is not “for someone.” It does not describe the end of one life. It, as the words “the world” suggest, denotes the end of the world. One person is not the world.
“End of the human race is just part of an endless life cycle.”
― Toba Beta, Master of Stupidity1
A thing’s being part of an endless life cycle doesn’t vindicate it. Getting malaria and dying is part of the life cycle, but I think it’s safe to say most of us are not big fans of malaria.
However, beyond even this inane tripe about the end of the world being neutral, or being part of some natural cycle that Very Serious People understand isn’t worth worrying about, people have moronic views about the end of the world. Utilitarianism correctly identifies that (assuming we predict the future will have positive utility) the end of the world would be very, very bad—the worst thing in history by orders of magnitude.
One reason people are opposed to the end of the world is that they think it’s bad when billions of people die. I agree—it is bad when billions die in agony and terror. Yet this is not the main reason to oppose the end of the world. The value of preventing the deaths of everyone currently alive is far outweighed by the value of humanity’s continued preservation.
1
Utilitarianism argues that we should give immense weight to the happiness of future people given the vast number of future people that are likely to exist and their immense capacity for value.
How many people could exist? Well, Bostrom estimates that with future technology there could be 10^54 years of experience, equivalent to 10^52 hundred-year lives. Those lives could be very good. Suppose you grant this thesis. Utilitarianism then prescribes that reducing existential threats should be the world’s top priority by orders of magnitude. It is not close. A 1 in a billion chance of securing 10^52 lives is still 10^43 lives in expectation. That’s a lot of people. If you tried to fit them all in a boat, you’d need a very big boat.
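To make the arithmetic explicit, here is a minimal sketch in Python (the figures are the essay’s illustrative assumptions, not precise estimates):

```python
# Expected-value arithmetic for the Bostrom figures above, taken at
# face value purely for illustration.
years_of_experience = 10 ** 54
life_length = 100                           # one "life" = a hundred-year life
lives = years_of_experience // life_length  # 10^52 hundred-year lives

p_success = 1e-9                            # a one-in-a-billion chance of securing that future
expected_lives = lives * p_success          # 10^43 lives in expectation
print(f"{expected_lives:.0e}")              # prints 1e+43
```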
In order to deny this thesis you’d have to hold that it is not morally good to bring people with good lives into existence. It is, as Narveson says, good to make people happy, but not good to make happy people. However, this view, despite its snappy slogan and superficial appeal, is disastrously wrong. Horrendously, mind-bendingly wrong. Let’s see the insane things that this entails.
Well, if there’s nothing morally good about making future happy people, then if you had a choice of either bringing someone into existence with 10!!! units of happiness or with zero, it would be morally neutral to bring the person with zero units of happiness into existence. This is an insane view. If you could make your child less happy by a factor of infinity, that is bad. These views have to accept that, given the choice, one has no reason to prefer having a happier child, assuming there’s no suffering. This is already an insane implication. It gets crazier.
Well, now let’s add suffering to the picture. Each person will be guaranteed to have 5 units of suffering. On this view, unless the happiness gains value only in virtue of the suffering, it would be morally neutral to bring about a child with 10!!! units of happiness and 5 units of suffering rather than one with 0 units of happiness and 5 units of suffering. Thus, this view has the troubling implication of making it morally neutral to press a button that would make no one who will be born in the future ever happy again. This is very implausible.
Maybe you think there’s an easy way out of this: saying that happiness can offset suffering, such that if there’s more happiness than suffering then a person’s suffering is offset. However, this has a very troubling implication, namely, that it would be morally neutral to reduce everyone who experiences more happiness than misery over the course of their life to the precise threshold at which their happiness equals their misery. If happiness only offsets misery, then happiness above the threshold needed to offset all the misery is just an optional extra, with no moral significance, that can be removed without harm. This also requires biting the bullet on the first reductio.
Perhaps you think that suffering is a prerequisite for the value of happiness. I see no reason to accept this claim, but even if it’s true, it has no significant implications here. Future people will undoubtedly have some suffering in their lives, so on this account their happiness will still be good. This also requires biting the bullet on the first reductio.
2
Presumably we’d all accept that bringing a being into existence who lives a horrific, tortured existence would be morally bad. It would be seriously morally wrong to press a button which created a child and then left them to be brutally tortured to death. A libertarian I once argued with bit the bullet here, holding that creating a spawning pit that spawns lots of babies directly above furnaces, into which they fall, should be legal. I think it’s safe to say, dear reader, that you have saner sensibilities, and would not bite such an enormous bullet.
So we know that bringing a being with a terrible life into existence is wrong. If we accept that bringing a being with a great life into existence is morally neutral, then it would be wrong to have kids even if they had a 99.999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999% chance of having a fantastic life, experiencing more happiness than all humans in history combined, and a minuscule (100 minus the aforementioned number) percent chance of having a slightly bad life, which consists of getting one papercut. This is an insane view that would also, incidentally, condemn having children at all, because children are not guaranteed to have good lives.
One could take the view, as many economists do, that we ought to discount the future. However, this view runs into difficulty. If we discount the future at a rate of 2% per year, we’d have to hold that 1 person’s death today has more moral significance than roughly 400 million people dying a thousand years from now (since 1.02^1000 is about 4 × 10^8), which is deeply counterintuitive.
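A quick sanity check of that factor, as a minimal sketch assuming simple annual compounding:

```python
# Discounting arithmetic: at a 2% annual discount rate, one death today
# carries the same discounted weight as this many deaths a thousand
# years from now.
rate = 0.02
years = 1000

factor = (1 + rate) ** years
print(f"{factor:,.0f}")  # roughly 398 million, i.e. about 4 x 10^8
```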
3
One could take the view that for something to be bad it must be bad for a particular person. However, this view runs into problems as well. If the government passes climate legislation that makes the lives of people in 100 years better, the people will be different, because climate policy will change the population. Thus, the passage of climate policy will be better for no individual. However, it seems clear that we still have reason not to cause suffering to future people, even though future people would have a changed composition if we improved their lives. This also requires biting the bullet on all the above reductios.
I am a child of the Montreal Protocol. I was born long after it was implemented, and it is almost certainly the case that, in its absence, I wouldn’t exist. The Montreal Protocol almost certainly had a big enough impact on the world to produce the unlikely chain of events that caused my existence. This is true of basically everyone born after Montreal. However, despite that, the Montreal Protocol was still good for the future. While I wouldn’t exist sans Montreal, someone else would, and they’d probably have a worse life on average.
This problem, called the non-identity problem, afflicts all such accounts which ignore the quality of life of future people. If making happy people is not good, then no action which improves the future is good. This is unintuitive, to say the least.
There’s a certain intuitive symmetry between pleasure and pain that those who reject the case for reducing existential risks must deny. They have to hold that future pain is bad, but that future pleasure is not good—a strange and ad hoc asymmetry which conflicts with our best judgements.
If one’s life is good, the universe is better for them having been. Contra Benatar, it would not be better for them never to have been.
Perhaps this still seems unintuitive—the notion that reducing existential threats by a fraction of a percent is more important than saving millions of lives. There is an obvious explanation of this—it is hard to sympathize with the non-existent. Yet as we’ve seen, failure to expand one’s moral circle to everyone is a failing with an incredibly poor track record. Our intuitions are not reliable. We should follow utilitarianism, include everyone in our moral circle, and shut up and multiply.
That is actually the title of the book. I haven’t read it, but hopefully this quote was an intentional example of stupidity.
Utilitarianism wins outright part 13
Why you should let a monster eat your babies if they like the taste enough
0
Robert Nozick, one of the most popular contemporary opponents of utilitarianism, thought he had a knockdown objection to utilitarianism in the utility monster. Imagine a being with far greater capacity for utility than all humans in history, a being whose midday meals bring more joy than all the love in history. Suppose additionally that this being is hungry and really likes the taste of humans. On utilitarian accounts, it would be morally good to let this utility monster eat us all. And yet some people, for some strange reason, find this unintuitive.
I have never understood this intuition. There’s no death more noble than dying to become the midday snack of a monster. Imagine this monster roaming around, chomping entire cities, making a smoothie out of London, Detroit, New York City, Albany. Sounds like paradise, doesn’t it?
When we see the cookie monster eating cookies, there isn’t anything horrific about it. The cookies seem happy enough. And the monster is very happy. This utility monster is just a happy guy too. He gets more joy from cookies than even the cookie monster does. And he doesn’t even have a particular affinity for cookies. He just loves life in general.
As Tom Lehrer says “Oh, we will all fry together when we fry
We'll be French-fried potatoes by-and-by
There will be no more misery
When the world is our rotisserie
Yes, we all will fry together when we fry.”
I know I talked yesterday about how bad the end of the world would be, but I obviously wasn’t talking about a scenario in which the end of the world made a really good snack for a monster. That changes everything. Obviously.
In fact, one of the better arguments for welfare programs is that they allow people to have more meat on the bone, so in the off-chance that a utility monster comes and devours the world, we’ll be tastier morsels. That’s one of the main reasons I support welfare programs. In fact I wrote a short poem about the utility monster.
“Oh utility monster.
Darn, when trying to find rhymes with monster, Google gave no answer (“answer,” that is, but pronounced in a British way so that it rhymes with monster).
The ills of humanity’s malevolent chatter
Pale in comparison to those of their refusal to adorn your platter
When others hear about my advocacy for you they say I’m silly
But they don’t grasp the utility
Of eating them and why it is vaster
Than their other absurd pursuits
Like dancing and playing the flute
They say I’m a bastard
But they are just ass hurt
That you’re justified in eating Beirut
They underestimate your utility’s vastness
And stupidly think you shouldn’t eat Annapolis
This topic seems to make their brains break
Like you do when you bake
Their brains into a cake
The fact that they don’t accept all this is um
Evidence for solipsism
For this obviously true position
Shouldn’t cause a schism
In those who otherwise support utilitarianism
On this result Nozick wasn’t too keen
Probably just didn’t want to be steamed
But he shouldn’t have feared
You wouldn’t have eaten him seared
His experience machine brain-rot left him unclean
For fears of boring you, utility monster
This poem shan’t go on longer
Signing off, sincerely me, a tasty blogger”
1
Despite the obvious moral imperative to allow the utility monster to run roughshod over cities, eating every man, woman, non-binary person, and child in its path, some people are bizarrely unconvinced. So I suppose I’ll produce some arguments for the obvious conclusion.
First, the intuition flips with a negative utility monster. Suppose there were a utility monster who experienced trillions of times more suffering than any human. It seems intuitive that its suffering would be worse than the collective suffering of all humans. The reason for this divide in intuitions is simple: we can imagine something close to the most extreme forms of human suffering. We can imagine, at least to some degree, what it’s like to be tortured horrifically. While this does not come close to the badness of the negative utility monster’s suffering, we can still get a sense of how bad its misery is.
In the case of a negative utility monster experiencing more misery than all humans ever, it seems intuitive that we should sacrifice ourselves to relieve its suffering. Its suffering is so horrendous that sacrifices should be made to reduce it. Yet this case is structurally analogous to the positive utility monster case.
Perhaps you don’t share my and Yetter-Chappell’s intuitions about the negative utility monster, namely that you should sacrifice things to relieve its suffering. I would suggest that this reflects a failure to consider the horror of extreme suffering.
TW: descriptions of extreme suffering, all of which will be contained within the quoted text.
Take a moment to consider horrific forms of torture. Salient examples include being burned alive, slowly fried inside a metal bull, drowning, bleeding out from thousands of cuts, starving to death, having your skin melted off by boiling water, and other very grisly things. This agony, extreme as it is, is but the smallest of blips on the radar of the negative utility monster. The negative utility monster would trade thousands of years of being burned alive to avoid one second of the type of agony it’s experiencing now. The gulf between the misery of the negative utility monster and our misery far surpasses the gap between the smallest of pinpricks and the most grisly of tortures. Whatever the worst agony you’ve experienced in your life is, it is but the tiniest of irritations compared to what the negative utility monster experiences every day for what is experienced as millions of years. The most sadistic murderers in history couldn’t produce a fraction of a percent of the misery that this being experiences every second. Hell will freeze over before it contains as much collective misery as an instant of the negative utility monster’s existence. If one caused the utility monster even a second more of misery, they’d have perpetrated the greatest crime in history.
2
Actions speak louder than words. We may prattle on about how horrific the utility monster is, but when we’re in an analogous position, we act like utility monsters. Despite the moralistic preaching against a being of unfathomably greater sentience being unfathomably more important than lesser beings, we treat beings with vastly diminished sentience the way that many would object to the utility monster treating us.
People find it repugnant that the utility monster would be justified in eating us according to utilitarianism. Well, most of the people who make this objection eat meat. They devour sentient beings who are far closer to us in sentience than we are to the utility monster. A cow has a much greater ability to experience pleasure and pain relative to a human than a human does relative to a utility monster.
You, dear reader, almost certainly act like a utility monster.
Most humans treat insects as inanimate objects. Whether our treatment of insects is unjust is a separate question. Insects likely deserve some consideration, though obviously far less than a human. But we treat insects as callously as the utility monster would treat us on utilitarianism. We literally kill them in cold blood because they are buzzing near our ears and annoying us. If we liked the taste and they were healthy and cheap, we would almost certainly eat them.
And yet the utility monster has a much stronger case for mistreating us than we have for mistreating insects. The utility monster is orders of magnitude further above us than we are above insects. Fruit flies have about 100,000 neurons in their brains. We have 86 billion. If we assume, conservatively, that sentience scales linearly with brain size, that means we’re 860,000 times more sentient than fruit flies. The utility monster is waaaaaaaaaaaaaaaaaaaay more sentient than that compared to us. It’s sentient enough for its well-being from a single meal to outweigh the collective future value of humanity.
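For what it’s worth, here is the crude arithmetic as a sketch (assuming, purely for illustration, that sentience scales linearly with neuron count):

```python
# Crude neuron-count comparison, assuming (as the essay does, purely
# for illustration) that sentience scales linearly with neuron count.
fruit_fly_neurons = 100_000
human_neurons = 86_000_000_000

ratio = human_neurons / fruit_fly_neurons
print(f"{ratio:,.0f}")  # 860,000
# The utility monster is stipulated to stand even further above us
# than this ratio puts us above fruit flies.
```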
If fruit flies could do philosophy, they’d probably bemoan our callous treatment of them. Being unable to understand the vast pleasure experienceable by humans, they would think that any ethical theory which justified our treatment of them was absurd. They would, however, be wrong to do so. We are justified in discounting the interests of fruit flies to a large degree—albeit probably not to the degree we do now.
3
Let’s take a little detour back to the repugnant conclusion. Suppose that, rather than a choice between billions of beings with great lives and vast numbers with mediocre lives, it’s a choice between one being with a great life and a billion with mediocre lives. I’ve already argued that we should accept the repugnant conclusion. But let’s say we’re not convinced, and we think it would be better to have one being with a great life than billions with mediocre lives. Let’s also say that we run this choice a million times over, with everyone’s memories erased at the end of each round. If you think you should bring the one great life into being rather than the billions that are barely worth living, then iterating the choice a million times gets you something interesting.
What you get is basically the utility monster: one being with a vast amount of pleasure, rather than billions with pretty good lives—lives as good as those produced by going through a life that’s barely worth living a million times. Thus, iterating the repugnant conclusion basically gets one to the utility monster. There is tension between rejecting the RC and rejecting the utility monster.
4
The utility monster is subject to a similar argument to the one that was applied in the torture versus dust specks post.
Suppose we’re deciding whether to give a good experience to 7 billion entities or a much better experience (produced by comparatively greater sentience) to 6.9 billion entities. If the 6.9 billion entities get 1000 times the pleasure of the 7 billion entities, it’s obvious you should benefit the 6.9 billion. Now suppose it’s a choice between the 6.9 billion entities or 6.3 billion entities with 1000 times their sentience. Once again, you should give the benefit to the 6.3 billion. Now compare 6.3 billion to 5.9 billion with 1000 times greater sentience. Once again, you should give it to the 5.9 billion. We can keep going through this process until the number of entities is reduced to one. For there to be some firm cutoff, it would have to be the case that there’s some threshold whereby benefitting a number of people below the threshold can’t be as good as benefitting a number above the threshold. This is implausible. Let’s say the threshold is 300. On this account, producing a vast benefit to 300 people would be better than producing a benefit that’s 1000000000000000000000000 times greater for 299 people.
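To see the shape of the iteration, here is a toy sketch; the particular decay rate per step (about 10%) is my illustrative assumption, since the argument only needs the drop in headcount at each step to be modest:

```python
# Toy model of the stepwise argument above: each step shrinks the
# population slightly but multiplies per-capita welfare by 1000.
population = 7_000_000_000
per_capita_welfare = 1.0

for step in range(10):
    total = population * per_capita_welfare
    print(f"step {step}: population {population:>13,}, total welfare {total:.2e}")
    population = int(population * 0.9)  # modestly fewer beings...
    per_capita_welfare *= 1000          # ...each vastly better off

# Total welfare grows at every step, so anyone who endorses each
# pairwise trade is pushed toward the single-being (utility monster) limit.
```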
While this doesn’t get us to the rights-violating conclusions, it does get us to the conclusion that, if we could either feed every person on earth or feed one meal to the utility monster, we ought to feed the meal to the utility monster. We can add the additional supposition that if we could kill one person to give everyone on earth a meal, we ought to do that. Giving everyone on earth a meal will surely save many lives, so it’s analogous to the bridge version of the trolley problem, but with hundreds or thousands of people on the track below. This gets us to the utility monster. Additionally, I’ve previously provided a devastating takedown of rights that has left Locke and Nozick shaking. It’s no coincidence that they’re dead—they planned their deaths to avoid having to deal with the critiques :) .
In summa re ridiculum est quomodo stercore soni latine refrigescant et ii soli qui hanc interpretantur occultam significationem habebunt [in sum, it is ridiculous how cool crap sounds in Latin, and only those who translate this will get the hidden meaning]
We should obviously accept the utility monster. It only seems unintuitive because the life of a utility monster is literally inconceivable to us. We can no more imagine being a utility monster than we can imagine, in vivid detail, objects that are the size of the universe or smaller than a wavelength of light.
Think about all of the good experiences you’ve had in life. Every moment of being in love, reading great literature, listening to music, treating others kindly, poisoning pigeons in the park, stirring up nationalism in Hungary to such an extent that you change the history of the century in ways large enough to change the identity of every human born after 2033, reading this blog, having sex, reading this blog while having sex, reading this blog while stirring up nationalism in Hungary to such an extent that you change the history of the century in ways large enough to change the identity of every human born after 2033, eating delicious food, pretending to be a snake and jumping out of the closets of Nazis shouting “I’m a snake, I’m a snake,” having sex while stirring up nationalism in Hungary to such an extent that you change the history of the century in ways large enough to change the identity of every human born after 2033, etc. Normal stuff.
None of these compare to the goodness of the life of the utility monster. Every second that it exists, it experiences unfathomably more happiness than you will experience throughout all of those experiences combined. Think about how much less the misery experienced in dreams seems to matter, based on the lower level of consciousness involved. The difference in consciousness is so much greater in the case of this utility monster. Many of Tom Lehrer’s songs seem to be advocacy for the utility monster: “We Will All Go Together When We Go,” “Poisoning Pigeons in the Park,” “The Masochism Tango,” etc.
In that spirit, let us conclude with a final verse of the poem.
“I say this with the confidence characteristic of Searle
Let us overcome this hurdle
Against utilitarianism
So let’s say with conviction
Let the utility monster devour the world.”
Utilitarianism wins outright part 14
Williams' error
“The important issues that utilitarianism raises should be discussed in contexts more rewarding than that of utilitarianism itself. The day cannot be too far off in which we hear no more of it.”
—Bernard Williams in 1973
This prediction of his was obviously quite far off. Williams kept hearing about utilitarianism until his dying days. It is still alive and well, tragically unlike Williams. One of Williams’ objections to utilitarianism is the following, which will be addressed in this post.
He wrote “(1) George, who has just taken his Ph.D. in chemistry, finds it extremely difficult to get a job. He is not very robust in health, which cuts down the number of jobs he might be able to do satisfactorily. His wife has to go out to work to keep them, which itself causes a great deal of strain, since they have small children and there are severe problems about looking after them. The results of all this, especially on the children, are damaging. An older chemist, who knows about this situation, says that he can get George a decently paid job in a certain laboratory, which pursues research into chemical and biological warfare. George says that he cannot accept this, since he is opposed to chemical and biological warfare. The older man replies that he is not too keen on it himself, come to that, but after all George’s refusal is not going to make the job or the laboratory go away; what is more, he happens to know that if George refuses the job, it will certainly go to a contemporary of George’s who is not inhibited by any such scruples and is likely if appointed to push along the research with greater zeal than George would. Indeed, it is not merely concern for George and his family, but (to speak frankly and in confidence) some alarm about this other man’s excess of zeal, which has led the older man to offer to use his influence to get George the job . . . George’s wife, to whom he is deeply attached, has views (the details of which need not concern us) from which it follows that at least there is nothing particularly wrong with research into CBW. What should he do?”
Perhaps it is a side effect of thinking about many of the other thought experiments, but this one did not strike me as counterintuitive at all. Suppose that George’s contemporary would bring about 5 more deaths than George would if George worked at the job. While it might seem somewhat counterintuitive to say that George ought to take the job, it seems much less unintuitive when we think about it from the perspective of the victims. Imagine being one of the five people killed as a result of George’s inaction. Surely it would seem reasonable to say that George has a duty to take the job, to prevent your death from being brought about. It only seems counterintuitive when the people saved are unnamed, faceless people, rather than someone we care about. Surely what Oskar Schindler did was morally right, despite his sending weapons to the Germans, because it prevented large numbers of innocent Jews from being killed. Additionally, if we take seriously the principle that Williams has espoused, we run into some wacky results. Suppose that everyone in South America would die a horrible, painful death unless a vegan ate a cheeseburger, or even worked at a cheeseburger restaurant. It would be reasonable to say that the vegan would be obligated to work at the restaurant, for the overwhelming greater good.
Additionally, consider two states of the world. In each state of the world we have a thousand Georges. However, for each of them, if they don’t take the job, the number of people killed will be one one-thousandth of the world’s population. If we say that George should not take the job, then we endorse a state of affairs where every human dies. If we say George should take the job, we burden a thousand people but prevent the world from ending. If this principle would end the world, and be worse for everyone, including George, it seems reasonable to suppose that George acts wrongly by refusing the job.
Consider a parallel case. George is deciding whether or not to mow the lawn of a neighbor. George, an ardent supporter of animal rights, thinks mowing the lawn is somewhat immoral. However, he knows that if he doesn’t mow the lawn, Fred will be hired. Fred has a curious habit of bringing a shotgun when he mows the lawn. While he’s mowing the lawn, he makes sure to shoot twenty passersby, throw acid into the faces of twenty other passersby, sexually assault twenty other passersby, beat twenty other passersby with a cane, and engage in vandalism, treason, assault, assault with a deadly weapon, violation of traffic laws, consumption of every single Schedule I drug, looting, violation of intellectual property rights, illegal possession of a firearm, sale of heroin to minors, and discrimination on the basis of race. In this case George clearly has an obligation to take the job, even though he finds it ethically objectionable.
A final objection can be given. Suppose that George has to choose between two jobs: designing chemical weapons and being a bartender. In this case there is no overzealous contemporary; if George takes the bartending job, someone else will fill in designing chemical weapons who will be just as good at the job and just as zealous as George. However, an additional consideration is present. George knows that if he takes the job as a bartender, it will enrage a random terrorist who will kill dozens of people. In this case, it seems clear he shouldn’t take the bartending job—it will kill lots of people. Choosing it while knowing this fact is no more justified than working voluntarily for a genocidal organization.
This case seems structurally analogous to the first case. In this scenario, if George takes the job at the chemical plant, many people won’t die. If he takes the other option, many people will die. The only difference is that in one case the deaths are caused by George’s job and in the other case they’re caused by George not taking another job. However, in both cases George’s actions are guaranteed to result in lots of people’s deaths. There seems to be moral parity between these cases.
By all accounts, Williams was a brilliant thinker. However, he was badly off the mark when it came to utilitarianism, and he made bad arguments against it.
Utilitarianism wins outright part 15
Another argument for utilitarianism
Suppose there’s something we care about other than happiness. That thing surely can trade off with happiness: if rights matter, then some number of rights violations is as bad as some amount of suffering. This poses a problem.
Suppose we have two states of the world. In state of the world 1, every person violates every other person’s rights billions of times a second, but every single person is very happy. Every moment of existence for every person contains more joy than the sum total of joy experienced in the history of the real world.
State of the world 2 has zero rights violations, but every person is in extreme misery—experiencing more suffering per second than the sum total of all suffering experienced in world history. If there are enough rights violations, then the person who thinks rights matter would have to hold that world 1 is worse than world 2. This is deeply implausible. A world where everyone has immensely excellent lives cannot be worse than a world where, every second, everyone experiences more misery than the sum total of all misery experienced during the Holocaust and slavery.
A similar case can be made for virtue. A world where everyone is virtuous and miserable is worse than one where everyone is vicious and happy.
Utilitarianism wins outright part 16
Responding to Williams again
Bernard Williams’ moral intuitions differ from mine considerably. He gives cases that are presented as objections to utilitarianism, when it just seems obvious that we should accept the utilitarian conclusion. In his extensive dialogue with Smart, giving another objection, Williams writes “(2) Jim finds himself in the central square of a small South American town. Tied up against the wall are a row of twenty Indians, most terrified, a few defiant, in front of them several armed men in uniform. A heavy man in a sweat-stained khaki shirt turns out to be the captain in charge and, after a good deal of questioning of Jim which establishes that he got there by accident while on a botanical expedition, explains that the Indians are a random group of the inhabitants who, after recent acts of protest against the government, are just about to be killed to remind other possible protestors of the advantages of not protesting. However, since Jim is an honoured visitor from another land, the captain is happy to offer him a guest’s privilege of killing one of the Indians himself. If Jim accepts, then as a special mark of the occasion, the other Indians will be let off. Of course, if Jim refuses, then there is no special occasion, and Pedro here will do what he was about to do when Jim arrived, and kill them all. Jim, with some desperate recollection of schoolboy fiction, wonders whether if he got hold of a gun, he could hold the captain, Pedro and the rest of the soldiers to threat, but it is quite clear from the set-up that nothing of that kind is going to work: any attempt at that sort of thing will mean that all the Indians will be killed, and himself. The men against the wall, and the other villagers, understand the situation, and are obviously begging him to accept. What should he do?”
In this case, it once again seems intuitive that Jim should kill one of the Indians. If he does not, everyone is worse off. We should take options that make everyone better off.
It might be true that it would be hard to blame Jim for acting wrongly here, given the difficulty of taking the right action. Certain actions are psychologically difficult, partly as a result of desirable heuristics, such that we oughtn’t fault people too much for failing to take them. However, how much we would blame Jim is distinct from the wrongness of his actions.
We can consider a parallel case: suppose that the only way to save the world was to steal a penny. In this case, it seems reasonable to say one ought to steal the penny. Additionally, suppose that an asteroid was about to collide with earth. The only way to deflect the asteroid was to press a button, which would compress it into a javelin that would subsequently be launched at a person, killing them. Surely we should launch the javelin. Now suppose that the button presser had to witness the person being hit with the javelin. It still seems clear that the button ought to be pressed. Finally, suppose that they had to hurl the javelin themselves. It once again seems intuitive that it would be extremely morally wrong not to hurl the javelin. We ought to save the world, even if we have to sully ourselves in the process.
Williams goes on to say “To these dilemmas, it seems to me that utilitarianism replies, in the first case, that George should accept the job, and in the second, that Jim should kill the Indian. Not only does utilitarianism give these answers but, if the situations are essentially as described and there are no further special factors, it regards them, it seems to me, as obviously the right answers. But many of us would certainly wonder whether, in (1), that could possibly be the right answer at all; and in the case of (2), even one who came to think that perhaps that was the answer, might well wonder whether it was obviously the answer.”
This is not a mark against utilitarianism. Ideally, a theory will enable us to come to conclusions quickly, even about questions that had previously seemed impossible to solve. It makes no more sense to criticize utilitarianism for the ease with which it generates a solution to this thought experiment than it would to criticize the chain rule in calculus for simplifying seemingly very complex math problems, or the intersection of marginal revenue and marginal cost in economics for quickly figuring out what the price will be, or the labor theory of value for quickly figuring out the long-run equilibrium price of goods. Regardless of whether or not we accept the labor theory of value, it would be silly to reject it on the basis of its being too quick at generating accurate predictions of long-run equilibrium prices.
Williams’ argument also seems to run afoul of the Pareto principle, according to which an action that makes some people better off and no one worse off cannot be bad. It’s also susceptible to the objections previously given against rights.
Utilitarianism wins outright part 17
Let the children burn (sometimes)
In his Intelligence Squared debate, Giles Fraser had two simultaneous misfortunes. First, he had to argue against effective altruism—a movement that is obviously good. Second, he had to argue against William MacAskill, one of the most brilliant people I’ve had the good fortune of reading.
Fraser ended up largely agreeing that effective altruism is good in general. His main gripe was with the utilitarian philosophy. He gave the following counterexample to utilitarianism, called CB.
In it, we see a child and a valuable painting in a burning building. The painting could be sold to save many children from dying of malaria. Utilitarianism prescribes that we take out the painting (all else equal), sell it, and allow the child to burn. However, this strikes people as counterintuitive.
In order to see why CB is less counterintuitive upon reflection than it appears at first, we can consider the scenario UTPAAC (use the painting as a crowbar). In this scenario, we can either save the child or the painting. However, there are hundreds of children in a burning building across the street. The only way to save them is to take the painting and use it as a crowbar to pry open the door to the neighboring burning building. Surely in this case we should save the painting.
Then we compare two states of the world. In the first, we can save hundreds of children in a burning building. In the second, we can save thousands of children from malaria. Surely the second is at least as good as the first. Thus, if saving children from malaria is just as good as saving children from a burning building, and saving hundreds of children from a burning building is sufficient grounds for leaving one child in a burning building, then we should save the painting rather than the child.
One might object that proximity and directness matter. However, in this case we can consider UTPAACTPTB (use the painting as a crowbar to push the button). In this scenario, we can either save the child or the painting from a burning building. If we save the painting, we can use it to pry open the door to a second building, which contains a button. If we press the button, doors will open up and hundreds of children in overseas burning buildings will be saved. Surely we should use the painting to pry open the door and save the hundreds of children.
One might object that selling the item makes it less virtuous. However, we can modify the case again:
Trade: The painting can be traded for a crowbar, which can pry open a door, allowing you to press a button that saves hundreds of children. In this case, we should trade it for the crowbar. Thus, whether the painting is traded is not a morally relevant feature.
Additionally, we have several reasons to distrust our intuitions about this case.
1 It’s very difficult to sympathize with nameless, faceless children, half a world away.
2 The world we live in is morally very counterintuitive. We did not evolve to be in situations very often in which we could save lives indirectly, by complex, third party exchanges.
3 We have a bias for those near us, who we can directly see.
4 Our judgement may be clouded by self-interest in particular cases. If we accept that our duty to donate is sufficiently strong that failing to donate is analogous to leaving children in burning buildings, this undermines our high moral views of ourselves.
5 We may have a tribalist bias towards people closer to us, rather than people in other countries.
Utilitarianism wins outright part 18
Responding to objections
“I'll do what I can to promote the greater good. If you can harm one person to save 1,000, I'm gonna do it every f**king time.”
— Sean Patrick Flanery
Sadly, not everyone is a utilitarian. Those non-utilitarians are often extremely adamant about the horror of utilitarianism. Anscombe famously said that “if someone really thinks, in advance, that it is open to question whether such an action as procuring the judicial execution of the innocent should be quite excluded from consideration—I do not want to argue with him; he shows a corrupt mind.” She also called Sidgwick vulgar and Mill brash. Williams famously predicted that the end of utilitarianism was not too far off. It has even been suggested that utilitarians should be sacrificed first.
Probably the most common non-rights-based objection to utilitarianism is that it is too demanding. Utilitarianism says that we should do whatever is best, whatever personal sacrifices are involved. However, many people argue that we can’t be obligated to endure sacrifices that are too terrible. This objection has always struck me as terrible.
First, utilitarianism is intended as a theory of right action, not as a theory of moral character. Virtually no humans always do the utility-maximizing thing; it would require too great a psychological cost to do so. Thus, it makes sense to have the standard for being a good person fall well short of perfection. However, it is far less counterintuitive to suppose that it would be good to sacrifice oneself to save two others than it is to suppose that one is a bad person unless they sacrifice themselves to save two others. In fact, it seems that any plausible moral principle would say that it is praiseworthy to sacrifice oneself to save two others. If a person sacrificed their life to protect the leg of another person, that act would be bad, even if noble, because they sacrificed a greater good for a lesser good. However, it’s intuitive that the act of sacrificing oneself to save two others is a good act.
The most effective charities can save a life for only a few thousand dollars. If we find it noble to sacrifice one’s life to save two others, we should surely find it noble to sacrifice a few thousand dollars to save another. The fact that there are many others who can be saved, and that utilitarianism prescribes that it’s good to donate most of one’s money, doesn’t count against the basic calculus that the life of a person is worth more than a few thousand dollars.
Second, we have no a priori reason to expect ethics not to be demanding. The demandingness intuition seems to dissolve when we realize our tremendous opportunity to do good. The demandingness of ethics should scale with our ability to improve the world. Ethics should demand a lot from Superman, for example, because he has a tremendous ability to do good.
Third, Singer’s drowning child analogy can be employed against the demandingness objection. If we came across a drowning child while wearing a two-thousand-dollar suit, it wouldn’t be too demanding to suggest we ruin our suit to save the child. Singer argues that this is analogous to failing to donate to prevent a child from dying.
One could object that the child being far away matters. However, distance is not morally relevant. If one could either save five people 100 miles away, or ten 100,000 miles away, they should surely save the ten. When a child is abducted and taken away, the moral badness of the situation doesn’t scale with how far away they get.
A variety of other objections can be raised to the drowning child analogy, many of which were addressed by Singer.
Fourth, while it may seem counterintuitive that one should donate most of their money to help others, this revulsion goes away when we consider it from the perspective of the beneficiaries. From the perspective of a person dying of malaria, it would seem absurd that a well-off westerner shouldn’t give up a few thousand dollars to prevent their literal death. It is only because we don’t see the beneficiaries that it seems too demanding. It seems incredibly demanding, after all, to have a child die of malaria so that another person doesn’t have to donate.
For those who find utilitarianism too demanding, I’d encourage you to read eulogies for young children who have died. Every single one of them is so horrifically tragic. When they’re no longer hypothetical people, but instead people with loving families and names like Jacob, Alexander, Max, Emma, or Charlotte, who died the day before their fifth birthday, who brought joy to all those around them, who had laughter that helped everyone else get through hard times, and was just a bundle of joy, it becomes harder to value their lives at less than 10% the price of a car.
To paraphrase Wilfred Owen:
“Bent double, like old beggars under sacks,
Knock-kneed, coughing like hags, we cursed through sludge,
Till on the haunting flares we turned our backs
And towards our distant rest began to trudge.
Men marched asleep. Many had lost their boots
But limped on, blood-shod. All went lame; all blind;
But someone still was yelling out and stumbling
And flound'ring like a man in fire or lime...
Dim, through the misty panes and thick green light,
As under a green sea, I saw him drowning.
In all my dreams, before my helpless sight,
He plunges at me, guttering, choking, drowning.
If in some smothering dreams you too could pace
Behind the wagon that we flung him in,
And watch the white eyes writhing in his face,
His hanging face, like a devil's sick of sin;
If you could hear, at every jolt, the blood
Come gargling from the froth-corrupted lungs,
Obscene with malaria, bitter as the cud
Of vile, incurable sores on innocent children’s tongues,-
My friend, you would be so persuaded
By the argument allegedly damning
This so called objection which claims
Utilitarianism is too demanding”
—A heavily edited Wilfred Owen poem inspired by this article
A book review of After Virtue
This may be unnecessarily polemical, but I thought it was worth expressing my thoughts on After Virtue.
0
My father has an amusing story that he tells from when he was in elementary school. There was a classic kerfuffle over who would be the team captain for soccer. One of the students, adept in political strategy at a very young age, decided to go on the attack. Rather than present a positive case for his desirability as a team captain, he explained why everyone else was ineligible, his attacks running the gamut of effective elementary school insults—everything from “this guy is too short,” to “this girl is a girl, which is obviously a disqualifying quality.”
Well, since I am no longer in elementary school, my criticism of alternative theories will avoid the puerile nature of those elementary school attacks. But, adopting the same general strategy, this post will be exclusively about attacking virtue ethics. The poverty of the alternatives serves as a good argument for utilitarianism.
Attack ads are fun to watch. Flashing red lettering on the screen makes a very compelling argument, especially when combined with Very Serious People discussing other Very Serious People who are against a particular policy or candidate. Think of this as a political attack ad.
1 Ad vice for virtue ethics
“Where’s the beef, Mr. MacIntyre?”
—A good objection
(Gotta associate virtue ethics with scary clowns; this is an attack ad, after all)
Fictional dialogue
Me: “So, virtue ethicist, should we accept the repugnant conclusion?”
The virtue ethicist stroked his beard for a while. Then, after several minutes had elapsed, during which I wasn’t sure if he was dead, asleep, or in a permanent brain freeze induced by being forced to use virtue ethics to solve ethical problems, he said, in a voice that sounded like Yoda’s,
“conclusion repugnant the accept conviction and character strong of is who one would.”
Me: “Woah woah woah—no talking like Yoda. Answer the question.”
He stroked his beard for several more minutes.
Virtue Ethicist (let’s call him Kevin because that’s been found by research to be the worst name on dating apps): “Would one who is wise, good, and strong in moral conviction accept the repugnant conclusion?”
Me: “Well, I think so. Huemer seems pretty wise, good, and strong in moral conviction. This Bentham’s bulldog guy is beyond wise, good, and strong in moral conviction—a fact attested to by everyone who has come across him—all of whom have been blown away by his brilliance, genius, and most of all humility. And both of them accept the repugnant conclusion.”
Kevin: “Then you should accept the repugnant conclusion.”
Me: “Well, ideally an ethical system will tell us what things are good. If virtue ethics is parasitic on already realizing the right answer, then it doesn’t help us come to ethical conclusions.”
Kevin: “Virtue ethics tells us what’s good. What’s good is what a good person would do. Duh.”
Me: “Okay, but how do we figure out what a good person would do without already knowing what is good?”
Kevin: “Well, we consider lots of virtues and see if they overall count in favor of or against a particular decision.”
Me: “Okay, so let’s take tax policy. My view on tax policy is informed by what tax policy I think will maximize happiness. On your view, how do we decide whether to cut taxes?”
Kevin: “Well, a virtuous person wouldn’t want to take too much money, but also wants to fund good things.”
Me: “This once again seems to rely on having an external standard of goodness. In all the cases there are virtues on both sides. You’re just making ad hoc judgment calls.”
Kevin: “They’re not ad hoc. I think ‘what would a good guy do,’ and then I do it.”
Me: “Okay, but how do you decide what a good guy would do?”
Kevin: “I consider the virtues.”
Me: “But there are always virtues on both sides. How do you weigh them up?”
Kevin: “Well, I consider all the virtues and then come to conclusions.”
Me: “How do you come to conclusions without just relying on intuitions?”
Kevin: “I do rely on intuitions. They’re just intuitions about what a virtuous person would do.”
Me: “Well, we know that intuitions about what is good are often wrong. So why the heck would we think that the nearly identical class of intuitions about what would be done by a virtuous person would be any better?”
Kevin: “We have to carefully reflect in order to figure which virtues are supreme.”
Me: “Where would this diverge from regular moral intuitions?”
Kevin: “I guess it wouldn’t very much. But you utilitarians try so hard to reduce everything to a simple equation. The real world is much too complicated for that.”
Me: “Absent some formula for deciding what’s good, how the hell are we supposed to figure out what’s good? We know that intuitions throughout history have been incredibly unreliable. It’s also the case that we have lots of independent intuitions that favor utilitarianism.”
Kevin: “But you accept killing one to save five.”
Me: “Obviously. You only think that’s bad because of intuitions. If you think more about the case, you’ll conclude you should kill one to save five. I’ve written an article about this in fact.”
Kevin: “Okay—compelling points in that article. Maybe killing one to save five is virtuous.”
Me: “Doesn’t this show that virtue ethics is just copy-pasting our intuitions and calling it an ethical theory?”
Kevin: “No, our intuitions about what is virtuous are not the same thing as our moral intuitions. Most bad things indicate vice and good things indicate virtue.”
Me: “That’s certainly true of some things. However, lots of good actions can be done by vicious people. Suppose that a psychopath saves a child, merely for attention. They would have stabbed the child to death if it would have made their life better, but it happened to make their life better to save the child. That’s obviously indicative of vice, but it’s still better that the action happened. Similarly if a person attempts a murder but fails and accidentally saves the person’s life, it’s good that the action happened.”
Kevin: “But surely we shouldn’t do vicious things.”
Me: “Acts can’t be vicious, only people can. Acts can indicate viciousness, but they are not themselves conduits of vice. You should be a virtuous person, but that doesn’t give you guidance in deciding which acts to take.”
Kevin: “It seems like most bad acts can be explained by being unvirtuous. Lying, stealing, killing are dishonest, selfish, and brutal.”
Me: “Bad acts do tend to indicate bad character. However, good people can do bad acts, and bad people good ones. Rough correlation doesn’t indicate causation.”
Kevin: “Why should ethics be about deciding right action? Why can’t it just be about being a virtuous person?”
Me: “I’m all on board with being a good person. But there are lots of tough ethical questions. Questions like whether you should eat meat, donate lots to charity, lie to avoid offending people, or vote for a particular candidate are important ones worth deciding. If your moral theory can’t decide what we should do, that counts against it. But a related problem afflicts your theory—how do we decide what the virtues are?”
Kevin: “Eudaimonia-inducing things are virtuous. Your life goes better if you are honest than if you are dishonest. Eudaimonia is the thing that determines how well your life goes.”
Me: “So then aren’t these all just means to an end, with that end being Eudaimonia?”
Kevin: “There’s a necessary relationship between the two, neither being explainable in terms of the other.”
Me: “Okay, so what if there was more flourishing brought about by everyone lying? Should people lie?”
Kevin: “No, it’s not virtuous. It couldn’t bring about Eudaimonia.”
Me: “Well, if one lie prevented two other lies that would bring about more total Eudaimonia.”
Kevin: “Okay, well maybe the virtues are just irreducible.”
Me: “That’s possible, but then it’s hard to account for all of the virtues being utility-maximizing heuristics. Lying generally has bad outcomes. So does being cruel, selfish, hateful, and spiteful. How do you account for that?”
Kevin: “Virtues cause good consequences because they’re virtuous.”
Me: “Okay, but surely we could imagine an alternative world in which a different set of things have good outcomes but aren’t considered virtuous. Suppose that every time a lie was told, it made the life of the person you lied to 1% better, without their knowledge. In that world lying would have good consequences, but it would not be virtuous.”
Kevin: “Maybe it would be virtuous.”
Me: “Okay, let’s imagine that showing callous disregard for human life prevented a demon from torturing everyone. Would it be virtuous then?”
Kevin: “No.”
Me: “But if we had two worlds, one in which a demon tortures everyone because people are virtuous and one in which everyone has a super high quality of life, surely the second one is better.”
Kevin: “I guess.”
Me: “But that shows virtues aren’t what fundamentally matters.”
Kevin: “Virtues don’t have to be the only things that matter.”
Me: “Well, if you agree that utility matters and all of the virtues are strictly reducible to utility, then it seems preferable on theoretical grounds to suppose that only utility matters.”
Kevin: “Perhaps.”
Me: “But there’s another problem with virtue ethics. It holds that it can be bad to give a perfectly moral person the choice over whether some event occurs. Suppose we grant that it’s unvirtuous to push the fat man in the trolley problem variant called bridge. It’s also surely the case that the world would be better if the man fell naturally—one would die rather than five. Thus, giving a perfectly moral person control over whether or not the man falls makes the world worse. This is quite counterintuitive.”
Kevin: “I guess.”
Me: “It’s also not clear how we ground things using virtue ethics. Why are some things virtues and others not?”
Kevin: “Much like a car is good if it fits its purpose, or a pen is good if it writes well, a human is good if it meets its purpose.”
Me: “That’s obviously equivocation on the word good. A pen is good at being a pen if it writes well. That doesn’t make it morally good. A date rape drug is a good date rape drug in one sense if it’s effective, but that certainly doesn’t make it morally good. I’m also not sure how we decide what the objective ends of humans are.”
Kevin disappeared in a gust of smoke, replaced by the book After Virtue, which began talking immediately.
2 The talking book
After Virtue: “Let me do what Kevin could not and defend virtue ethics. Let’s start with chapter 1. What say you to its broad thesis, that modern ethical talk is analogous to rediscovering science after a period during which all scientific knowledge was destroyed except a few phrases? We use terms analogous to neutrino, talking about right and wrong and ought and ought not, but at the end of the day we have no good grasp on ethics.”
Me: “Well, my prior on that is pretty low. It seems unlikely that the wide diversity of moral views is just the rediscovery of ancient stuff. But let’s see what your further chapters argue.”
After Virtue: “Okay, what do you think about chapter two? First, the claim that moral debates are rationally irresolvable—they just proceed from different starting points and can’t ultimately be settled.”
Me: “I don’t think that’s true at all. Lots of things have fierce disagreement but that doesn’t mean there’s no progress. Huemer accepts the repugnant conclusion. That shows that people can be convinced by persuasive argument to accept things they didn’t accept initially. I’ve written a series of articles providing compelling ethical arguments for things.”
After Virtue: “But the disputes about abortion, war, and justice have been going on for an eternity.”
Me: “That’s true of lots of things that are rationally resolvable, including whether or not god exists, how we get consciousness, which theory of quantum physics is correct, which political party will bring about greater aggregate welfare, and whether the minimum wage will increase aggregate welfare.”
AV: “But each argument is supported by premises which can’t be proven, only accepted or rejected.”
Me: “That’s true of math too. If you reject mathematical axioms, you’d be left without mathematical reasoning. That doesn’t mean that math is rationally irresolvable. I think for ethics we can do lots of reasoning, and come to conclusions about which assumptions are justifiable. I’ve written quite a lot about particular cases of allegedly justified deontologist assumptions that I don’t think stand up to scrutiny.”
AV: “But deontologists would say the same.”
Me: “Yes, and they’d be wrong. Flat earthers think the evidence favors a flat earth. They’re wrong.”
AV: “I don’t have the time to wade into your extensive arguments for utilitarianism.”
Me: “Okay, I was just explaining why I disagree with your premise.”
AV: “Okay, what do you think about point two, that morality proceeds by appealing to impersonal rationality, in a way that’s foolish given that those allegedly rational assumptions can’t be independently justified? Asserting that you have a reason to accept my deontological arguments is ridiculous—if I ask you to vote against abortion and my only backing is that I want you to, that gives you no reason. The same is true if I appeal to it being your duty. If you don’t care about duty, it gives you no reason to vote against abortion.”
Me: “That’s false. Moral injunctions may not give you a motivating reason, but that’s not the same as giving you no reason. Epistemic rationality is a good parallel. Saying that your belief that the earth is flat is rationally unjustified may give you no motivating reason, just as my telling you that I believe it’s round gives you no motivating reason, but it does give you an epistemic reason. You are foolish to believe unjustified things.”
AV: “This would get us into pretty thorny metaethical territory that we’ll steer clear of for now.”
Me: “Okay.”
AV: “What about point 3, that ethics falsely presents itself as part of some grand tradition, when really a lot of people are talking past each other and just upholding their societal norms? Kant can’t be analyzed absent an understanding of Prussia, nor Hume absent Scotland.”
Me: “I’m not sure about whether people have been talking past each other. There seem to be fundamental moral concepts like ought, right, shouldness, and goodness, the details of which are merely disagreed about. I’m also not sure why this requires understanding the history of people. We can just consider their arguments without having a comprehensive understanding of Scotland from 300 BCE to 1825 CE.”
AV: “These points will be developed further. Do you agree with my takedown of emotivism?”
Me: “Yes.”
AV: “Okay—do you agree about the failure of rationality to establish morality?”
Me: “No—I agree with you about the failures of Kant and others to do so, but I think utilitarians have done a pretty good job. Sidgwick was great at this. We can start with some very plausible axioms and reason our way to morality. Those axioms can be concluded to be true, like mathematical axioms.”
AV: “I address that in chapter 5.”
Me: “I’m reading chapter 5 now. When you say “Thus all these writers share in the project of constructing valid arguments which will move from premises concerning human nature as they understand it to be to conclusions about the authority of moral rules and precepts,” that just seems obviously wrong. Kant wasn’t making claims about human nature; he was making claims about what we have reason to do. Same with Sidgwick.”
AV: “Well, those applied only to humans.”
Me: “Kant said they applied to all rational beings. Sidgwick’s applies to all beings that can suffer.”
AV: “It’s still broadly about human nature.”
Me: “I guess. But then when you say that morality has to be about all three—human nature, passions, and reason—I just flatly reject that. Morality would be the same if no one had any emotions and if human nature were such that it wanted to cause maximal suffering.”
AV: “I’m criticizing the conceptions of Kant and friends, not you.”
Me: “I don’t think that’s a fair characterization of Kant. I think he’d agree with me. But okay, go ahead, I’m all for Kant bashing.”
AV: “So what say you of the way that I cross the is-ought gap—that it’s possible to go from ‘this watch tells the time accurately’ to ‘this is a good watch’ without an unjustified inference?”
Me: “This seems like equivocation. When we say ‘this is a good watch,’ that’s not a moral claim. That sentence semantically means it furthers the purpose of a watch. That’s not a normative claim.”
AV: “But surely you’d still count a watch that tells time accurately as a good watch, even if its doing so caused suffering. You wouldn’t take its effects to make it a bad watch. The same is true of humans. A good human meets the functional role of a human.”
Me: “But again, that’s not a moral claim. A good slave-catching ship might be good at meeting its functional role, but it’s not morally good. Let’s imagine that we created a race of rapist murderers, made with the sole aim of raping and murdering. They were created with that express purpose. Surely it would still be bad for them to rape and murder people. A man-made virus made to kill lots of people is bad in the moral sense, despite being good in the functional sense.”
AV: “Then how do we cross the is-ought gap?”
Me: “The same way we cross the gap between is and will be. While it doesn’t follow straightforwardly from what is that something will be the case, we can still use induction. Reason lets us figure out what things are good, despite goodness not being a straightforwardly natural property. But you’re dodging the point. Surely this is not an account of moral goodness.”
AV: “I’m a good book. I perform the function of a book. A book does not argue points not made in the book. MacIntyre did not address this argument, so I can’t help you there. But he’s a smart guy; I’m sure he could think up a response. But anyways, you ground morality in imperatives. But imperatives can’t be truth apt.”
(You might think, dear reader, that this is being uncharitable. It is not. MacIntyre literally does not address this blatantly obvious objection. At all.)
Me: “I’m not sure if morality is an imperative—I find the discourse to be hopelessly muddled and confused. I think we can figure out what we should do by reasoning. The statement ‘we should do x’ is not saying ‘do x’; it’s a claim about what we would do if we were completely rational and impartial. But statements about imperatives are certainly truth apt. The statement ‘if you were totally rational and impartial you’d follow this imperative’ is certainly truth apt.”
AV: “I’m just a book. But what have you to say about my utilitarian bashing in the next chapter.”
Me: “I agree that the claim that people do what maximizes their pleasure doesn’t straightforwardly imply any moral claims. Mill and Bentham were wrong there. But your criticism of Bentham is wrong. You say that Bentham’s view that if we were rational we’d be utilitarians fails because a social reformer should try to persuade the irrational masses. I agree that this makes it not straightforwardly able to cause major social reform but that doesn’t make it false.”
AV: “I was just making a claim about the social reform.”
Me: “You also say that Mill was “clearly the most distinguished mind and character ever to embrace Benthamism.” I don’t think that’s true. Sidgwick seemed to be a greater thinker, though he was less influential. Singer also gets a lot more right than Mill.”
AV: “I was making a claim of how renowned the thinkers were, not absolute greatness.”
Me: “Okay then, carry on.”
AV: “What do you make of the objection that there are too many different types of pleasures.”
Me: “That’s obviously true but it’s not an objection. Pleasures coming in different varieties doesn’t mean some aren’t greater than others. It is obvious that the pain of a pinprick is less than the pain of being brutally tortured. It may be hard to figure out whether a particular shade of yellow is brighter than a particular shade of pink, but that doesn’t mean that brightness is subjective or that white is not brighter than black.”
AV: “But the fact that there’s no clear criterion of pleasure makes it unable to be used to evaluate experience.”
Me: “There is a criterion—pleasure describes the mental states that are desirable, such that if you were totally rational you’d want more of them. Greater pleasures are more desirable. There are some difficulties with the details, but that doesn’t mean it’s no criterion. There are also difficulties with determining whether viruses are alive, but that doesn’t mean life is a “pseudo concept,” to use your words.”
AV: “But if an individual can’t evaluate different pleasures, surely a society can’t do interpersonal comparison of utility.”
Me: “An individual’s inability to weigh between pleasures is an epistemic problem. Some pleasures are greater than others. The pleasure of eating food one really enjoys is greater than that of eating food one doesn’t really enjoy. We can make similar interpersonal comparisons. We know that one person being tortured causes more suffering than another stubbing their toe.”
AV: “But we can make evaluations about fictions. We can know that Iron Man is made of atoms, even if we were never told so.”
Me: “Perhaps, but that’s just based on stipulated features. The claim that being boiled alive is more painful than a toe stub isn’t part of the definition of pleasure or pain. Rather, it’s a feature that we discover. None of the stipulations baked into the definition of pleasure render it necessarily true that torture is more painful than a pinprick.”
AV: “Perhaps. But what say you to my treatment of Sidgwick?”
Me: “I think it’s absurd. You claim that Sidgwick concluded his inquiry was a failure. This is false. Sidgwick concluded that reason alone could lead us to either prioritize our own interests or to care about good generally. This just means reason by itself doesn’t automatically make us moral. Others, like Singer, De Lazari-Radek, and Parfit have disagreed. However, Sidgwick was quite firm in his conviction that reason could lead us to only either utilitarianism or egoism. Thinking that reason doesn’t motivate us to be good is different from thinking that reason can’t let us conclude anything about morality.”
AV: “But Sidgwick agreed that his principle couldn’t motivate everyone through reason alone. So it can’t serve as the ultimate moral principle.”
Me: “That doesn’t matter. This was a separate philosophical view of Sidgwick’s, one that many utilitarians disagree with.”
AV: “Perhaps. But what do you think about my treatment of rights?”
Me: “I agree with it. I’m no believer in rights.”
AV: “Any disagreements pre-chapter ten?”
Me: “Some perhaps, but none relevant to our discussion today. There are a lot of historical claims that I haven’t any ability to assess, but none seem to make arguments against utilitarianism or for virtue ethics.”
AV: “In chapter 10, what say you about storytelling being the predominant way ethics is done?”
Me: “Plausible that it’s one way; I’m telling a story right now of a dialogue with a book. But I’m not sure it’s the predominant one. You just point out that storytelling is important, not that it’s the predominant way ethics is done.”
AV: “But what about the numerous examples of stories I give?”
Me: “I can give numerous examples of books written by left handed people that informed people of ethical truths. That doesn’t make it the primary method of ethical transmission.”
AV: “This is all just in service of the point that stories helped tie people back to moral history.”
Me: “I’d accept they did to some degree, not that this was their defining feature.”
AV: “But doesn’t this show that stories caused people to adopt appreciation and practice of particular virtues?”
Me: “To some degree…”
AV: “How about my claim that ‘Courage is important, not simply as a quality of individuals, but as the quality necessary to sustain a household and a community. Kudos, glory, belongs to the individual who excels in battle or in contest as a mark of recognition by his household and his community.’”
Me: “I agree courage is usually good but not single-handedly necessary to sustain a community. America is sustained without acts of particular courage.”
AV: “I was just describing the historical background of these virtues.”
Me: “Okay, but if you’re just bloviating about the history, then why the hell does this have significant normative implications for today? Why should we care about what the Greeks thought about virtues being important?”
AV: “Well, the thesis of the book is that we live in the shadow of the once recognized virtues.”
Me: “Even if that’s true, as long as we can coherently talk about other moral concepts, we should surely do so if they’re better, regardless of the history of your virtues. But your chess analogy runs into problems.”
AV: “It does?”
Me: “Yes, like nearly everything else inside of you. You say that a good chess player is one who plays chess well according to the rules—that chess, like morality, requires some agreed-upon metric. However, this seems to rely on reason being unable to resolve moral questions, which seems obviously false. Much like reason annihilated logical positivism, it can annihilate bad arguments in ethics.”
AV: “But what about my scathing rebuke of Enlightenment ethics, which tries to reason its way to morality?”
Me: “I think it’s false for reasons I already explained. But even if it were true that morality is not a rationally realizable code of conduct, we can still use reason to argue against particular moralities. Look at Hare’s or Smart’s universal prescriptivism. They were non-cognitivists but still made ethical progress. Look at the targeted arguments deployed against the repugnant conclusion. They don’t rely on any extravagant metaphysical claims. Even if we accept that morality is something subjective like art, we can still change people’s views about art by reasoning. Informing people of the history of art can change their views. Similarly, the best video game is obviously subjective, but that doesn’t mean that no utterances can inform people of which video games they’re likely to prefer. Informing people of the violence of Grand Theft Auto may make them like it less. The same is true of ethics. I’ve written quite extensively about ways to debunk particular ethical intuitions with reasoning. It’s obviously possible.”
AV: “Again, being a book, I can’t stray too far off the trajectory of my writings.”
Me: “Your explanation of the disanalogy in the chess case is also totally wrong. You say ‘One reason why the analogy is dangerous is that we do play games such as chess for a variety of purposes. But there is nothing to be made of the question: for what purpose do the characters in the Iliad observe the rules that they observe and honor the precepts which they honor? It is rather the case that it is only within their framework of rules and precepts that they are able to frame purposes at all; and just because of this the analogy breaks down in another way, too. All questions of choice arise within the framework; the framework itself therefore cannot be chosen.’ This is obviously false if we accept that reason can inform us of ethical truths. It would be absurd for a utilitarian to reject all criticisms of their theory based on them being internally justified within their theory and there being no way to criticize theories externally. We can appeal to independent standards of judgement to criticize a theory. For example, I have a friend who is a radical libertarian. He holds the view that you shouldn’t steal a penny from Jeff Bezos to prevent infinite child rape. This counts against his theory. I can criticize his theory based on it rendering this absurd verdict, even though this is an external critique. Were this not the case, there would be no way of deciding upon a moral theory.”
AV: “But isn’t that regressive? How do we decide on a metric for deciding upon metrics?”
Me: “That’s just the Münchhausen trilemma. We can hold certain epistemic axioms like phenomenal conservatism that we use to form bedrock conclusions, and we can also hold the view that we have direct access to certain moral truths. It’s not inconceivable that I have direct access to the badness of pain.”
The book stroked its beard and looked pensive for a moment before nodding.
Me: “This wise guy persona isn’t helping your case, pal. But anyways, your claim that ‘first that all morality is always to some degree tied to the socially local and particular and that the aspirations of the morality of modernity to a universality freed from all particularity is an illusion’ is once again false. Morality is obviously influenced by your social location, but if you accept my claim about ethics being rationally derivable, then it’s like mathematics. Obviously your mathematical knowledge will vary depending on where you are, but math itself doesn’t change. You keep doing this infuriating thing where you go on a long ramble about history and then act as if that justifies controversial ethical claims, without any further argument. It would be like me arguing for utilitarianism by presenting the history of the Roman Empire and then saying that the lesson that must be learned is the truth of utilitarianism.”
AV: “We’ve already argued about whether or not morality is rationally derivable.”
Me: “We did for a bit. But your criticism of my derivation was basically that pleasure isn’t real, which is a terrible objection. If this false claim requires you to be right about your other false claim, then I guess we’re at a standstill.”
AV: “Perhaps we are. What say you to my second point, that ‘secondly that there is no way to possess the virtues except as part of a tradition in which we inherit them and our understanding of them from a series of predecessors in which series heroic societies hold first place.’”
Me: “Is there more?”
AV: “No, should there be?”
Me: “Yes. You just rambled about history and then made another controversial ethical claim. You didn’t provide an explanation of why that followed from the historical account that you gave. It seems conceivable that one could acquire virtues by reading Aristotle and cultivating them. Why the hell should I accept this claim?”
AV: “Based on the historical analysis.”
Me: “The historical analysis showed one thing and one thing only, that lots of previous societies got virtues from their culture. This does not establish it as being necessary and sufficient, it just establishes that they’ve coincided.”
The book remained silent for several minutes. I wasn’t sure if it was sleeping. It then nodded, stroked its beard, and said “Courage is not simply one of the virtues, but the form of every virtue at the testing point.”
Me: “You can’t keep pivoting away from my points. I know you got that from a fortune cookie. It’s the first Google result when you type in ‘fortune cookie quotes.’”
AV: “A ship in harbor is safe, but that’s not why ships are built.”
Me: “ARGH. Okay, well, you next discuss that within these stories people have no standpoint divorced from what is demanded of their social roles—no point of view that is either uniquely their own or society’s. Maybe this is true; you’re a better historian than I. But I’m not sure why individual stories are indicative of the broad views of people in ancient societies. There wasn’t evidence presented for the claim that this was broadly the point of view in ancient society. And if this is true, why can’t we just make everyone’s social role utilitarian?”
AV: “Because we can’t all agree on utilitarianism.”
Me: “Same with virtue ethics. And I’m working on getting everyone to be utilitarian.”
AV: “But virtue ethics was widely agreed upon.”
Me: “I don’t know if that’s true. But even if it is, that was a very different society. This also just shows that it would be nice to get society to be based on virtue ethics, not that individuals should follow it.”
AV: “Perhaps. Keep reading.”
Me: “Okay, you say next ‘Every activity, every enquiry, every practice aims at some good; for by 'the good' or 'a good' we mean that at which human beings characteristically aim. It is important that Aristotle's initial arguments in the Ethics presuppose that what G.E. Moore was to call the 'naturalistic fallacy' is not a fallacy at all and that statements about what is good—and what is just or courageous or excellent in other ways—just are a kind of factual statement. Human beings, like the members of all other species, have a specific nature; and that nature is such that they have certain aims and goals, such that they move by nature towards a specific telos. The good is defined in terms of their specific characteristics.’ This has lots of problems.
1 The one I gave earlier. Meeting a purpose is not morally good if the purpose is bad. A boat made with the purpose of catching slaves is good at being a slave-catching ship if it’s effective, but that doesn’t make it morally good.
2 This is the naturalistic fallacy, as Moore points out. Pointing out that Aristotle was ignorant of the naturalistic fallacy is not a response to the charge.
3 It also runs into Moore’s open question argument. It seems like even if something meets the role of a human well, there’s still an open question of whether they’re morally good.
4 It’s also equivocation—it’s confusing practical goodness and moral goodness.
5 It’s not clear how humans have a nature grounded in anything. Assuming god doesn’t exist, we were made by the blind process of evolution, rather than by a creator. Thus, there’s no clearly defined purpose. The closest thing there is to a purpose is passing on our genes, but that’s obviously not a moral imperative. Killing 1 billion people to impregnate 9 would be morally bad.
6 It’s not clear that every action does aim at some good. People often acknowledge that they’re being immoral or foolish, but don’t revise their ethics. There’s nothing incoherent about this.”
AV: “Perhaps. Keep reading though.”
Me: “Okay, well you don’t address any objections to this Aristotelian account. But next you say that a man’s purpose is eudaimonia, which is what makes a person’s life good. You then say ‘We thus cannot characterize the good for man adequately without already having made reference to the virtues. And within an Aristotelian framework the suggestion therefore that there might be some means to achieve the good for man without the exercise of the virtues makes no sense.’
This is maybe true within an Aristotelian framework. However, this is a problem for the framework. Consider a severely mentally disabled person who lacks the requisite understanding of the world to pursue any virtues. It seems like things can still go well or poorly for this person. If this person is happy, that is good for them.
This also runs into a circularity problem. You can’t ground ethics in what’s good for people without it being circular. Making the leap from ‘what is good for people’ to ‘virtue’ is unjustified and assumes the conclusion. Instead, I’d argue, as I already have extensively, that what makes people’s lives go well is how much happiness they have.”
AV: “It would be virtuous to help the severely mentally disabled.”
Me: “Perhaps, but on your theory, them being happy wouldn’t truly be good for them. Maybe it would be good for others to make them happy. But their happiness is not intrinsically good. This seems obviously wrong. It’s bad for people to be sad and good to be happy. This truth would hold even if no one could affect the quality of life of anyone else.”
AV: “This gets us away from character analysis.”
Me: “But surely we have to get away from character analysis. We need to make tough ethical decisions. Asking what a good person would do doesn’t get us very far.”
AV: “Well, I’ll address the circularity objection. It’s not circularity given the external justification for virtues.”
Me: “I don’t think there was much of a justification beyond just asserting that they’re good and rattling off random historical anecdotes.”
AV: “Perhaps. Keep reading though.”
Me: “Okay, you next say ‘This last remark suggests that one way to elucidate the relationship between virtues on the one hand and a morality of laws on the other is to consider what would be involved in any age in founding a community to achieve a common project, to bring about some good recognized as their shared good by all those engaging in the project. As modern examples of such a project we might consider the founding and carrying forward of a school, a hospital or an art gallery; in the ancient world the characteristic examples would have been those of a religious cult or of an expedition or of a city. Those who participated in such a project would need to develop two quite different types of evaluative practice. On the one hand they would need to value—to praise as excellences—those qualities of mind and character which would contribute to the realization of their common good or goods. That is, they would need to recognize a certain set of qualities as virtues and the corresponding set of defects as vices. They would also need however to identify certain types of action as the doing or the production of harm of such an order that they destroy the bonds of community in such a way as to render the doing or achieving of good impossible in some respect at least for some time. Examples of such offences would characteristically be the taking of innocent life, theft and perjury and betrayal. The table of the virtues promulgated in such a community would teach its citizens what kinds of actions would gain them merit and honor; the table of legal offences would teach them what kinds of actions would be regarded not simply as bad, but as intolerable.’”
Lots of problems here.
1 It’s not clear that this corresponds to the virtues. It seems like being honest is not needed to build a community.
2 It’s not clear why we should care about the traits needed to build a community. Suppose that to achieve a common project, one needed to torture lots of babies. That wouldn’t make torturing babies good. We can stipulate that this is just a law of nature.
3 To the extent that we’re not building a community, the traits needed to build a community aren’t straightforwardly relevant. Given that there’s already a community and individuals’ ethical systems don’t have ripple effects on the communal morality, it’s not clear why we should care.
4 This can’t account for moral relevance not tied to virtues. If a Boltzmann brain appears and then freezes to death, that would be bad but not unvirtuous.
5 This seems to justify arbitrary prejudice. Caring about non-human sentient beings is not needed to make a school. It’s good nonetheless. Similarly, not being racist would not be needed to make a community, if the community were composed entirely of one particular race. Finally, caring about people far away is not necessarily conducive to building a community.”
AV: “Perhaps.”
Me: “I agree with the next chapter that you can ground the virtues based on what all the virtue ethical accounts agree upon. However, this responds to a pretty poor objection. The better related objection would be that, even if you rattle off a list of things that you take to be virtues, it’s hard to give a deeper underlying account of why they’re virtues, or quantify how important they are.”
AV: “But the problem of ascertaining relative importance plagues all moral systems.”
Me: “Not really. All ethical systems have to make tough judgement calls. But for utilitarianism, there’s some principled metric by which we can make evaluations. With virtue ethics, there’s no way in theory to figure out how important virtues are or to ground them. This is particularly a problem when there is a complex, disunified mishmash of different virtues.”
AV: “But doesn’t that plague utilitarianism as well. Pleasures have no underlying quality.”
Me: “They do. That quality is pleasantness. They may be quite different but they all share the quality of being choice worthy—of being desired when experienced. We can figure out how much things would be desired when experienced by clear headed thinkers.”
AV: “We can just accept some virtues as desirable.”
Me: “Okay, but that sacrifices massive amounts of parsimony and explanatory power.”
AV: “On my account, parsimony doesn’t matter because ethics is generated, not discovered.”
Me: “That doesn’t make parsimony unimportant. Presumably you’re trying to figure out what you’d care about if you were super rational. Well, obviously if you were super rational, the odds you’d care about a, b, c, and d are lower than the odds you’d care only about a.”
AV: "Perhaps.”
Me: “Next you say ‘Consider another equally trivial example of a set of compatibly correct answers to the question 'What is he doing?' 'Writing a sentence'; 'Finishing his book'; 'Contributing to the debate on the theory of action'; 'Trying to get tenure'. Here the intentions can be ordered in terms of the stretch of time to which reference is made. Each of the shorter-term intentions is, and can only be made, intelligible by reference to some longer-term intentions; and the characterization of the behavior in terms of the longer-term intentions can only be correct if some of the characterizations in terms of shorter-term intentions are also correct. Hence the behavior is only characterized adequately when we know what the longer and longest-term intentions invoked are and how the shorter-term intentions are related to the longer. Once again we are involved in writing a narrative history.’
I’d agree that some actions are wrapped within a narrative but this is not the defining characteristic of actions. I might blink instinctively—that’s not best analyzed as part of a narrative. Additionally, explanations need not stretch back to their ultimate source to be intelligible. If you ask why I’m typing this sentence, the best answer would be because I’m writing a response to After Virtue, rather than the ultimate explanation of me thinking that it will maximize the positive experience of conscious creatures. This also is a slightly strange use of the term narrative. If I scratch my arm because it itches and I think itching is bad and unpleasant, that’s not best cast as a narrative.
But you next say ‘We place the agent's intentions, I have suggested, in causal and temporal order with reference to their role in his or her history; and we also place them with reference to their role in the history of the setting or settings to which they belong.’
I don’t know why that’s true. We certainly consider motivations, but that’s not the same thing as placing their actions in the causal and temporal history of their settings. If you ask why I’m typing this sentence, hashing out the history of my country would be unnecessary.”
2
At the end of the day, I have several complaints about After Virtue.
1 I think intuitionist ethics is bad. Our moral intuitions throughout history have been disastrously wrong.
2 It thinks a devastating objection to other theories is their inability to get universal agreement. This is true of literally all theories including virtue ethics.
3 It gives no guidance for solving any ethical questions that are not blatantly obvious.
4 The criticisms of other theories are bad, especially utilitarianism.
5 The claim that ethics must be communal is false—a person not in a community torturing people is bad.
6 There is inadequate justification for the claim that rationality can’t ground morality.
7 The claim that ethics is about what a proper human would do is not justified.
8 The historical ramblings are annoying and fail to justify significant ethical claims.
9 There was basically no argument given for virtue ethics being correct other than attacking other theories and historical analysis.
10 Scott Alexander is right about After Virtue.
11 There’s no good metric for deciding what things are virtues.
12 As Alexander says “I broadly agree with him about this problem. I discuss it pretty explicitly in sections 6.5 and 8.1 of my Consequentialism FAQ. I propose as the solution some form of utilitarianism, the only moral theory in which everything is commensurable and so there exists a single determinable standard for deciding among different moral claims.
Annnnnd MacIntyre decides to go with virtue ethics.
The interesting thing about virtue ethics is that it is uniquely bad at this problem. In the entire book, MacIntyre doesn’t give a single example of virtue ethics being used to solve a moral dilemma, as indeed it cannot be.”
If making compelling historical arguments is a virtue, MacIntyre’s book is certainly after virtue.
My attack ad on deontology
Attack ads are fun
My last attack ad on virtue ethics sent me spiraling down a rabbit hole, causing me to read After Virtue and write a long book review. This time, I will resist the temptation. I will state, as succinctly as possible, my case against deontology, while including scary images. Here it is.
1 Deontology prioritizes one’s desire not to sully oneself over making the world a better place, in a way that’s quite narcissistic and would be recognized as such if not wrapped in the façade of moralistic language.
2 There are no good arguments for it.
3 It has ridiculous conclusions if extrapolated.
4 It holds that third parties should hope that you act wrongly sometimes.
5 It holds that giving more options to perfectly benevolent omniscient beings can make the world worse.
6 It can’t provide a coherent account of rights, unlike utilitarianism.
7 It does terribly in terms of theoretical virtues.
8 It does poorly historically.
9 It is not able to make accurate predictions about what things will turn out to be intuitive upon reflection.
10 Attempts to derive it from first principles fail and either have insane results or give up on the principled derivation almost entirely.
11 It’s evolutionarily debunkable.
12
(Okay maybe that wasn’t so much an argument but this is an attack ad after all).
13 It privileges those who are well off, falling prey to status quo bias.
14 It only seems to reconcile unreflective intuitions.
15 It has lots of paradoxes and contradictions.
16 It only seems to reconcile the intuitions based on the connotation of words, not based on careful reflection about cases.
17 The doing/allowing distinction seems incoherent.
Many of these were inspired by Yetter Chappell.
Is moderate deontology problematically explosive?
Short answer: yes.
I already mentioned this argument in a previous article but it was worth reiterating for a few reasons.
I think this argument is actually really good and worth informing lots of people of.
It enabled this lovely title.
Most of you probably don’t read in detail and remember every single argument in complex technical articles.
Let’s begin by explaining what moderate deontology is. Moderate deontology holds that rights are valuable, but not infinitely so. While a radical deontologist would be against killing one person to save the world, a moderate deontologist would favor killing one to save the world. Extreme deontology is extremely crazy. An extreme deontologist friend of mine has even held that one shouldn’t steal a penny from Jeff Bezos to prevent infinitely many Auschwitzes. This is deeply counterintuitive.
However, moderate deontology collapses into full-fledged, foaming-at-the-mouth radical deontology. It is, as the title suggests, problematically explosive.
We can imagine a case with a very large series of concentric circles of moral people. The innermost circle has one person, the second innermost has five, the third innermost has twenty-five, etc. There are a total of 100 circles. Each person is given two options.
1 Kill one person
2 Give the five people in the circle outside of you the same options you were just given.
The 100th circle is composed of psycho murderers who will take option 1 if the buck reaches them.
The deontologist would have two options. First, they could stipulate that a moral person would choose option 2. However, if this is the case, then a cluster of perfectly moral people would bring about 1.5777218 x 10^69 murders—that’s 5^99, one murder per member of the 100th circle (a quick sketch of the arithmetic follows below)—when alternative actions could have resulted in only one murder. This seems like an extreme implication. If they accept this option, then their view is problematically explosive and lapses into extreme deontology. It holds that one shouldn’t kill one person in a way that would prevent oodles of deaths.
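Here’s a minimal sketch in Python of where that figure comes from. The only inputs are the stipulations above: circle n holds 5^(n-1) people, and if everyone defers, each member of circle 100 kills one person.

```python
# Circle n holds 5**(n - 1) people: 1, 5, 25, ...
# If every moral person takes option 2, the buck reaches circle 100,
# whose 5**99 psycho murderers each kill one person.
murders_if_all_defer = 5 ** 99
murders_if_first_person_kills = 1

print(f"{murders_if_all_defer:.7e}")   # 1.5777218e+69
print(murders_if_first_person_kills)   # 1
```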
Secondly, they could stipulate that you should kill one person. However, the deontologist holds that you shouldn’t kill one to save five. Thus, in order for them to hold that you should kill one to prevent five perfectly moral people from having two options, one of which is killing one, they’d have to reject the extra choice principle. This principle states that giving an all-knowing, perfectly moral being more options can’t make the situation worse. This is deeply intuitive. If the extra option is worse than the existing options, then they won’t take it. If it is better, then it’s good for them to take it. Thus, this principle seems very hard to reject.
Well, the deontologist has to reject it. They hold that you shouldn’t kill one to prevent five perfectly moral people from having only option 1—the option of killing one person. However, they’d also have to hold that you should kill one person to prevent them from having options 1 and 2. Thus, giving them extra options is bad. This is deeply counterintuitive.
So deontology has to either reject an almost self evident principle or be problematically explosive.
If you’re currently thinking that “moderate deontology says you shouldn’t kill one to save five but should kill one to save 1.5777218 x 10^69,” read the argument more carefully. The argument shows that moderate deontology is internally inconsistent. If you think the argument is just question-begging, or that the deontologist should obviously accept option 1, as some deontologists who heard the argument did before I explained it to them more carefully, read the argument again.
Either way, this seems to be a pretty decisive objection.
Is extreme deontology problematically impotent?
Yes!
Extreme deontologists hold the view that not violating rights is infinitely important. One shouldn’t kill a single person to save the world. This runs into the problem that it would seem to dictate immediate suicide.
If one does not immediately commit suicide, then it’s possible that they’ll violate rights. There are lots of ways this could happen.
1 Maybe they’ll change their view and become a serial killer.
2 Maybe they’ll hit a pedestrian. This is obviously a rights violation. If a person just drove their car into pedestrians for no reason that would be a rights violation.
3 It’s possible that they’ll do something horrific in a fit of rage.
4 It’s possible that actions they currently don’t realize are rights violations actually are rights violations. Maybe all the food that they eat is actually made of people who have been turned into food by a witch, but who are still conscious.
If the extreme deontologist holds that rights violations are infinitely bad, then no risk of committing one can be run for any benefit. If one shouldn’t kill one person to save the world, one certainly shouldn’t take a .0001% risk of killing one person to have a more fun life. Thus, these theories would demand immediate suicide in order to avoid violating rights.
Additionally, jailing an innocent person is surely a rights violation. Imagine that one person was a judge who made only a single mistake, but correctly sentenced trillions of people. On the deontologist account, this person’s actions would be wrong—it would be better for there to be no legal system presided over by this nearly perfect judge, because they are violating some rights. This is deeply counterintuitive.
Maybe the deontologist thinks that doing things that will certainly violate rights is categorically distinct from, and worse than, doing things that will probably violate rights. This runs into a few problems.
1 It would hold that doing something with a 90% chance of killing someone, 53222 times over, would be less bad than certainly killing one person.
2 It would hold that shooting at someone is less problematic than presiding over 100000000000000000000000000000 cases extremely well, because the odds of falsely sentencing someone at least once across that many cases are much higher than the odds of actually killing someone when shooting at them (a rough calculation below illustrates this).
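The per-case error rate in this sketch is a made-up stand-in, but any nonzero rate multiplied across a case count that large makes at least one wrongful sentence a near-certainty:

```python
import math

p_error_per_case = 1e-12  # hypothetical: the judge errs once per trillion cases
cases = 10 ** 29          # on the order of the huge case count above

# P(at least one wrongful sentence) = 1 - (1 - p)^N,
# computed in log space so the tiny per-case probability isn't lost:
p_no_errors = math.exp(cases * math.log1p(-p_error_per_case))
print(1 - p_no_errors)  # effectively 1.0: a wrongful sentence is near-certain
print(0.9)              # vs. the shooter's (stipulated) 90% chance of killing
```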
These problems seem pretty damning.
Huemer's paradox of weak deontology is devastating
I think this is a really good argument
I’ve previously said that I think Huemer is at his best when he’s arguing in ways that coincide with utilitarianism.
Huemer starts out laying out two principles.
“Individuation Independence: Whether some behavior is morally permissible cannot depend upon whether that behavior constitutes a single action or more than one action.”
This is intuitive—how we classify the division between actions shouldn’t affect their moral significance.
Second “Two Wrongs: If it is wrong to do A, and it is wrong to do B given that one does A, then it is wrong to do both A and B.”
This is obvious. Huemer gives an example to justify it, but if properly understood, this principle is trivial. Now Huemer considers a case in which two people are being tortured, prisoners A and B. Mary can reduce A’s torture by increasing B’s torture, but the increase is only half as large as the reduction. She can do the same thing for B. If she does both, this clearly would be good—everyone would be better off. However, on the deontologist account, both acts are wrong. Torturing one to prevent greater torture for another is morally wrong.
If it’s wrong to cause one person to experience one unit of harm to prevent 2 units of harm to another, then an action which does this for two people, making everyone better off, would be morally wrong (the toy ledger below makes the bookkeeping explicit).
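The starting torture levels here are made up; the only stipulated structure is that each adjustment adds 1 unit of harm to one prisoner to remove 2 units from the other:

```python
# Hypothetical starting torture levels for prisoners A and B.
harm = {"A": 10, "B": 10}

def adjust(helped: str, harmed: str) -> None:
    """One dial turn: +1 unit of harm to `harmed`, -2 units to `helped`."""
    harm[harmed] += 1
    harm[helped] -= 2

adjust(helped="A", harmed="B")  # Adjustment 1: wrong, says the deontologist
adjust(helped="B", harmed="A")  # Adjustment 2: also wrong
print(harm)  # {'A': 9, 'B': 9} -- both prisoners strictly better off
```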
There are lots of ways that deontologists might object to this, each of which Huemer refutes decisively. I won’t delve into the details of all of those. Huemer argues for another case, writing “Now, if one still thinks that Mary’s turning of the first dial entitles her to turn the second dial, consider a final torture scenario: Delayed Torture Transfer: As in the original Torture Transfer Case, except that Mary performs only Adjustment 1, harming A to help B. She then quits her job. Ten years later, Mary is working in a different prison, where, alas, more unjust torture is occurring. She comes upon a situation exactly like the one she encountered ten years ago. Coincidentally, the same two prisoners are involved, having been arrested for something else. This time, Mary decides to perform only Adjustment 2, harming B to benefit A. I take it that the standard deontological intuition is that both of Mary’s actions are wrong. In particular, Mary’s earlier unjust act is irrelevant to whether she may perform Adjustment 2 ten years later. But if so, then we should say the same when the time delay between the two actions is reduced--say, to one second.”
This seems like a pretty devastating objection. If you shouldn’t harm one to prevent a greater harm to another, then harming two to prevent 2 greater harms would be bad, even if it made everyone better off. This is immensely counterintuitive.
If you’re not convinced, read the original paper. Huemer is quite impressive when he argues for utilitarianism.
Another problem with deontology
This one is original to me
Let’s start by assuming one holds the following view.
Deontological Bridge Principle: You shouldn’t push one person off a bridge to stop a trolley from killing five people.
This is obviously not morally different from
Deontological Switch Principle: You shouldn’t push a person off a bridge to cause them to fall on a button which would lift the five people to safety, even though their body would not be able to stop the trolley.
In both cases you’re pushing a person off a bridge to save five. Whether their body stops the trolley or presses a button to save other people is not morally relevant.
Suppose additionally that one is in the Switch scenario. While they’re deciding whether to push, a genie appears to them and gives them the following choice. He’ll push the person off the bridge onto the button, but then freeze the passage of time in the external world so that the decision maker has ten minutes to think about it. At the end of the ten minutes, they can either lift the one person who was originally on the bridge back up, or they can let the five people be lifted up.
It seems reasonable to accept the genie’s option. If, at the end of ten minutes, they decide that they shouldn’t push the person, then they can just lift the person back up such that nothing actually changes in the external world. However, if they decide not to lift the person back up, then they’ve just killed one to save five. This action is functionally identical to pushing the person in Switch. Thus, accepting the genie’s offer is functionally identical to just giving them more time to deliberate.
It’s thus reasonable to suppose that they ought to accept the genie’s offer. However, at the end of the ten minutes they have two options. They can either lift up the one person whom they pushed before, to prevent that person from being run over, or they can let the five people be lifted up. It seems obvious that they should save the five people. But this is analogous to the Switch case, which is analogous to Bridge.
We can consider a parallel case with the trolley problem. Suppose one is in the trolley problem and a genie offers them the option for them to flip the switch and then have ten minutes to deliberate on whether or not to flip it back. It seems obvious they should take the genie’s offer.
Well, at the end of ten minutes, they’re in a situation where they can flip the switch back, in which case the train will kill five people instead of one, given that it’s already primed to hit the one. It seems obvious in this case that they shouldn’t flip the switch back. Thus, deontology has to hold that taking an action and then reversing it, such that nothing in the external world differs from if the action had never been taken, is seriously morally wrong.
If flipping the switch is wrong then it seems that flipping the switch to delay the decision ten minutes, but then not reversing the decision, is wrong. However, flipping the switch to delay the decision ten minutes and then not reversing the decision is not wrong. Therefore, flipping the switch is not wrong.
Maybe you hold that there’s some normative significance to flipping the switch and then flipping it back, making it so that you should refuse the genie’s offer. This runs into issues of its own. If it’s seriously morally wrong to flip the switch and then to flip it back, then flipping it back and forth an arbitrarily large number of times would be arbitrarily wrong. Thus, an indecisive person who froze time and then flipped the switch back and forth 10000000000000000^1000000000000000000000000000000000000000^10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000^100000000000000000000000000000000 times would have committed the single worst act in history by quite a wide margin. This seems deeply implausible.
Either way, deontology seems to run afoul of the deeply intuitive principle that taking an action and then undoing it can’t be very bad. Denying this is quite unintuitive. If you undo an action, such that the action had no effect on anything because it was cancelled out, that can’t be very morally wrong. Much like writing can’t be bad if one hits the undo button and replaces it with good writing, it seems like actions that are annulled can’t be morally bad.
It also runs afoul of another super intuitive principle, according to which if an act is bad, it’s good to undo that act. On deontological accounts, it can be bad to flip the switch, but also bad to unflip the switch. This is extremely counterintuitive.
I thought of this argument when I was thinking over how deontologists would respond to the question of “if you accidentally hit the switch in the trolley problem, should you switch it back?”
What's wrong with Kant's formula of universal law
I Kant endorse it
“If we look back upon all previous efforts that have ever been made to discover the principle of morality, we need not wonder why all of them had to fail. It was seen that the human being is bound to laws by his duty; but it never occurred to them that he is subject only to laws given by himself but still universal and that he is obligated only to act in conformity with his own will…”
—Kant being very humble
Kant’s formula of universal law has some superficial appeal. On this principle, one ought only act in accordance with maxims that could be willed to be universal law.
What the hell is a maxim? It’s not exactly clear. It’s sort of like a rule or a principle. So when I try to maximize well-being, that’s plausibly a maxim.
These objections are not original to me—Parfit is responsible for all of these in his excellent series “On What Matters,” which you should all go read immediately. Act on that maxim! I will it to be a universal law!
What do we mean by making it a universal law? Well, roughly that it could be done universally.
One way of interpreting a maxim is that it is the principle that you in fact act on. So if I take your pencil because I think it will maximize well-being, maximizing well-being is my maxim.
If we interpret it that way then it runs into problems. Egoism plausibly can’t be willed to be a universal law. Yet not every act done by egoists is wrong. If an egoist saves the life of a child merely to gain fame, that act is not wrong.
Similarly, there are many good acts that would be impossible for everyone to act on. If I act on the maxim “don’t have children to have more time to spend on protecting the future of humanity,” that maxim couldn’t be universalized. Yet that doesn’t make it clearly wrong. Similarly, acting on the maxim “go to that secluded forest,” doesn’t seem wrong but couldn’t be done by everyone.
The other way of interpreting a maxim is that it’s a principle that could be acted on to justify an action. Yet this runs into problems. Any act has infinitely many maxims that could justify it. Suppose I steal a pencil. I could act on the contradictory maxim ‘always steal,’ or I could act on the non-contradictory maxim ‘I always get to steal from the particular person from whom I stole,’ or even the maxim ‘maximize well-being unless it conflicts with me stealing this person’s pencil, and then do that.’
For a maxim to justify an act it just has to include that act. However, the maxim can be tailored to include no other acts.
This view also runs into the problems given earlier. If I act on the maxim ‘go to a secluded forest to meditate,’ that is not universalizable, but it’s still not morally wrong. If I were to go on a date with Jane, that wouldn’t be objectionable based on the impossibility of everyone simultaneously going on a date with Jane.
It’s also not clear what the justification is for this principle. Why would we accept that taking an action is wrong if everyone couldn’t do it? It’s not at all clear.
Parfit makes more objections, but these seem enough to refute it decisively.
Ayn Rand was off the rails
My response to Ayn Rand
The Stanford Encyclopedia of Philosophy is written to be unbiased and merely to report the facts. It does this job quite admirably, avoiding partisanship and refraining from staking out strong opinions on controversial philosophical issues. And yet, despite this impressive objectivity—far more robust than that of the vast majority of allegedly objective sources—its article on Ayn Rand reads like a hit piece.
This is not intentional, nor is it caused by bias. It just happens to be the case that an objective portrayal of the facts necessarily pans Ayn Rand, the same way that an assessment of the Manson family will, even if written with maximal objectivity, leave most readers generally opposed to the Manson family. If it were truly being polemical, it could have been one of the most strongly worded articles written in the history of the world.
In this article, I shall review “The Objectivist Ethics,” in The Virtue of Selfishness. I intend to show that, contrary to the claims of her fervent advocates, Rand’s philosophical reasoning is very poor. Her rejection by mainstream philosophy comes not from an insular desire to ignore outsiders, but rather from the fact that her arguments are just bad.
Rand’s claim at the outset is that moralists have nearly all concluded that ethics is arbitrary convention. This claim is false unless we play very fast and loose with “nearly all.” Moral realism is the majority view. Even when it wasn’t, it wasn’t rejected by nearly everyone.
(It’s formatted so that you can’t copy and paste text, so some of the text will be quotes from Huemer’s quotes of it. Huemer has good contributions so read his article too.)
Rand seems to feel very strongly that all other philosophers aren’t interested in trying to use reason to get ethics, writing “No philosopher has given a rational, objectively demonstrable, scientific answer to the question of why man needs a code of values.” It’s unclear what she means by this. The obvious reason for having a code of values is because those values are good so people following them is also good. Sidgwick, Bentham, Mill, Aristotle, and thousands of others have argued for a particular ethical system. It straightforwardly follows from an ethical system being the correct account of the good that we should follow it.
Rand claims that most philosophers took ethics for granted and didn’t care about its objective implications. She gives no citation or justification for this. As Hitchens said, “claims made without evidence can be dismissed without evidence.” The dismissive attitude she takes towards other philosophers is amusing. A good philosopher does not claim to be the first since Aristotle to be opposed to irrationality and stupidity in ethics.
She next claims that the neomystics replaced the will of god with the good of society. I have no idea what neomystics she’s referring to, but ethical realists generally hold that a thing’s goodness has no relation to particular societies. A utilitarian like myself would hold that you should do what produces the most good, regardless of which society benefits. Neomystic is obviously a term of abuse, not an argument.
Rand is upset about those damn neomystic cultural relativists (which is apparently the main view in philosophy) holding that whatever society does is good. This is a view held by virtually no one except undergraduates prior to taking an ethics class. It’s obviously false because it entails that Nazi Germany was fine.
She then proclaims that this is the consensus view among philosophers, who declare that ethics is outside of the power of reason. I have no idea where she is getting this idea. Cultural relativism is held by approximately zero people. Maybe she’s including non-cognitivism and error theory in this category, but it’s still not a majority. This does, however, explain why she names her philosophy objectivism, despite there being oodles of other moral realist theories that claim morality is objective. Rand is under the delusion that she’s basically the only person who thinks ethics is objective. This confusion has resulted in lots of linguistic contortions for those of us moral realists who want to explain our view while also making it clear that we’re not Randian objectivists.
Rand next utters the incredible sentence “Whatever else they may disagree about, today’s moralists agree that ethics is a subjective issue and that the three things barred from its field are: reason—mind—reality.” This is false. Most philosophers are moral realists, according to the PhilPapers survey. There is certainly no universal agreement about subjectivism. Many philosophers, like Parfit, Nagel, Singer, Railton, and hundreds of others, think that morality is about reasons. I have no idea what it would mean for morality not to be about mind or reality. The subjectivists think that it’s mind-dependent, so wouldn’t that include mind? It’s not clear.
This strange succession of claims—unabashedly self-aggrandizing, unjustified, and fully orthogonal to correct historical information—makes Rand exciting to read yet devoid of any semblance of a decent argument. It would be like a mediocre historian proclaiming that “No one for centuries has investigated any pre-Enlightenment history. Whatever else contemporary historians agree upon (oh, by the way, they’re also mystics, forgot to mention that), they all agree that the pre-Enlightenment world was a black box that historical methods can’t investigate.”
She next blames this subjectivism for the world collapsing, which seems to overestimate the role that academic philosophers have.
Next she says “‘Value’ is that which one acts to gain and/or keep.”
This is obviously false. One who is being tortured yet cannot prevent their torture is still experiencing something profoundly disvaluable, yet they cannot act to prevent it. Similarly, the elegance of mathematics is something that I think is valuable, yet I don’t act to gain or keep it. Finally, this presupposes value subjectivism. Something could be valued by no one but still be valuable. If a person does not care about their future joy, that wouldn’t render their future joy not valuable.
Rand further claims that “The concept ‘value’ is not a primary; it presupposes an answer to the question: of value to whom and for what? It presupposes an entity capable of acting to achieve a goal in the face of an alternative. Where no alternative exists, no goals and no values are possible.”
This presupposes value subjectivism. One could find something valuable without it actually being valuable, and vice versa. It might be valuable for me to meditate, even if I don’t find meditation valuable. Presumably Rand would hold that her account of value would remain correct even if no one valued the things it identifies as valuable. Thus values don’t require a valuer.
Values also don’t presuppose the ability to affect what is valuable. I think that the truth of utilitarianism is very valuable—it would be a shame if pleasure didn’t matter. However, I do not act to make pleasure matter. This is because its mattering is not dependent on me, or anyone else, valuing it.
Attempting to present a dichotomy, Rand says “There is only one fundamental alternative in the universe: existence or nonexistence...” It’s not clear what she means by this. Obviously all things either exist or don’t. However, all things are also either red or not red. That does not mean that redness is some fundamental alternative with meta-ethical implications. She then says this alternative pertains only to living organisms. It’s not clear what this means. She says life can choose to exist or to cease to exist, but life can also choose to cause inanimate objects to exist or to cease to exist. It’s not clear why this has normative implications.
She then claims that life is a prerequisite for value. This is false: a sentient robot would possess moral value. If atoms could suffer, they would possess value.
She next says "To make this point fully clear, try to imagine an immortal, indestructible robot, an entity which moves and acts, but which cannot be affected by anything, which cannot be changed in any respect, which cannot be damaged, injured, or destroyed. Such an entity would not be able to have any values..."
This is obviously false. If the robot experienced brutal torture, that would still be bad and would bring about things that have negative value. The happiness I get from understanding the words other people say is not something I can control, but it still has value. There is no argument for this bold claim, as is true of many of Rand’s claims.
Next, the argument is made that only a living entity can have goals and that those goals all have to be directed towards the maintenance of the organism’s life. This is false—lots of organisms (especially people) have altruistic motivations and care about other people, not just their own lives. She also doesn’t explain why having goals immediately confers value. I can imagine a being with goals that does not experience pleasure or pain and is thus not a locus of value. Finally, even self-interested people have goals other than maintaining their lives. Some self-interested people commit suicide, and nearly everyone aims at their happiness, even when it doesn’t increase their survival.
The fact that x is a prerequisite for making choices doesn’t mean that all choices should aim at maximizing the probability of x. Oxygen is a prerequisite for me writing this essay, but that does not mean that this essay’s primary aim should be to increase the oxygen in the world.
She continues, claiming that only living things have goals: “On the physical level, the functions of all living organisms, from the simplest to the most complex—from the nutritive function in the single cell of an amoeba to the blood circulation in the body of a man—are actions generated by the organism itself and directed to a single goal: the maintenance of the organism’s life.”
This is false. Rand herself often complains about people’s excessive altruism, which is hardly action directed at maintaining one’s own life. Amoebas aren’t sentient, so they don’t have goals. Also, goals are not inherently good. Valued is not the same as valuable; we can value things that don’t really matter. People sometimes sacrifice their lives for the greater good.
She then says “An organism’s life depends on two factors: the material or fuel which it needs from the outside, from its physical background, and the actions of its own body, the action of using that fuel properly. What standard determines what is proper in this context? The standard is the organism’s life, or: that which is required for the organism’s survival.”
This relies on the assumption that organisms should only care about their own lives, which is false. Consider some counterexamples; a quick expected-value sketch follows the list.
1 Taking a 1 in 1000000000000000000000000000000000000 risk of death to save the entire continent of Asia.
2 Taking a 1 in 10000000000000000000000000000000000000000 risk of death to prevent one from enduring infinite torture.
3 Sacrificing one’s life to save their entire family.
4 Crossing the street to go to a party, despite it risking death.
5 Eating slightly unhealthy food, despite health risks.
6 Going on a date, despite it increasing the risk of being brutally murdered.
All of these prove the principle false. Rand has also, disappointingly, left her thesis wholly unjustified.
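To make the problem concrete, here is a minimal expected-value sketch of counterexample 1 (the symbols $p$, $V_{\text{life}}$, and $V_{\text{Asia}}$ are my own illustrative labels, not Rand’s). If survival were the sole standard of value, the proper choice rule would be: prefer act $a$ to act $b$ whenever $P(\text{survive} \mid a) > P(\text{survive} \mid b)$. That rule forbids taking the gamble, since any nonzero death-risk $p$ strictly lowers survival probability. But the sane accounting is

$$(1-p)\cdot V_{\text{Asia}} - p\cdot V_{\text{life}} > 0 \iff p < \frac{V_{\text{Asia}}}{V_{\text{Asia}} + V_{\text{life}}},$$

a condition trivially met when $p$ is as vanishingly small as in examples 1 and 2, however large one sets the value of one’s own life. The survival-only rule and the sane verdict come apart, which is all the counterexamples need.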
She then says “No choice is open to an organism in this issue: that which is required for survival is determined by its nature, by the kind of entity that it is.”
To the extent that it’s derived from natural facts, this commits the naturalistic fallacy. Proving that organisms do x in nature doesn’t show that x is good. Malaria is natural and bad. Even if an organism’s nature necessitates something, that doesn’t make that thing good. Imagine creating a species of beings who all aim at torturing and murdering. Surely that wouldn’t make it morally good for them to torture and murder.
She then repeats her thesis a few times in question-begging ways. She then explains that goals are needed, which is true but doesn’t show why her particular goal is the one that’s needed.
She then says “Metaphysically, life is the only phenomenon that is an end in itself: a value gained and kept by a constant process of action.” This is false—happiness is treated as an end and can be constantly maintained by action. It’s not clear what adding “metaphysically” gets her here, or why any of this matters.
She then says that the is-ought divide is not a problem because an entity’s existence necessitates survival as a value. Perhaps existing requires an entity to value its survival, but that doesn’t make its survival ultimately valuable. This does not bridge the is-ought gap.
She then says that we become acquainted with value through pleasure and pain. I agree with this, but her view would seem to entail only caring about one’s own life, rather than one’s own pleasure. She then says pain signals threats to one’s body and pleasure signals things good for one’s body. This is true, but it fails to explain why pain matters. If one experienced horrific suffering in ways that didn’t affect their survival, that would still be bad. Avoiding pain is not just a means towards survival; it’s an end in itself.
Rand argues next that consciousness is the means by which we survive. It’s certainly a big part of it, but it’s not clear exactly what this means or why it matters. She says this means we need conceptual knowledge to survive, which is false—some people stay alive while not understanding concepts in any depth. This also establishes nothing normative.
She then says this means we need to use reason. I’m all for using reason, but it doesn’t seem like we should use it just because it increases our survival. Even in cases where it doesn’t, it’s still reasonable to be reasonable.
Rand’s next unjustified inference is that this means we need ethics to survive, which is false—error theorists survive just fine. Following this up, she makes the odd claim that humans who are not self-sufficient but who parrot others rely on others for their existence, which is bad because… they’re parasites, which are bad because… it’s not clear. Everyone relies on their parents to exist. That has no normative implications.
Rand argues next that those who survive by force or fraud are parasites. She doesn’t justify why this has normative implications, other than the fact that she has a derogatory term for them. Following this up, Rand claims that humans who use force can’t survive because they are using animalistic methods, much like a tiger trying to survive by reasoning wouldn’t survive. However, the empirical fact that people who use force are not all dead disproves this. It’s not clear why using force is unique to animals or runs afoul of reason.
Rand next claims that man’s values are his goals. This rests on unjustified assumptions. She then rejects utilitarianism based on its requiring service to others, which is true, yet she doesn’t explain why people shouldn’t try to help others.
Rand follows this up with the bizarre assertion that hedonism and altruism claim that people can only be happy at others’ expense. This is trivially false. Altruism just says that one should sometimes sacrifice their happiness to make others very happy. Rand is wrong to reject this. Consider a few counterexamples to her theory.
1 You can take a 1 in 100000000000000000000000000000 risk of death to save a parallel world with lots of happy people, who you don’t care about.
2 You see a child that you don’t know or care about very much drowning in a pond. You can save their life but you’d get all wet.
3 You can press a button to save a continent but you don’t know anyone in that continent and you have arthritis so it would hurt your hand a little bit.
4 You can endure a pinprick to save an alien civilization.
All of these seem decisive.
She then says that rational self-interests never clash. This is obviously false. If one person is deciding whether to donate to save another’s life, there is a conflict between their self-interest and the interests of the other person. Donating would make the other person’s life better, but not donating would make the donor’s life better.
She next argues that trade is the only way to justify helping others. This is false—if you could save a parallel world, even for nothing in return, you should do so.
Ultimately, Rand’s arguments are strings of assertions loosely tied together by fallacious reasoning, with little justification, as she haphazardly jumps from false claim to false claim. Her philosophy should be utterly unpersuasive to everyone. Trying to make it a syllogism would result in the following.
1 Other theories are neomysticism, which claims that what matters is societal will
2 Value is that which we aim to gain or keep
3 Value requires a valuer
4 We have to either exist or not exist
5 Life is a prerequisite for value because robots wouldn’t have values if they were indestructible and omnipotent
6 Therefore all things must be aimed at preserving life
7 Therefore life is metaphysically an end in and of itself
8 Therefore we must use reason
9 If we didn’t use reason we’d be parasites so we really must use reason
10 Therefore one’s aims are what are really valuable
This is not an argument. It’s a stream of repeated confusion stacked on top of itself. I have no idea why anyone finds it persuasive.
GPT-3 is an AI that can write essays arguing for things. I asked it to argue for ethical egoism. It was waaaaaaaaaaaaaay more persuasive than Rand. It wrote:
“In any given situation, individuals face a choice between acting in their own self-interest or in the interest of others. The philosophy of rational egoism holds that the best choice in any situation is to act in one's own self-interest. This may seem counterintuitive, but a closer examination of the philosophy reveals that it is the most logical and rational course of action. Rational egoism is based on the belief that individuals are the best judges of their own interests. In any situation, individuals are in the best position to determine what is in their own best interests. This is not to say that individuals always make the best decisions, but that they are in the best position to do so. Another key principle of rational egoism is that individuals are rational beings. This means that they are capable of making reasoned decisions based on their own self-interest. In other words, individuals are capable of thinking for themselves and of making decisions that are in their best interests. Rational egoism also holds that individuals are the best judges of the consequences of their actions. In other words, individuals are in the best position to know what the consequences of their actions will be. This includes both the positive and negative consequences of their actions. Finally, rational egoism holds that individuals are the best judges of the means to their own ends. In other words, individuals are in the best position to know how to achieve their own goals. This includes both the means and the ends themselves. When all of these principles are taken together, it is clear that rational egoism is the most rational and logical philosophy. It is based on the belief that individuals are the best judges of their own interests, and that they are capable of making reasoned decisions based on that information. It also holds that individuals are the best judges of the consequences of their actions and the means to their own ends. This makes rational egoism the most rational way for individuals to live their lives.”
Ayn Rand was not a fan of C.S. Lewis. She called him an “abysmal bastard,” a “monster,” demonstrative of “the cheap, awful, miserable, touchy, social-metaphysical mediocrity!,” a “bastard” again, an “old fool,” a “lousy bastard,” a “mystic,” “monstrous,” a “God-damn, beaten mystic,” a “bastard” yet again, and “abysmal scum.” Scott Alexander expressed that reading MacIntyre left him with a similar desire to write a stream of invective in the margins, based on how poor the arguments were. Ironically, reading Rand left me with this desire more strongly than any other author I’ve read. Her ironclad confidence in every single claim, combined with her belittling of her detractors, makes her incredibly frustrating to read. It seems even the SEP has found that desire to criticize her irresistible.
I’ll leave my string of insults rather brief. Rand is disastrously wrong in nearly every sentence she writes. She expresses near-complete ignorance of the history of philosophy. She belittles those who disagree with her in a rather childish way. Her arguments are sufficiently poor as to be persuasive only to a sparse selection of undergraduates, which is why she’s not taken seriously in philosophy. Her attempt to deduce a priori philosophical principles, squeezed out of definitions and descriptive claims, is bound to fail. Moral claims have to be substantive; they can’t just follow from the definitions of words. Contrary to the aim of her philosophical project, Rand is no advocate of reason. She is antithetical to the project of reasoning about ethics, preferring to claim that she’s the first to undertake the project, and then spectacularly failing to uphold the burden of providing an objective justification for ethics. Those inclined to use reason to understand ethics would do well to read Parfit over Rand, and to ignore all of her utterances.
Unreason never had an advocate as vociferous, or as beneficial to its cause, as Rand.
Some hedonist intuitions
Explaining why I think mental states are all that matters
The difficult and arduous task of figuring out the best theory of self interest would likely be a book length project. I have, however, already written about some considerations that count in favor of ethical hedonism1. I shall provide several more here.
One intuition motivating this that may not be shared by everyone is that consciousness is all that matters. As Sidgwick pointed out, a universe devoid of sentience could not possess value. The notion that for something to be good it must be experienced is a deeply intuitive one. Consciousness seems to be the only mechanism by which we become acquainted with value.
One might object that, while consciousness is required for things to possess any value, the value doesn’t have to be proportional to its effect on consciousness. This is true. However, if B can’t be had without A, that at least counts in favor of the view that A and B are the same. Additionally, the consciousness-based intuitions that I have seem to point to consciousness being the only thing that matters. It seems strange to imagine that I could be made worse off by something that doesn’t affect my conscious experiences at all.
Second, hedonism seems to be the simplest way of ruling out posthumous harm. Absent hedonism, things can be bad for a person after they die. This is, however, implausible. It would be strange to suggest that dead people can benefit or be harmed. This article fleshes out the view in greater detail.
Third, pleasure seems to be the thing that is most obviously good. Many people, like Parfit, provide intuitions to argue that things can have desire-independent relevance. All of the examples that are given are examples of one not caring about their pleasures.
Parfit gives the example of a person with future Tuesday indifference—one who doesn’t care about their suffering if it occurs on a future Tuesday. Thus, they’d be willing to experience horrific suffering on Tuesday rather than a pinprick on Monday. Parfit also provides the agony argument—a person who does not desire to avoid future agony would be acting foolishly. The example that I find most persuasive is side-of-body-pulled-out-of-a-hat indifference. This person, at the start of each day, pulls either “left” or “right” out of a hat at random. For the duration of the day, they are indifferent to suffering that occurs on the side of their body that they pulled out of the hat.
If they pulled out “left,” they are indifferent to suffering that occurs on the left side of their body. If they pulled out “right,” the opposite. Suppose this person pulls out “right” at the start of a day. They are then given the option of having the right side of their body burned with a hot iron or having a pinprick on the left side of their body. They choose to be burned with the hot iron. This seems clearly irrational.
One might object that this wouldn’t be irrational if they didn’t truly desire to avoid being burned with a hot iron on the right side of their body. However, they still have identical qualia to what any of us would experience if we were burned with a hot iron on one side of our body. This person merely has no preference for avoiding that type of pain. We can even stipulate that when they’re burned with the hot iron on the right side of their body, they are hypnotized to think that they pulled “left” out of the hat, such that their experience is identical to what it would have been if they had, in fact, pulled “left” out of the hat. This choice seems clearly irrational, despite them acting in accordance with their desires.
However, cases like this almost exclusively appeal to hedonic experiences. One who is indifferent to being plugged into the experience machine would not seem to be irrational. Thus, the case for desire-independent relevance to well-being is far more robust for hedonic states than for anything else.
Fourth, other theories seem to have trouble explaining why experiences that produce no pleasure are not valuable, while ones that do produce pleasure are.
Beginning with preference views, scenarios like the ones described above show that preference views can hold that people are benefitted by things that decrease the quality of their mental experiences. In the cases above, the person who experiences but does not desire to avoid agony, or is indifferent to future Tuesdays, or is indifferent to suffering that occurs on the right side of their body, has no desire to avoid these sufferings. However, it still seems bad to torture people on future Tuesdays, despite their indifference to it.
It might be objected that one has to desire the experience of pleasure and the avoidance of suffering. This is, however, false. As Parfit showed, people can be indifferent to their suffering. Sinners who believe they’ll be condemned to hell frequently go on sinning, despite it being bad for the quality of their experiences. I’ve certainly had times when I was experiencing something harmful but didn’t want it to stop, out of stubbornness. Additionally, these views try to account for higher-order desires: one who wants to help their friend even at their own expense would not be acting irrationally. But if these views accommodate placing extra importance on higher-order desires, then when someone has a higher-order desire which renders them overall indifferent to experiences of pain, these views would be unable to condemn that indifference.
An additional point is worth making. Very often people experience unpleasantness without any particular desire for it to stop. I might be hungry but not focusing on my hunger. Thus, I have no unfulfilled desire for my hunger to stop. It’s not clear how desire theories make sense of the badness of this hunger.
They might hold that what matters is the desires that I would have about things if I focused on them. This, however, runs into issues. If I focused deeply on any world tragedy, I would have a desire for it to stop. However, it seems strange to say that it is bad for me that a tsunami hits some far away place, to the extent that it doesn’t distress me.
One who committed a heinous crime before developing a concern for others and developing retributivist intuitions may hold that they deserve to suffer. Thus, they have no desire for their misery to cease. However, it still seems intuitive that their misery is bad for them. This is a real world case in which people have no desire for the cessation of their agony.
A final point is worth making. There is no conceptual relationship between suffering and one desiring for it to stop. A desire is something that one aims for. While the nature of suffering provides people a reason2 to desire its cessation, there is no necessary relation between suffering and desiring for it to stop. Given that one is a reaction and one is an experience, it is hard to see why they would necessarily be linked.
One might simply define suffering as experience that people desire the end of. However, this definition has several problems. First, it runs afoul of common usage and of intuitions about several of the aforementioned cases. The retributivist who believes he deserves to be in pain and is burned with a hot iron is clearly suffering. The same is true of the other examples given. The person with no desire to avoid pain on future Tuesdays is still clearly suffering when they’re burned with a hot iron on a Tuesday while hypnotized to believe that it’s Monday.
Second, people can conceivably experience self-loathing, wherein they don’t value their own happiness. A depressed person may not care about being happy. However, it would be strange to suggest that being happy doesn’t produce pleasure for them.
Third, one can imagine a person with an addiction. They may have no desire to gain pleasure from an experience, though it would still be pleasurable.
A similar problem is faced by the objective list theorist. OLT is, in my view, far more plausible than preference utilitarianism. The most plausible version is the one given by people like Yetter Chappell, according to which pleasurable experiences are more valuable if they’re achieved by appreciating something that is in fact good. He thinks the most plausible candidates for good things are relationships with other people. Thus, one’s happiness caused by appreciation of a loved one is more valuable than happiness garnered by eating ice cream.
This view, however, runs into the following dilemma. Does pain alleviation caused by appreciation also bring about the same type of objective value? If the answer is no, this introduces a strange asymmetry. Suppose that one is experiencing a headache with 5 units of pain. Appreciating another person could either give them 5 units of pleasure to offset the pain, or could eliminate the 5 units of pain. On the asymmetric account, it would be better to experience the 5 units of pleasure. This is implausible. Pleasure is not more valuable than comparable pain elimination, regardless of the source.
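A bit of hedonic bookkeeping makes the asymmetry stark (the ledger below is my own illustrative framing, not Yetter Chappell’s):

$$\text{Option A (add offsetting pleasure): } -5 + 5 = 0 \qquad \text{Option B (eliminate the pain): } -5 \to 0$$

Both options leave the subject at exactly the same net hedonic level, yet the asymmetric account ranks A above B solely because the appreciation routes its effect through pleasure rather than through pain relief.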
However, if we say the opposite, then one can benefit from something despite it producing no pleasure for them. Suppose a friend of mine writes a birthday card to me that is intended to be nice, but that says certain things that I find hurtful. However, knowing that my friend intended to help and be kind makes me not be offended, though I get no joy from reading the letter. My appreciation of my friend’s attempted kind gesture only serves to blunt the hurt it would have otherwise caused, but brings no joy. It would seem strange to suggest that I am made better off by reading this letter that doesn’t make me happy at all. However, on OLT, I am made better off. This is counterintuitive and runs afoul of Sidgwick’s principle that something can only benefit you if it brings you joy.
Additionally, we can add that reading the letter brings me a small amount of suffering. Suppose that reading the letter gives me a papercut that is a little bit painful. On OLT, I would still benefit from it. This already seems very counterintuitive. However, it can be made much more so.
Hedonist accounts hold that if I were given a pleasurable experience, had my memory erased, and then experienced it again, the repetition would be good for me. OLT accounts would similarly seem to have to hold that appreciating good things over and over again is especially good. Thus, if I could be given the experience of appreciating my friend’s gesture and reading the letter that gave me a papercut and that I didn’t enjoy reading 100^100^200 times, the total value from this would exceed all of the value in the universe. This is deeply implausible. Experiencing 100^100^200 unenjoyable readings and papercuts does not possess enough value to offset the badness of the Holocaust an unfathomable number of times.
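The aggregation step here is just multiplication (the symbols $v$, $N$, and $B$ are my own illustrative labels): if each repetition contributes some fixed objective-list value $v > 0$, then $N$ repetitions contribute $N \cdot v$, and

$$N \cdot v > B \quad \text{whenever } N > B/v,$$

so for $N = 100^{100^{200}}$, the total swamps any fixed bad $B$, however tiny $v$ is. The OLT can escape only by denying that repetitions keep contributing value, which is the move examined next.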
The Objective List Theorist may hold that there is some limit, such that one only benefits from an experience the first time they experience it. This, however, is either implausible or inadequate to avoid the objection. If they hold that the same type of experience cannot possess value beyond the first time it’s experienced, then many things that Objective List Theorists believe are valuable would not, in fact, be valuable. If one is hanging out with their friends, OLT holds that this is valuable, even if they’ve had similar experiences with those friends before. If experiences are only valuable the first time, this extra interaction would not be valuable.
They might hold instead that the same experience happening multiple times cannot generate objective-list value. However, this is no answer to the broader objection, and it is implausible besides. Suppose that the plot of Groundhog Day were true, and at the end of each day, the day got reset and relived. This wouldn’t seem to eliminate the value of people being in love. Additionally, we can stipulate that each time they write the letter it’s different in subtle ways, such that the same experience is not, in fact, repeated.
Going back to the earlier point, even if the OLT is able to work out some account of why repeated experiences are not valuable that boxes out the repeated letters but includes the repeated friend interactions, we can add an additional stipulation. Each time, the letter is written by a different person. Suppose this person is very popular and has 100^100^200 friends, each of whom writes the same general letter. In this case, no experience is repeated. However, the conclusion is still deeply implausible.
The Objective List Theorist could finally reject some form of aggregation and say that lots of small goods can’t add up to be as good as one single very great good. However, as I have argued (as has Norcross), this is very implausible. The Objective List Theorist presumably holds that the experience of opening the letter produces more value than is lost when a dust speck enters a person’s eye. If, as I have argued, oodles of dust specks entering people’s eyes produce as much disvalue as lots of tortures, then enough people opening letters that they don’t enjoy and getting papercuts would be good enough to offset lots of tortures. This is implausible.
The Objective List Theorist could bite the bullet and argue that our intuitions relating to huge numbers are unreliable, as I have argued when claiming that lots of bad things can outweigh a single very bad thing. However, in the case I gave, I argued we had lots of plausible independent reasons to accept the utilitarian conclusion. The OLT would be hard-pressed to come up with a similarly adequate justification that appeals to principles as plausible as the ones that I gave.
The failures of our intuitions in certain cases relating to large numbers certainly cuts against the reliability of those intuitions. However, that doesn’t mean we can throw them out entirely.
Additionally, in this case, the counterintuitive result came merely from aggregating lots of instances of something that the OLT holds is good. The unintuitiveness stems less from the aggregative intuitions than from the per-case verdict that one is benefitted, rather than harmed, by reading a letter one doesn’t enjoy and getting a papercut.
Even if one is convinced that our intuitions relating to very big numbers are very unreliable, they should still rely on the intuition prior to aggregation. The intuition that one is made worse off, rather than better off, by getting a papercut from a letter they don’t enjoy reading is fairly strong. Aggregation just makes the point more salient.
The OLT might object that hedonism has the opposite, equally unintuitive implication: that one should torture and kill lots of people to prevent someone from getting a papercut from a letter they don’t enjoy reading 100^100^200 times. However, I’ve already defended similar judgements. Additionally, this just comes from aggregating a judgement about some action being bad. The OLT is the one who holds the counterintuitive judgement about the moral status of the individual action.
Another related worry arises for OLT of the type described earlier. Suppose that friend A gives me book A and friend B gives me book B. However, I get confused and think that friend B gave me book A and friend A gave me book B. As I’m reading book A I have a deep appreciation for friend B, and as I’m reading book B I have a great appreciation for friend A. In this case, it would seem that this confusion doesn’t make me worse off. However, on OLT, it seems it would have to. Given that this confusion causes me to be mistaken about the facts, such that when I think I’m appreciating some experience, I’m not truly experiencing the thing on my objective list that I think I am, it would prevent me from getting objective-list value. This is implausible. It seems strange that there has to be parity between the appreciation of some event and that event actually existing for a person to be benefitted by it.
A similar case raises a similar point. Suppose I mistakenly believe my friend did something nice for me, which I appreciate, while not realizing they did a different nice thing, which actually benefits me but which I don’t appreciate. For example, suppose my friend bakes me cookies, but I mistakenly think they baked me brownies. When I eat them, my appreciation is based on a false assumption. Much like in the experience machine, my experience of appreciation is divorced from the actual facts of the world. However, this doesn’t make it less valuable.
The OLT might object that even if I’m wrong about the details, I’m still right that this is the type of thing for which I ought to appreciate my friend. Much like appreciating a friend would still have OL value even if you are wrong about certain facts about them, so long as they still have the defining characteristics that cause you to appreciate them, mixing up the particular food that was baked is an erroneous detail, not relevant to OL value.
However, we can modify the case. Suppose that friend A bakes me cookies, but I think friend B baked them. I appreciate friend B. Given that this is based on a false pretense, the OLT would have to hold that this lacks OL value. However, suppose additionally that ten years later friend B bakes me cookies, but I get confused and appreciate friend A. OLT would hold that these confusions leave me worse off. This is implausible.
The OLT might hold that what matters is the parity between there being something worth appreciating and someone appreciating something. On this view, it is irrelevant whether I am appreciating the thing that is actually good. If I am appreciating something, and there is also something worth appreciating, I gain OL-based value. However, this yields implausible conclusions.
Suppose that I am in the experience machine because my wife, who cares about me greatly, decided to put me into it. I did not know about this, but I would have consented. Moreover, I was on the brink of death prior to being plugged in. I gain sufficient joy from being plugged in for it to benefit me on OLT. In this case, there is something truly worth appreciating behind every good experience I have. However, it would be odd to suppose I gain OL-based value every time I appreciate an imaginary person who exists merely in my mind.
Consider a final case. Suppose I am talking to a person at a costume party. I falsely believe them to be friend A. They are really friend B. I appreciate their contribution to the conversation. This would seem to be valuable, despite the false pretense. Now suppose that they are in fact the brother of friend A, whom I don’t know (and they’re not intending to be deceptive). This still seems valuable. Now suppose that they’re a robot that is a copy of friend A but is not conscious. It seems hard to hold that this interaction would not be valuable while the others would.
Let’s say they were a copy of friend A who was half as sentient. Would the value be lower? These questions seem to pose major hurdles for OLT.
Sidgwick held that good things have to be appreciated—an unobserved galaxy would not be better if it were beautiful than if it were ugly. Hedonism just extrapolates that trend. Much like it’s hard to compare two states of the world that aren’t observed by anyone, it’s similarly hard to compare two states of the world in which there are differences which no one is aware of. If no one experiences a difference between two states of the world, those states of the world aren’t morally different.
For clarificatory purposes, ethical hedonism is the view that the only thing which matters is pleasure, or desirable mental states.
I use “reason” in the indefinable counting-in-favor-of sense here, not in the motivational sense
Hsiao's defense of animal cruelty fails
Responding to a phil paper arguing that it's fine to brutally torture animals for food
Many people describe Kershnar as holding horrific views and getting things wrong across the board. I certainly have many disagreements with Kershnar, but I also find myself in agreement with him on a wide range of topics. However, there is one philosopher whose views I find repugnant1, who I think gets things wrong across the board. That philosopher is Timothy Hsiao.
Hsiao is a theist who is in favor of marijuana being illegal, anti-homosexuality, anti-contraception, pro-natural law, and pro-factory farming. I thought it would be worth responding to one of his papers—the one where he justifies harming animals. Hsiao’s aim is, as he says, to “…argue that animal suffering is not morally salient, and that animals lack the required features necessary for membership in the moral community.”
Hsiao objects to the view that animal suffering is bad by writing “However, the move from being harmed to being wronged is too quick. While pain experiences are no doubt harmful to the being who experiences them, this fact does not by itself establish that pain is harmful in a moral sense. A harm in its most general form is simply a setback to something’s welfare, and there is a distinction to be drawn between moral and non-moral harms. Not every violation of something’s welfare counts as a moral evil. Suppose I cut a rose off a bush. My cutting the rose has harmed the bush, but I have not wronged it. If I introduce malware into a computer, I harm the computer by compromising its ability to function properly, but it is not wronged by my act of introducing malware. A lack of motor oil is bad for an engine, but not morally bad. Fertilizer is good for a plant, but not morally good. Just as there are moral and non-moral senses of goodness and badness, there are likewise moral and non-moral senses of harm.”
What? Several things are wrong here.
1 There are three main theories of well-being: desire theory, objective list theory, and hedonism. None of them hold that a rosebush is harmed by having a rose cut off.
2 Something is harmful to a being if a fully rational version of that being would want it not to happen. Imagine that the pig had a fully rational guardian angel who had its interests at heart. The angel wouldn’t want the pig castrated without anesthetic. I am currently a pretty rational being, albeit of course not fully rational. If you gave me drugs that made me very irrational, I would still, right now, rationally judge that it would be better for my irrational future self not to be in pain. However, if you turned me into a rosebush, I would, right now, rationally hold no preference against that rosebush having its roses cut.
Hsiao continues, writing “Hence, if the ability to feel pain is a value-conferring property, then what makes it value-conferring cannot merely be the fact that it is a harm. Otherwise, this would render all harms as moral harms. There must be some further fact about pain that explains why it is morally relevant in a way that other harms are not. This is not to deny that our own pain experiences are morally relevant. The reason why our own pain experiences are morally relevant has to do in part with their harming us, but a bare appeal to harm cannot provide a complete explanation.”
I think my account is adequate. If you turned me into a car, I currently wouldn’t care about what you then did to the car. If you gave me drugs that slowly turned me into a cow, I wouldn’t want to be sexually abused or bolt-gunned in the back of the head.
Hsiao later says, under the heading “No Moral Standing Without a Rational Nature”: “If neither sentience, consciousness, nor activity confers moral standing, what does? What are the necessary and sufficient conditions for being a member of the moral community, and why? The account of moral status that I will defend has been the traditional one: in order for a being to have moral status of any kind, it must have the capacity to reason. It is this feature that is the sine qua non of morality and moral standing.”
Several problems for this account.
1 What about severely mentally disabled people who can’t reason?
2 What about people who are intoxicated making them unable to reason well?
3 This sanctions animal cruelty across the board. Would it be okay to torture a dog to produce slight benefit for humans?
4 What about babies?
5 What about a non sentient AI who could reason impeccably?
Hsiao addresses these later, so I’ll flesh them out when turning to what he says about them. He next says
“To see how the two are intimately connected, we need look no further than to the concept of morality itself. The two essential objects of morality are knowledge and action. By knowledge is meant knowledge about what is good for oneself (“knowledge of purpose”), and by action is meant the free will to act on the basis of that knowledge (Oderberg 2000b; Reichmann 2000; Lee and George 2008; Lee 2009). Why knowledge and action? The answer is that the whole point of morality, moral theorizing, and moral living is so that we may be moral—that is, to determine the truth about the good life so that it may be pursued through activity. Whatever ancillary aspects it might have, morality, boiled down to its essentials, is fundamentally about pursuing what is good and avoiding what is evil. Pursuing good (or avoiding evil) requires knowledge that is meant to be applied, namely knowledge of what is good (or evil) and how it is to be pursued (or avoided). This means that a moral subject—that is, a being to whom moral duties are directly owed—must have the ability to pursue the good and therefore know what is good. Why? To say that someone is directly owed moral duties is tantamount to saying that his welfare is intrinsically morally valuable—that is, morally valuable for its own sake. But one’s welfare cannot be the locus of this intrinsic moral value unless it is morally salient. As we have seen from the many examples in the previous section, there are many things whose welfare is not morally salient and which for that reason lack intrinsic moral value. To have moral salience is to have the properties related to the concept of morality. This means having the properties related to pursuit of goodness and avoidance of evil, specifically the properties of being able to pursue good and avoid evil.”
This seems wrong in several ways.
1 Things can harm beings even if they aren’t aware of them. Babies don’t have knowledge of what’s good for them, but it’s still bad to hurt babies.
2 Beings without free will can also have moral salience. Imagine a sentient gargoyle which is supremely rational, but unable to move or do anything. This being has no free will, but it would obviously be bad to brutally torture it.
3 Morality isn’t just about figuring out what is a good life. One should sometimes do things that make their lives worse, for the greater good.
4 Pursuing good may usually require some knowledge, but it certainly doesn’t require moral knowledge. Moral anti-realists don’t have moral knowledge—they don’t claim there is any such thing. However, they can still do good things. Similarly, a cow might not have a well-worked-out normative system, but that doesn’t mean it doesn’t find its suffering to be bad. As cases like Future Tuesday Indifference show, things can be bad for beings even if they’re unaware of their being bad.
5 The phrase “properties related to the concept of morality” is equivocal. Obviously a being has to be morally significant for it to be morally salient. However, beings can have moral significance even while lacking moral agency; see the gargoyle example above. Similarly, imagine a causally inert disembodied mind that experiences Holocaust-esque levels of suffering every second. It obviously cannot pursue good or evil, but its suffering is still bad.
Hsiao continues, writing, “Now, as explained earlier, one must be able to know the good and to act on it to have moral standing. But significantly, the only kinds of beings that can do these actions are rational beings. Understanding, articulating, and fulfilling the demands of morality, after all, require the ability to reason. Hence, only rational beings can have the properties needed for their welfare to be morally salient and thus for their welfare to be the locus of intrinsic moral value to which duties are directly owed. Beings who lack the inherent capacity to reason totally lack moral standing.”
His account before was false, as I previously explained. However, even if we accepted this, we wouldn’t only grant standing to rational beings. Rationality may be required for one to fulfill the demands of morality. However, it is not a sufficient condition. Consider the earlier example of the disembodied mind who can’t interact with the world. They can’t fulfill any obligation, yet they are rational.
This can be shown with a parallel. The only kinds of beings that can know the good and act on it are existent beings. By Hsiao’s logic, it would follow that all existent beings, rocks included, have moral status.
Two further problems arise from this same passage.
1 There’s literally no justification for the “inherent capacity to reason” criterion, rather than just the capacity to reason.
2 As was argued before, what matters is not being rational, but rather relating to things that a rational being would care about on your behalf. If I slowly became irrational in a Flowers of Algernon-esque scenario, I would currently, rationally regard the suffering that my irrational self would experience as being very bad.
Hsiao next says “Since animals are incapable of acting for moral reasons, it follows that animals do not have moral standing. If animals were rational in the sense required to be a moral subject, then they would be under moral duties to each other. The cheetah would have a moral duty not to hunt the gazelle. Owls would be under a moral duty not to kill mice. Not only that, but we would be under obligations to prevent animals (who cannot help themselves) from killing each other. All of this is absurd. We do not hold animals morally responsible or put them on trial for their apparent misdeeds. Animals kill and maim each other, but there is no moral significance to these actions. The animal world is amoral.”
Several points are worth making.
1 If we accept the distinction between moral standing and moral culpability, we don’t have to hold that animals are culpable merely because they have standing.
2 There’s a very obvious consequentialist reason for the difference. Animals are irrational, so they can’t be deterred by being held responsible. However, they can still do bad things. When an animal eats a human or a gazelle, that is bad. Animals are not moral agents, so we can’t really blame them, but what they do is still bad, in much the way that a tornado is bad.
3 I think it’s plausible that we should try to prevent animals from suffering in the wild. Lions go for the groin when they eat non-lions. If you have an ounce of sympathy for a sentient being having its genitals eaten by lions, you should support attempts to reduce wild animal suffering (if they were expected to be successful).
Hsiao continues, writing, “Another related objection—and one that is perhaps the most common argument against any attempt to deny moral standing to animals—is the argument from marginal cases. We can summarize the argument as follows: If animals lack moral status because they cannot reason, then newborns, infants, and those who are mentally disabled must also lack moral status. But since newborns, infants, and the mentally disabled do have moral status in spite of their inability to reason, the ability to reason is not necessary for moral status.
The response to this argument is simply that all humans, regardless of age or disability, have moral status because they possess a rational nature. That is, all humans, in virtue of being the kind of organism they are, possess a basic or root capacity to reason. This capacity may be immature, undeveloped, or impeded in the case of certain individuals, but this does not entail that the capacity is non-existent. What is lacking in so-called marginal cases is not the capacity to reason, but its manifestation. Consider a rock and a human being born blind. Both lack the ability to see, yet the human being’s lack of sight is deficient in a way that the rock’s lack of sight is not. This is because human beings, unlike rocks, ought to see. There is an inherent normativity in the human ability to see that is not present in the rock. What explains this normativity is the fact that all humans as such have basic powers and capacities oriented towards fulfillment. Hence, the root capacity to reason is relevant to moral standing in a way that its developed form is not, for root capacities provide a benchmark by which we can evaluate whether a thing’s life is going well or ill.
Without appeal to the normativity provided by these capacities, we would have no basis for our concepts of well-being, flourishing, maturity, immaturity, defectiveness, or disability. Root capacities are what dictate the growth, development, and proper functioning of an individual, and hence are present from the moment an individual begins to exist. Human embryos, for instance, develop in the way they do because their growth is teleologically directed from within by their nature. An embryo’s root capacities function as a blueprint that dictates how the process of biological development should proceed. Hence, all human beings, from conception to death, have moral status because they possess the root capacity for rational thought throughout every moment of their lives. On this view, there are simply no such things as marginal cases.”
Several problems arise.
1 What determines whether or not one has a rational nature? While it’s true that even disabled humans share species membership with beings most of whom are rational, it’s not clear why this categorization is the relevant one. For any irrational being, we can construct a set that includes that being and rational humans. For example, consider the set containing a random zebra and all humans. That set has a rational nature, so by parallel reasoning the zebra’s irrationality is a mere defect. It’s not clear why Hsiao is entitled to use species membership, rather than some other grouping, to categorize beings.
2 There are lots of related sets that we could categorize such beings as belonging to. For example, we can say that a severely mentally disabled person is part of the set “severely mentally disabled people,” which does not have rationality as a constitutive feature. A terminally ill baby can be part of the set “babies who won’t become rational,” which does not have constitutive rationality.
3 Imagine we stumbled across a planet of beings who looked like humans but had the IQ of severely mentally disabled people. Would it be okay to eat them?
4 Imagine additionally that Satan places a curse on some people, making it impossible for them to ever be able to reason. Would we be okay with eating them? They cannot be rational at any point.
5 Imagine additionally that all mentally disabled people’s brains were transferred into the bodies of cows. Would they stop being morally relevant? Additionally, what if a few cows were rational? How many cows would have to be rational for the other cows, which are not, to be given moral consideration?
Hsiao continues “Puryear (2016) objects that it makes little sense to speak of root capacities if the requisite physical organs are destroyed or missing. Consider the case of a human being born without eyes. Puryear objects that such an individual lacks even the root capacity to see because he lacks the required physical organs to see. But this simply confuses the vehicle through which a capacity is expressed with the capacity itself. The person born without eyes lacks the physical organ through which sight should be expressed, but he still ought to see, and what explains this oughtness is the existence of a root capacity that cannot be reduced to any physical organ. If this capacity were not present, we would be unable to explain why there is anything bad or defective about the condition of someone who is born without eyes. Indeed, the root capacity to see is what explains why humans develop eyes to begin with. The very reason why human development proceeds in lawlike ways is because of root capacities that direct the process of human life. Without appeal to these capacities, human development becomes completely inexplicable. In this sense, root capacities are prior to their characteristic organs in the same way that a blueprint is prior to a house. The fact that a blueprint may never be realized for whatever reason does not imply that the blueprint itself does not exist.”
What? The reason it’s bad that some people can’t see is that seeing is useful. We don’t think it’s bad for people to be born with unusually high IQs or unique talents, even though these deviate from the species norm, because they’re beneficial. The reason humans develop eyes is that seeing is biologically useful. If a human were born able to fly, that would be a unique feature that goes against humanity’s intrinsic nature, but it wouldn’t be bad.
Hsiao concludes: “In light of the foregoing arguments, we can conclude that animals lack moral status. Since this entails that animals do not have any kind of right to noninterference on our part, it is morally permissible for us to use them for our own purposes. Animals may be used not just for food, but for experimentation, clothing, assistance, companionship, and a variety of other purposes. However, this is not to say that we are permitted to do literally anything we want to animals, anymore than the fact that we can use our property for our own purposes implies the right to do whatever we want with it. We may not have duties directly to animals in respect of their welfare, but we do have duties to ourselves and each other that require us to respect animal welfare. We may not torture, abuse, or otherwise mistreat animals, for in doing so we corrupt our own character, which also disposes us towards cruelty against our fellow humans.”
Several issues with this.
1 Suppose that Mr. Hsiao learned that a trillion animals were slowly having their skins ripped off by razors, a process that inflicts profound agony and eventually kills them. He could prevent all of it by pressing a button. However, doing so would cost 20 dollars, thus harming humans slightly. If the only reason harm to animals is bad is how it affects us, then he’d be under no obligation to push the button. In fact, he’d be wrong to push the button, because he’d be harming humans to benefit animals. This is, however, implausible.
2 Suppose additionally that there were two children torturing cats. One of them is torturing a real flesh-and-blood cat. The other is torturing a robot that looks like a cat but is not conscious. Hsiao can prevent one of these from happening. On his view, given that cats don’t matter at all, he should just flip a coin, since both acts have the same effect on people. However, this is implausible. Part of what’s bad about setting a cat on fire relates to the harm to the cat.
He says “It is not clear why someone who denies that animals have moral status cannot be committed to the thesis that animal cruelty is wrong in part because of what it does to the animal. Animals, after all, possess welfare conditions, and so acts of cruelty harm them in a real sense. Even though it is human beings (either oneself or others) who are wronged, it is not incorrect to say that animal cruelty is wrong in part because it harms animals, since it is through harming animals that persons are wronged. In other words, the harm dealt to animals is a necessary—but not sufficient—condition of what makes animal cruelty wrong. It is not necessary to attribute moral status to animals in order to explain the wrongness of animal cruelty in a way that references the well-being of the animal that is harmed.”
If we accept that animals don’t matter, it’s hard to account for this distinction. Setting a cat on fire is very bad. Setting a blade of grass on fire is not. If the cat and the grass have the same moral worth, those acts are morally indistinguishable.
He continues, writing “As a formal definition of animal cruelty, we may say that an act is cruel if it reveals a corrupt character or if it corrupts one’s character so as to make one more disposed to mistreating humans. Both animals and humans share the ability to suffer, and so taking delight in the suffering of an animal disposes one towards similar behavior when it comes to human suffering. But there is no straightforward way to give a catch-all list of the practices that meet this criteria. Since animals lack moral status, what counts as a cruel practice towards animals will be person-specific. A practice that evinces a cruel character for one person may not be considered cruel for another.”
Okay, so let’s imagine that a person sits on a cat and kills it. No one ever finds out about it, and the person’s character is unaffected. On Hsiao’s view, there’s no reason to prefer that the cat hadn’t been suffocated to death. This is implausible.
Next Hsiao says “Some maintain that there is an inconsistency between the way we regard animals that are dear to us and the way we regard those killed in industrial farms, and that this can be exploited to show the immorality of industrial farming. Consider a thought experiment that Norcross (2004) raises. We are asked to imagine a scenario in which the only way an individual can please his gustatory appetite is to confine, torture, and kill puppies in order to secure a hypothetical compound that has the power to satisfy his urges. When he is put on trial for his actions, he insists that he not a sadist, and that he takes no pleasure torturing the puppies. He defends his actions by appealing to the priority of human pleasure over animal well-being. Yes, it is unfortunate that the puppies had to suffer, but they are mere animals. Norcross observes that in spite of this defense, nearly everyone would still find his actions to be morally reprehensible. But, he goes on to argue, what is going on in this scenario is no different from what occurs daily in industrial farms. Hence, Norcross concludes that individuals have an obligation to cease eating industrially-farmed meat. I fail to see how we can generalize from Norcross’ specific example to industrial farming in general. Take the practice of using dogs (and cats) as food for consumption. Such a practice strikes many in Western society as disgusting, sickening, and inhumane. Why? The explanation is likely due to the fact that dogs are regarded as companion animals with special social value. Individuals in Western cultures have, through various social practices, conferred ‘‘honorary moral standing’’ to dogs and cats in virtue of the roles that they have set for them (Carruthers 1992; Scruton 2000). We relate to our pets in ways similar to how we relate with human beings. For many of us, dogs occupy the role of an honorary family member.25 Hence, mistreating an animal that is regarded as a companion is good evidence of a cruel disposition, which also explains why the practice of eating a dog in the United States is bound up with a certain kind of social stigma. But while dogs have been accorded this special social value, chickens, cows, pigs, and other animals commonly farmed on an industrial level do not. This explains why many people feel the way they do when confronted with the prospect of consuming dogs, but not other animals. So we cannot generalize from our revulsion towards the slaughter of dogs to industrial farming. The disgust we associate with eating dogs can be adequately explained without having to attribute moral status to them. This is not, as Norcross suggests, to appeal to some crude form of moral relativism, but to acknowledge the fact that while animals lack moral standing, our attitudes towards them are shaped not just by our sympathy towards them, but also by the ways in which we use them. Thus, our intuition that there is something revolting about eating puppies but not other animals is defensible.”
This is certainly a response that can be given. However, on this view, it’s impossible for one to give an account of why a society that tortured cats for gustatory pleasure would be bad. If it’s only bad in virtue of people thinking it’s bad, then it would be perfectly fine if no one had any sympathy for cats.
Well, that’s all for now, thanks for reading :).
Hsiao is an accomplished scholar and no doubt a decent person. It is his views that I find repugnant, yet repugnant views should obviously not be censored.
THIS Article Gives a Devastating Argument Against Egalitarianism
Read it
Article is linked here.
Huemer starts by saying:
I start from three premises, roughly as follows: (1) that if possible world x is better than world y for every individual who exists in either world, then x is better than y; (2) that if x has a higher average utility, a higher total utility, and no more inequality than y, then x is better than y; (3) that better than is transitive. From these premises, it follows that equality lacks intrinsic value, and that benefits given to the worse-off contribute no more to the world’s value than equal-sized benefits given to the better-off.
He does bear this out.
A) Consider a world of 1 million people each with 101 utility (world 1). Compare it to one with 2 million people each with 50 utility (world 2). World 1 has a higher average utility, a higher total utility, and no more inequality than world 2, so by the second premise it is better. This follows from every plausible moral view: a world twice as populated with less than half as much welfare per person would be worse.
B) Compare world 1 to world 3. World 3 has 1 million people with 102 utility and 1 million people with 1 utility. World 3 is better for every individual who exists in either world: the original million people have higher utility (102 rather than 101), and there are an extra million people with overall positive lives. So by the first premise, world 3 is better.
C) This means that w(3) > w(1) and w(1) > w(2), so by transitivity, w(3) > w(2). Thus, 1 million people with 102 utility and 1 million people with 1 utility is an overall better state of the world than 2 million people with 50 utility, even though the latter world is perfectly equal and the former highly unequal.
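To make the bookkeeping explicit, here’s a minimal sketch in Python (the populations and utility levels are just the ones from the argument above):

```python
# Worlds are lists of (population, utility per person) groups.
world_1 = [(1_000_000, 101)]
world_2 = [(2_000_000, 50)]
world_3 = [(1_000_000, 102), (1_000_000, 1)]

def total_utility(world):
    return sum(n * u for n, u in world)

def average_utility(world):
    return total_utility(world) / sum(n for n, _ in world)

# Premise (2): world 1 has higher total and average utility than world 2,
# and both are internally equal, so world 1 > world 2.
print(total_utility(world_1), total_utility(world_2))      # 101000000 100000000
print(average_utility(world_1), average_utility(world_2))  # 101.0 50.0

# Premise (1): world 3 is better than world 1 for everyone who exists in
# either world (102 > 101, plus a million extra positive lives).
# Premise (3), transitivity, then yields w(3) > w(2).
```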
Read the paper for more information. Any objections?
Utilitarianism Wins Outright Part 19: Cases, Cases, Cases
Cases Where Utilitarianism Allegedly Gets the Wrong Answers (but Doesn't Actually)
I recently asked people to produce cases where utilitarianism gets the wrong results, predicting that all of them would fail. This article will try to show that they do fail. Six cases were given, which I shall address here.
1
“Rich person brutally maims a homeless person to displace anger. Upon reflection feels guilt (causally determined) to alleviate his guilt, does so by donating large sum of money to EA charity offsetting the disutility. Resulting in positive net utility.Util = good to maim homeless.”
I can see how this would be a bit counterintuitive. However, let’s modify the case a little bit. Imagine that the person, when they feel guilty, donates the money to the person they wronged, enabling that person to afford an expensive surgery that saves his life. The man would otherwise have died, so he was made overall better off. In this case, the conjunction of actions seems clearly good.
Well, presumably helping someone even worse off than the man wronged isn’t considerably less important than helping the man wronged. Thus, if we accept
A) Donating money to save the life of the person who was maimed in response to maiming the person would make the pair of actions good.
B) Donating money to save someone else’s life is just as good as donating money to the man wronged.
Then we’d have to accept
C) Donating money to save someone else’s life in response to maiming the person would be good.
This may seem counterintuitive, for two reasons.
1 Many of our moral intuitions relate to viciousness. Even if the actions are right, we might think they’re wrong because they indicate character defects.
2 It’s hard to truly internalize how morally strange our world is, one in which the cost to save a life is a few thousand dollars. If we looked at the case from the perspective of the victim, the verdict would seem more intuitive.
2
“Utilitarianism seems to entail that I'm not permitted to sacrifice my own life to save a friend who will live a life minimally less happy than mine if they live. Whereas, common intuition tells us that it is superogatory, not impermissible.”
This seems to be slightly confused. Utilitarianism tends not to deal in concepts of permissibility or impermissibility. Instead, utilitarianism describes actions as being good or bad, or very good or very bad.
If we accept
A) Completeness: Morality must dictate what should be done in all cases.
We have to accept this; otherwise we’re subject to being money pumped. Suppose morality doesn’t tell you whether you should save a person’s life in either of these cases:
1 The person will have marginally less happiness than you
2 The person will have marginally more happiness than you
Then it would logically follow that
3 A person’s future happiness is irrelevant to your obligation to save them.
However, 3 is implausible, requiring us to accept
4 Your obligation to save the life of a person who will live an extra week = your obligation to save the life of a person who will live an extra 40 years.
These in conjunction mean we should accept
B) A plausible moral theory should tell you when you should sacrifice your life for another.
If we accept B it seems we should accept
C) You should sacrifice your life to save another if your life will be worse than theirs.
You might think it’s strange to call failing to do so immoral. Well, “immoral” generally refers to not caring about others, rather than to not doing what is best. It would be weird to call sacrificing your life to prevent another person from breaking their leg immoral, but it’s obvious that it shouldn’t be done. This was addressed more in this article.
3
“The repugnant conclusion (although that's a pretty obvious one). What about systematically cultivating people with Down syndrome in order to increase utility.”
The Repugnant Conclusion was already addressed here. The Down syndrome case is weird; it’s not clear that this does increase utility. However, if it did, it seems fine. Much like it would be good to genetically modify children so they don’t get Alzheimer’s or depression, the same thing seems to apply here. Utilitarianism seems to be the only account of why it would be good to press a button that would make your child happier, but bad to press a button that would make them less happy.
4
“Many well-known anti-U cases are "wrong action" cases, but there are also "wrong diagnosis" cases, following Ross. Intuitively, what *makes* promise-breaking, lying, racism, tyranny, callousness wrong is not *only* that they cause suffering.”
I agree that causing suffering isn’t what seems to make them wrong. However, this isn’t very relevant, for several reasons.
1 Humans are notoriously terrible at understanding why they do what they do. Thus, I don’t place much stock in why people think they find things wrong. As this article says “Takemoto refers to a 1977 analysis conducted by Richard Nisbett and Timothy DeCamp Wilson, which found that people were unable to identify what had prompted them to behave a certain way, even when it was seemingly obvious. For example, in one study, participants were given a placebo pill, and told that it would reduce physical symptoms associated with receiving an electric shock. After taking the pill, participants took four times as much amperage as people who hadn't taken the pill. But when asked why, only one-quarter of subjects attributed their behavior to the pill, instead saying things like they had built radios when they were younger and so they were used to electric shocks.”
2 Humans often think in terms of rough heuristics. Thus, if things are usually bad, we associate them with being bad, even if they don’t cause harm.
3 One good way of testing this is to identify the necessary and sufficient conditions for the named phenomena being wrong. They only seem wrong in cases where they usually cause suffering. Lying to avoid hurting someone’s feelings doesn’t seem particularly wrong. If tyranny were the only way to prevent the end of the world, tyranny would stop seeming so bad. We think racism is bad, but there are lots of things that would be called racist if they generally brought about bad outcomes. Examples include
Affirmative action
People only wanting to marry members of their own race (just because those are the only people they find attractive)
Reparations
Identitarianism (i.e., people are okay with supporting black businesses because they are owned by black people, in a way that would be objectionable for white businesses).
Nationalism (people are often fine with Israel existing as a Jewish religious state, even though people would generally oppose a Christian religious state).
Pride Months (most people support a black pride month, but would object to a white pride month)
We’d similarly think that the generally prejudiced sentiments of a racist uncle are less objectionable than they’d be if held by the president of the United States—merely based on the harm caused.
The point of these is not to stake out a firm position on whether or not these are objectionable, but rather to argue that whether we find them objectionable depends on their expected outcomes.
4 I think there’s a knockdown argument against all of these allegedly wrong-making properties. Suppose that we had a choice between two worlds:
1 Each person tells 100^100^100^100 lies and lives a life where each second they experience unimaginable good experience—more than all that has ever been experienced so far in the history of the world.
2 No one tells any lies and everyone is terribly miserable all the time.
If we’d prefer world 1, that means we think a small amount of good experience is worth a lie. The reason utilitarianism seems initially counterintuitive is that the cases aren’t iterated. When they are iterated, it becomes obvious that these only appear to be wrong-making properties. The same basic process can be run on any of the other things that appear to make actions wrong.
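To put the trade-off slightly more formally (my notation, not anything in the original): let world 1 contain total happiness $H$ and $N$ lies, each lie carrying intrinsic disvalue $\lambda$, and let world 2 have value $V_2$. Preferring world 1 requires

$$H - \lambda N > V_2, \quad\text{i.e.,}\quad \lambda < \frac{H - V_2}{N}.$$

Since $N$ can be made astronomically large while the preference persists, the supposed intrinsic disvalue of a lie must be vanishingly small.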
5 Much of morality relates to our judgment of other people. However, judging people involves ascertaining their intentions, which explains why things beyond outcomes enter our moral calculus: they are relevant to assessing a person’s virtue, not just the outcomes of their actions.
5
“If you've ever been wronged in profound ways, it can be hard to see what that person did as wrong *because* they could have done something that produced greater overall happiness: to me, that seems so, too, impersonal.”
The responses to the previous argument all apply. Whether something benefits others doesn’t affect whether we’ve been wronged—it just affects whether the action was all things considered wrong.
6
“Watching a sad movie. Like one where you're just bawling throughout.”
That wouldn’t be judged as bad by utilitarianism—utilitarianism just seeks to avoid bad experiences and promote good ones. If one enjoys the experience of watching a sad movie, then it isn’t bad. Utilitarianism looks at whether experiences are good—not at whether they would be represented by a smiley face or a frowning face.
Conclusion
Counterexamples to utilitarianism always seem to be cases where our intuitions are wrong and utilitarianism is right. This article sought to demonstrate that broad thesis. I tested it by soliciting examples from other people on Twitter, and all of them substantiated the broad pattern.
Bailey and Rasmussen's Infinite Error
Why Bailey and Rasmussen do not have infinite value--though they do have lots of value
Bailey and Rasmussen wrote a paper arguing that humans have infinite value. Correction: their paper argued that if humans had equal value, they would have infinite value. I disagree with the paper and thought it would be worth explaining why. It is also a hilarious paper—well written, witty, and all around fantastic—making it a treat to respond to.
To begin, they say
Human Value: Extreme and Equal Human persons are those things to which we ordinarily refer with personal pronouns. Among human persons are those things we sometimes call ‘non-cognitivists’, ‘teenagers’, ‘politicians’, ‘foreigners’, ‘heterosexuals’, and more. You are a human person. The authors of this article are people too. So how much value does a human person have? At least some. We take it to be self-evident that you are valuable. You matter. It is good that you exist. Your value seems to surpass the value of many other things, including pugs, sunsets, twitterpation, “Black Star”, valid arguments, delicious dinners, The Keeper of the Plains, and more. It is good that those items exist; the world is better for their being. Yet, there are important senses in which your value surpasses the value of beasts, landscapes, pleasant feelings, classic Yngwie J. Malmsteen tunes, proofs, meals, or majestic steel sculptures. Your value is more extreme.4 You are no exception or outlier. The authors also enjoy this remarkable status. We all do. Value is our common human birthright. We can be more specific about the value we have in common. Though we may differ with respect to instrumental value, we do not differ with respect to what we call “final” value. In this sense we are all equals. You may well be more effective than someone else when it comes to producing lucrative dance beats, for example, and thus more valuable in that instrumental sense. Or you might have powers of persuasion that make you a potent ally in campus politics. But when it comes to final value, you don’t have more of that than anyone else. You are a properly valued for your own sake to exactly the same degree as is anyone else. It is proper that someone treat you as an end in yourself, and the same goes for everyone else in exactly the same sense and to exactly the same degree. In matters of final value, we are all the same.5 So, we each enjoy final value that is extreme and equal. Were final value a kind of wealth, the tribe of humanity would be supreme in its affluence, while also blessed with a Gini coefficient of 0.
While the idea of humans being equally and extremely valuable sounds appealing, it is clearly false. A few examples can illustrate this.
Suppose one is having a child. They can either drink a potion which will lead to their child having very severe depression, or not drink it. If all humans have equal value, then the value of bringing about a person with lots of misery in their life equals the value of bringing about one with much less misery. Thus, on this account, drinking the potion would be morally neutral. This is clearly false.
Suppose one could save either a serial killer, who will die tomorrow even if saved, or a healthy 30-year-old, who will die today unless saved. They should obviously save the 30-year-old. Thus, the value of saving a life depends on facts about the life—it is not intrinsic and equal.
Most would agree that, all else equal, it would be better to save a 30-year-old than an 80-year-old, because the 30-year-old has more years of happy life left. Thus, lives don’t all have equal value.
If you could either bring into existence a person who would experience pure agony for their entire life or one who would experience pure bliss for their entire life, you should obviously bring into existence the one who would experience pure bliss.
These show that human lives are not all equal in value. If two things are equal in value
A) Preserving them is equally important.
B) Creating them is equally important.
However, we have seen that this is not the case for humans. Thus, humans are not all of equal value. In most practical regards it makes sense to treat humans as equally valuable, but humans aren’t really equally valuable.
The authors go on to note that this is puzzling: in other domains, like art, very different things have very different value. I agree with this.
Next they say
The first argument uses a Bayesian confirmation framework. Let Equal Human Value be this datum: we all enjoy equal and extreme final value. Finite Value Hypothesis is the theory that we are each equally and extremely finally valuable to some finite degree. Infinite Value Hypothesis is the theory that we are each infinitely finally valuable. 6 Consider now two conditional epistemic probabilities. These report, roughly, the degree to which one should expect a particular item of evidence, given a hypothesis: P(Equal Human Value/Infinite Value Hypothesis) P(Equal Human Value/Finite Value Hypothesis) These probabilities are not the same. And indeed, the first is significantly higher than the second. We shall argue for this claim from both directions. Given Infinite Value Hypothesis, we may expect that people have equal value. It would not be surprising, given Infinite Value Hypothesis, that we are all deeply and equally value. It seems to be precisely what one would expect, in fact. Infinite Value Hypothesis, furthermore, offers a simple and compelling explanation of Equal Human Value. Infinity is a big number, so to speak. So no wonder we are extremely valuable. Given Finite Value Hypothesis, by contrast, Equal Human Value is unexpected. It would be quite surprising, given Finite Value Hypothesis, that we are all deeply and equally value. For equal value would require that there be some particular finite degree of value – 61.2 axins, for example – such that we are each valuable to exactly that degree. Given our variation along other dimensions, this particular equality would be a surprising result indeed. It is possible, yes – just as it is possible that all the paintings in our parable have the same degree of value. But equal finite value across wide diversity is surprising and unexpected. On Finite Value Hypothesis, there is no simple and compelling explanation of our extreme and equal value. So we judge the probability of Equal Human Value on that hypothesis to be rather low. One might, of course, supplement Finite Value Hypothesis with various auxiliary hypotheses. One could add that selective evolutionary pressures or mysterious and powerful aliens only allow the birth and continued existence of human beings that have exactly 61.2 axins of value. And the probability of Equal Human Value on this more complicated view might well be high, even 1. But this maneuver comes at familiar theoretical price. Adding extra conjuncts to a hypothesis can always expand the evidence that hypothesis can accommodate; but it drives down the prior probability (or, simplicity, if you prefer) of that hypothesis. Proponents of Infinite Value Hypothesis need not add such auxiliary hypotheses to their theory to explain or accommodate the evidence that is Equal Human Value. We have all the ingredients we need, then, for a novel argument. It unfolds as follows: C1. Equal Human Value C2. P(Equal Human Value/Infinite Value Hypothesis) [=1] >> P(Equal Human Value/Finite Value Hypothesis) [=low] C3. If so, then Equal Human Value is strong evidence for Infinite Value Hypothesis 7 C4. Therefore, Equal Human Value is strong evidence for Infinite Value Hypothesis (from C1-C3)
I agree that having equal value would be much more likely if humans had infinite value. However, I don’t think that humans have infinite value, so this doesn’t matter. Furthermore, there are devastating arguments against humans having infinite value.
Presumably headaches don’t have infinite disvalue. However, if human life has infinite value, then shortening a human life would have infinite disvalue, much like removing an infinitely valuable art piece from an art collection a second earlier would have infinite disvalue. If this is true, then shortening a human life by one second would be worse than thousands of non-lethal headaches. Additionally, this would mean that a quadrillion animals being tortured to death would be less bad than one human having their life cut short.
If a human life has infinite value then bringing one extra human into existence would be worth all sacrifices that don’t eliminate human life. This is very implausible.
Additionally, if human life had infinite value, then given that two times infinity equals infinity, saving one human would be just as good as saving two humans. This isn’t plausible.
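A toy illustration: Python’s floating-point infinity obeys the same extended-real arithmetic, so we can see the problem directly.

```python
# Toy illustration: under extended-real arithmetic, doubling an infinite
# value leaves it unchanged, so saving two "infinitely valuable" lives
# would be no better than saving one.
INF = float("inf")

value_of_one_life = INF
value_of_two_lives = 2 * INF

print(value_of_two_lives == value_of_one_life)  # True
print(value_of_two_lives > value_of_one_life)   # False
```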
When responding to the earlier objections to the equality of human lives, Rasmussen and Bailey could argue that humans have equal intrinsic value but different instrumental value: while all humans have the same intrinsic value, if one will have a better life, it’s more important to bring them into existence. However, if they hold that humans have infinite intrinsic value, they can’t say this, because any finite instrumental value would be infinitely dwarfed by the intrinsic value of humans.
This also runs afoul of their second argument, which says
Our second argument unfolds as follows:
I1. We are all equally overall valuable
I2. We differ with respect to instrumental value
I3. If so, then either we differ with respect to final value, or we are all overall infinitely valuable
I4. Therefore, either we differ with respect to final value, or we are all overall infinitely valuable (from I1-I3)
I5. We do not differ with respect to final value
I6. Therefore, we are all overall infinitely valuable (from I4-I5)
This leaves them susceptible to the earlier objections to infinite value. On this account, drinking chemicals prior to pregnancy that will lead to a future baby having a worse life would be morally neutral; after all, both possible infants would have infinite value.
They address one objection that I think doesn’t work, so I won’t bother quoting their response to it. Next, they address my basic objection (albeit a very abridged version).
Another objection: the arguments presuppose that human beings are all equally and extremely valuable. This premise, however, is not a datum; rather, it is false. Here are three reasons. First, there is little reason to suppose we are all equally finally valuable. This equal-value idea is a fiction; it is perhaps useful for modern liberal democracies – but hardly a sober truth. Human beings differ in all sorts of ways, including their final value. Second, it is unclear what final value could even come to here. It is probably best interpreted as (to borrow Peter Singer’s words) “obscurantist mumbo-jumbo.” Third, there is no reason to elevate human beings above the rest of nature as done in the arguments at hand. We are not angelic beings distinct from natural reality; we are parts of it, and subject to the same finitudes as the other animals (Singer could probably be recruited in defense of this final reason as well).
We reply: we have no argument for the premise in question and we will not attempt to refute these three promising objections. But we do have two comments. First, the equal and extreme value of human beings can do much to explain a wide range of ethical phenomena. That we are extremely valuable nicely accounts for the deep wrongness of murder, for example. That we are equally valuable nicely accounts for the wrongness of inequitable treatment. For this reason, we think the equal and extreme value of human beings is a hypothesis worth testing and extending, as our article does. Second, our arguments are of some value even for those who reject their starting premise. For our arguments uncover this interesting conditional: if we are all equally and extremely valuable, then there is very good reason to think we are infinitely valuable. Some may take this as a reductio of the antecedent or as a reason to accept the consequence; such is philosophy. But either way, our arguments shed new light.
I don’t think this explains things very well. Equitable treatment doesn’t follow from equal value, and murder’s badness can be explained on a utilitarian account. A theory that can explain two instances of bad things is not thereby very satisfactory: nearly any theory can explain a handful of bad things, so this is a fairly shoddy criterion. I do think their second point is fruitful, and I am in agreement: because humans cannot be of infinite value, we are not all equally and extremely valuable.
As for the argument attributed to Singer, one could adopt a much more modest principle to resist elevating humans above animals. To hold that humans, unlike animals, are infinitely valuable, one would have to hold that torturing 1000000000000000000000000000000000000000000000000 animals in order to produce one extra human would be good overall, producing infinite value at a merely finite cost.
Finally, if human lives have infinite value, then we should ban nearly all risk-taking. After all, we wouldn’t allow people to risk ruining an infinitely valuable painting. Heaven is often supposed to have infinite value—but we wouldn’t risk heaven being destroyed for trivial benefits. Thus, if humans have infinite value, we should never take risks that could jeopardize it. We should never cross the street or drive merely to have fun with friends; this would be risking infinite value for finite value.
Utilitarianism Wins Outright Part 20: Why Your Prior Should Be Higher
Utilitarianism is the default
One consideration favoring utilitarianism that I’ve overlooked so far is that it’s a very intuitive principle. Generally, when people first hear about utilitarianism, it just seems obviously true. It’s only when they are presented with objections that they abandon it in favor of some other theory. However, the starting intuitiveness of utilitarianism counts in favor of believing it. Generally, if something seems true, that gives us prima facie reason to believe that it is true. In the absence of a defeater, we’re reasonable to trust the reliability of our senses, our cognitive processes, and our other faculties. The notion that one should try to make the world as good as possible just seems obvious at first blush.
This means that utilitarianism should be the default. If we cannot give a decisive objection to utilitarianism, we should continue believing it. As I have argued previously, the common objections to utilitarianism actually are evidence for utilitarianism. While it produces results that are unintuitive at first, they turn out to be correct. Much like a scientific theory making correct predictions is confirmed, a moral theory making correct moral predictions is similarly confirmed.
So we should start with a high prior in utilitarianism. We then update on all of the cases and they confirm utilitarianism. Thus, we should be overall very confident in utilitarianism.
This initial intuitiveness is often overlooked—given how much utilitarians reject common sense intuitions, they often ignore that utilitarianism is itself very intuitive. This is a mistake. A theory being very intuitive counts in favor of it.
This fact about utilitarianism seeming intuitive at first is far from decisive. But it is worth considering when analyzing utilitarianism.
Utilitarianism Wins Outright Part 21: A Piece of Moral Data That Other Theories Can't Account For
I think this is a pretty good argument
I got the argument from here. The video was very long so I thought I’d summarize it succinctly.
Other moral views have trouble accommodating the moral intuition that if X action makes everyone better off, one should take that action. Utilitarianism provides a neat account of why this is true, while other theories have to either deny this or stipulate other things.
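In slightly more formal terms (my rendering, not from the video): utilitarianism directly entails the strong Pareto principle. If option $x$ leaves every individual at least as well off as option $y$, and at least one individual strictly better off, then

$$\sum_i u_i(x) > \sum_i u_i(y),$$

so utilitarianism ranks $x$ above $y$. Theories that weigh things other than individual welfare have no such guarantee.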
Utilitarianism Wins Outright Part 22: Another Tricky Issue for Other Accounts
They don't work
Imagine you were deciding whether or not to take an action. This action would cause a person to endure immense suffering—far more suffering than would occur as the result of a random assault. This person literally cannot consent. This action probably would bring about more happiness than suffering, but it forces upon them immense suffering to which they don’t consent. In fact, you know that there’s a high chance that this action will result in a rights violation, if not many rights violations.
If you do not take the action, there is no chance that you will violate the person’s rights; indeed, absent this action, their rights can’t be violated at all. Moreover, you know that the action has a 100% chance of eventually causing them to die. There’s also a reasonable chance (statistics differ on this) that it will cause them to be sexually assaulted.
Should you take the action? On most moral systems, the answer would seem to be obviously no. After all, you condemn someone to certain death, cause them immense suffering, and they don’t even consent. How is that justified?
Well, the action I was talking about was giving birth. After all, those who are born are certain to die at some point. They’re likely to have immense suffering (though probably more happiness). The suffering that you inflict upon someone by giving birth to them is far greater than the suffering that you inflict upon someone if you brutally beat them.
So utilitarianism seems to naturally—unlike other theories—provide an account of why giving birth is not morally abhorrent. This is another fact that supports it.
Maybe a person thinks that most children will have more total suffering than pleasure in their lives. If that’s true, then utilitarianism would condemn giving birth, but in that case the verdict doesn’t seem unintuitive. Other theories have the harmful consequence of condemning giving birth even when the people born will live good lives overall.
What this example shows is that utilitarianism does well in morally weird cases. This is an advantage for a theory. If a scientific theory got incorrect results about the gravity on Mars, it would be rightly abandoned. So the fact that utilitarianism gets correct answers across a wide range of strange cases is just one more evidential chip in its favor.
Theron Pummer's Lopsided Lives Is One Of The Best Arguments For Hedonism
It's Very Persuasive
Hedonic utilitarians have done quite a good job exposing the problems with deontology and its many paradoxes. Yet they have generally done a worse job defending hedonism about value. Many of the articles one finds when searching for arguments for hedonism about value are relatively lackluster. I was recently quite happy to find an exception to this general rule in Pummer’s paper Lopsided Lives.
The basic argument is relatively simple: hedonism is the best at accounting for how well off people are who have lopsided lives. Those lives are lopsided in the sense that they have lots of non-hedonic value, but very little hedonic value.
Pummer explains “Contemplating a range of such hedonically lopsided lives might lead us to claim that any plausible theory of well-being must accommodate one or both of the following constraints. No Pleasure, No Well-Being (NPNW): Any life that contains no pleasure cannot have a positive overall well-being score, no matter how much nonhedonic goodness it contains. There is disagreement about the plausibility of NPNW.7 Whether or not NPNW is plausible, the following constraint seems more plausible: Enough Pain, Limited Well-Being (EPLW): Any life that contains no pleasure and at least finite amount of pain P cannot have an overall well-being score that exceeds finite limit L, no matter how much nonhedonic goodness it contains. Agreeing that lives with certain hedonic features cannot have overall well-being scores that exceed finite limit L is not by itself committing to what limit L is, only that there is some such L. Hedonism clearly entails both NPNW and EPLW. To the extent that these constraints are plausible and cannot be accommodated by rival views of well-being, this is to hedonism’s advantage. Indeed, EPLW does seem plausible, and arguably pluralism is plausible only if it can accommodate it. But it is not necessarily the case that pluralism is plausible if it can accommodate EPLW, as it might turn out that pluralism can accommodate EPLW only if it entails claims which are themselves implausible. I will next explore whether pluralism can accommodate EPLW without incurring such further implausibility.”
Pummer goes on to present a series of clever arguments explaining why pluralist views about well-being can’t accommodate these intuitions while respecting certain minimal side constraints on theories, such as avoiding hypersensitivity, whereby very small changes in hedonic value result in infinite differences in overall well-being. Additionally, he argues that those who bite the bullet run into a further constraint:
“Enough Pain at Each Time, Limited Well-Being (EPTLW): Any life that contains no pleasure and at least finite amount of pain P at each time cannot have an overall well-being score that exceeds finite limit L, no matter how much nonhedonic goodness it contains.”
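Stated a bit more formally (my own rendering, not Pummer’s notation), EPLW says that hedonic facts place a finite cap on well-being:

$$\exists L \;\; \forall \ell : \big(\mathrm{pleasure}(\ell) = 0 \,\wedge\, \mathrm{pain}(\ell) \ge P\big) \Rightarrow W(\ell) \le L,$$

where $W(\ell)$ is the overall well-being score of life $\ell$. Hedonism entails this trivially: if $W(\ell) = \mathrm{pleasure}(\ell) - \mathrm{pain}(\ell)$, then any such life has $W(\ell) \le -P$.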
This is near impossible to deny. Those who are interested in defending hedonism should read this paper, as it’s remarkably persuasive. It is probably the most compelling defense of hedonism I’ve come across.
Scott Alexander's Needless Doubts About Utilitarianism
Don't Worry Scott--Utilitarianism is the way!!!!!
Scott starts his article with a cry for help
God help me, I’m starting to have doubts about utilitarianism.
Don’t worry Scott!! Help is on the way! Bentham’s Bulldog is on the case—like any good Christian apologist, I aim to make sure no one leaves the church of utilitarianism.
Whose Superstructure?
The first doubt is something like this. Utilitarianism requires a complicated superstructure – a set of meta-rules about how to determine utilitarian rules. You need to figure out which of people’s many conflicting types of desires are their true “preferences”, make some rules on how we’re going to aggregate utilities, come up with tricks to avoid the Repugnant Conclusion and Pascal’s Mugging, et cetera.
I have never been too bothered by this in a practical sense. I agree there’s probably no perfect Platonic way to derive this superstructure from first principles, but we can come up with hacks for it that come up with good results. That is, given enough mathematical ingenuity, I could probably come up with a utilitarian superstructure that exactly satisfied my moral intuitions.
And if that’s what I want, great. But part of the promise of utilitarianism was that it was going to give me something more objective than just my moral intuitions. Don’t get me wrong; formalizing and consistency-ifying my moral intuitions would still be pretty cool. But that seems like a much less ambitious project. It is also a very personal project; other people’s moral intuitions may differ and this offers no means of judging the dispute.
There is a solution here—hedonic utilitarianism. There is a fact of the matter about what makes people happy. I agree that preferences are confusing, hard to define, and run into lots of other problems (see here and here). While happiness may be hard to quantify to some degree, there is a fact of the matter about what maximizes it. Happiness refers to mental states that would be taken as desirable by one who was fully rational. For example, we can know that setting me on fire would make me less happy than giving me a cake. A good theory can tolerate uncertainty at the margins.
Whose Preferences?
Suppose you go into cryosleep and wake up in the far future. The humans of this future spend all their time wireheading. And because for a while they felt sort of unsatisfied with wireheading, they took a break from their drug-induced stupors to genetically engineer all desires beyond wireheading out of themselves. They have neither the inclination nor even the ability to appreciate art, science, poetry, nature, love, etc. In fact, they have a second-order desire in favor of continuing to wirehead rather than having to deal with all of those things.
You happen to be a brilliant scientist, much smarter than all the drugged-up zombies around you. You can use your genius for one of two ends. First, you can build a better wireheading machine that increases the current run through people’s pleasure centers. Or you can come up with a form of reverse genetic engineering that makes people stop their wireheading and appreciate art, science, poetry, nature, love, etc again.
Utilitarianism says very strongly that the correct answer is the first one. My moral intuitions say very strongly that the correct answer is the second one. Once again, I notice that I don’t really care what utilitarianism says when it goes against my moral intuitions.
In fact, the entire power of utilitarianism seems to be that I like other people being happy and getting what they want. This allows me to pretend that my moral system is “do what makes other people happy and gives them what they want” even though it is actually “do what I like”. As soon as we come up with a situation where I no longer like other people getting what they want, utilitarianism no longer seems very attractive.
Hedonistic utilitarianism seems to be the solution here. If wireheading doesn’t make them happy, it’s not good.
If they really were happy then it would seem bad to force them to enjoy art and such, for the same reason that forcing kids to listen to music from the 4th century rather than music that they truly enjoy would make them worse off.
Whose Consequentialism?
It seems to boil down to something like this: I am only willing to accept utilitarianism when it matches my moral intuitions, or when I can hack it to conform to my moral intuitions. It usually does a good job of this, but sometimes it doesn’t, in which case I go with my moral intuitions over utilitarianism. This both means utilitarianism can’t ground my moral intuitions, and it means that if I’m honest I might as well just admit I’m following my own moral intuitions. Since I’m not claiming my moral intuitions are intuitions about anything, I am basically just following my own desires. What looked like it was a universal consequentialism is basically just my consequentialism with the agreement of the rest of the universe assumed.
Another way to put this is to say I am following a consequentialist maxim of “Maximize the world’s resemblance to W”, where W is the particular state of the world I think is best and most desirable.
This formulation makes “follow your own desires” actually not quite as bad as it sounds. Because I have a desire for reflective equilibrium, I can at least be smart about it. Instead of doing what I first-level-want, like spending money on a shiny new car for myself, I can say “What I seem to really want is other people being happy” and then go investigate efficient charity. This means I’m not quite emotivist and I can still (for example) be wrong about what I want or engage in moral argumentation.
And it manages to (very technically) escape the charge of moral relativism too. I think of a relativist as saying “Well, I like a world of freedom and prosperity for all, but Hitler likes a world of genocide and hatred, and that’s okay too, so he can do that in Germany and I’ll do my thing over here.” But in fact if I’m trying to maximize the world’s resemblance to my desired world-state, I can say “Yeah, that’s a world without Hitler” and declare myself better than him, and try to fight him.
But what it’s obviously missing is objectivity. From an outside observer’s perspective, Hitler and I are following the same maxim and there’s no way she can pronounce one of us better than the other without having some desires herself. This is obviously a really undesirable feature in a moral system.
Hedonistic utilitarianism gives a great account. Hitler’s dream world is much worse than yours.
Whose Objectivity?
I’ve started reading proofs of an objective binding morality about the same way I read diagrams of perpetual motion machines: not with an attitude of “I wonder if this will work or not” but with one of “it will be a fun intellectual exercise to spot the mistake here”. So far I have yet to fail. But if there’s no objective binding morality, then the sort of intuitionism above is a good description of what moral actors are doing.
Can we cover it with any kind of veneer of objectivity more compelling than this? I think the answer is going to be “no”, but let’s at least try.
I think Huemer and Parfit make a compelling case for moral realism, but one can deny it and be a utilitarian. If utilitarianism always matches our reflective intuitions, that gives us good reason to be utilitarian.
I’m with Scott in thinking that one can’t prove objective morality, any more than one can prove one’s starting axioms. But there are good reasons to accept objective morality. Here is the argument:
P1 If we have desire-independent reasons to care about things, then objective morality exists
P2 We do have desire-independent reasons to care about things
Therefore, objective morality exists
P1 is true because morality is about what we have desire-independent, impartial reason to do; so if there are desire-independent reasons, there must be objective morality.
P2 can’t be proved, but it’s hard to deny. One who denies it would have to say that a person is rational if they desire to eat a car, set themselves on fire, refuse to enter a doctor’s office at age four because they don’t like shots (despite the immense long-term harm of skipping the visit), cut themselves, not eat enough food because of an eating disorder, suffer on a future Tuesday for no good reason, or endure immense agony despite it giving them no joy. This is implausible.
One idea is a post hoc consequentialism. Instead of taking everyone’s desires about everything, adding them up, and turning that into a belief about the state of the world, we take everyone’s desires about states of the world, then add all of those up. If you want the pie and I want the pie, we both get half of the pie, and we don’t feel a need to create an arbitrary number of people and give them each a tiny slice of the pie for complicated mathematical reasons.
This would “solve” the Repugnant Conclusion and Pascal’s Mugging, and at least change the nature of the problems around “preference” and “aggregation”. But it wouldn’t get rid of the main problem.
I’m not sure I understand this proposal, but the Repugnant Conclusion is not something that needs to be solved.
The solution to Pascal’s mugging is easy: accepting the mugger’s offer won’t maximize expected utility, because the odds that they’re telling the truth are so low as to be statistical noise. However, we can’t simply discount small risks of very bad things if we accept the plausible principle that for any amount of utility, there is some probability of some greater amount of utility that would be overall better; e.g., a 100% chance of 50 units of utility can be surpassed by a less-than-100% chance of some amount of utility far greater than 50. If we accept this, that’s sufficient to get us to care a lot about infinite and near-infinite impacts.
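Here’s a minimal sketch of both points (the probabilities and payoffs are illustrative numbers of my own, not anything from Scott’s post):

```python
def expected_utility(prob: float, payoff: float) -> float:
    """Expected utility of a gamble with a single possible payoff."""
    return prob * payoff

# The mugger's offer: an astronomical payoff, but the probability that
# the claim is true is so low it's statistical noise.
mugging = expected_utility(prob=1e-40, payoff=1e20)    # 1e-20
sure_thing = expected_utility(prob=1.0, payoff=50.0)   # 50.0
print(mugging < sure_thing)  # True: refusing the mugger maximizes EU

# The principle in the text: any sure amount of utility can be beaten
# by a big enough payoff at probability below 100%.
risky = expected_utility(prob=0.99, payoff=1_000.0)    # 990.0
print(risky > sure_thing)    # True
```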
The other idea is a sort of morals as Platonic politics. Hobbes has this thing where we start in a state of nature, and then everybody signs a social contract to create a State because everyone benefits from the State’s existence. But because coordination is hard, the State is likely to be something simple like a monarchy or democracy, and the State might not necessarily do what any of the signatories to the contract want. And also no one actually signs the contract, they just sort of pretend that they did.
Suppose that Alice and Bob both have exactly the same moral intuitions/desires, except that they both want a certain pie. Every time the pie appears, they fight over it. If the fights are sufficiently bloody, and their preference for personal safety outweighs their preference for pie, it probably wouldn’t take too long for them to sign a contract agreeing to split the pie 50-50 (if one of them was a better fighter, the split might be different, but in the abstract let’s say 50-50).
Now suppose Alice is very pro-choice and slightly anti-religion, and Bob is slightly pro-life and very pro-religion. With rudimentary intuitionist morality, Alice goes around building abortion clinics and Bob burns them down, and Bob goes around building churches and Alice burns them down. If they can both trust each other, it probably won’t take long before they sign a contract where Alice agrees not to burn down any churches if Bob agrees not to burn down any abortion clinics.
Now abstract this to a civilization of a billion people, who happen to be divided into two equal (and well-mixed) groups, Alicians and Bobbites. These groups have no leadership, and no coordination, and they’re not made up of lawyers who can create ironclad contracts without any loopholes at all. If they had to actually come up with a contract (in this case maybe more of a treaty) they would fail miserably. But if they all had this internal drive that they should imagine the contract that would be signed among them if they could coordinate perfectly and come up with a perfect loophole-free contract, and then follow that, they would do pretty well.
Because most people’s intuitive morality is basically utilitarian [citation needed], most of these Platonic contracts will contain a term for people being equal even if everyone does not have an equal position in the contract. That is, even if 60% of the Alicians have guns but only 40% of the Bobbites do, if enough members of both sides believe that respecting people’s preferences is important, the contract won’t give the Alicians more concessions on that basis alone (that is, we’re imagining the contract real hypothetical people would sign, not the contract hypothetical hypothetical people from Economicsland who are utterly selfish would sign)
Contractarian ethics that merely ratifies consensus intuitions isn’t very good. Throughout most of history, many people have supported horrific things like slavery, the subjugation of women, and the oppression of gay people. Utilitarianism has a better track record and is supported by lots of good arguments that I’ve previously discussed.
So what about the wireheading example from before?
Jennifer RM has been studying ecclesiology lately, which seems like an odd thing for an agnostic to study. I took a brief look at it just to see how crazy she was, and one of the things that stuck with me was the concept of communion. It seems (and I know no ecclesiology, so correct me if I’m wrong) motivated by a desire to balance a desire to unite as many people as possible under a certain banner, with the conflicting desire to have everyone united under the banner believe mostly the same things and not be at one another’s throats. So you say “This range of beliefs is acceptable and still in communion with us, but if you go outside that range, you’re out of our church.”
Moral contractualism offers a similar solution. The Alicians and Bobbites would sign a contract because the advantages of coordination are greater than the disadvantages of conflict. But there are certain cases in which you would sign a much weaker contract, maybe one to just not kill each other. And there are other cases still when you would just never sign a contract. My Platonic contract with the wireheaders is “no contract”. Given the difference in our moral beliefs, whatever advantages I can gain by cooperating with them about morality are outweighed by the fact that I want to destroy their entire society and rebuild it in my own image.
I think it’s possible that all of humanity except psychopaths are in some form of weak moral communion with each other, at least of the “I won’t kill you if you don’t kill me” variety. I think certain other groups, maybe along the culture level (where culture = “the West”, “the Middle East”, “Christendom”) may be in some stronger form of moral communion with each other.
(note that “not in moral communion with” does not mean “have no obligations toward”. It may be that my moral communion with other Westerners contains an injunction not to oppress non-Westerners. It’s just that when adjusting my personal intuitive morality toward a morality I intend to actually practice, I only acausally adjust to those people whom I agree with enough already that the gain of having them acausally adjust toward me is greater than the cost of having me acausally adjust to them.)
In this system, an outside observer might be able to make a few more observations about the me-Hitler dispute. She might notice Hitler or his followers were in violation of Platonic contracts it would have been in their own interests to sign. Or she might notice that the moral communions of humanity split neatly into two groups: Nazis and everybody else.
I’m pretty sure that I am rehashing territory covered by other people; contractualism seems to be a thing, and a lot of people I’ve talked to have tried to ground morality in timeless something-or-other.
But the best form of contractualism converges with utilitarianism, as Harsanyi showed. Utilitarianism treats everyone as equally relevant, so it is the natural contractarian approach.
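Roughly (stated from memory, so treat the details with care): Harsanyi’s aggregation theorem says that if each individual’s preferences and the social preference satisfy the expected-utility axioms, and the social preference respects the Pareto condition, then social welfare must take the form

$$W = \sum_i w_i u_i,$$

a weighted sum of individual utilities; an impartiality condition then forces equal weights, which is just utilitarianism.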
Most of these issues seem easily solved by hedonic utilitarianism. Why does Scott reject it? He lays out his reasons here.
It suggests that drugging people on opium against their will and having them spend the rest of their lives forcibly blissed out in a tiny room would be a great thing to do, and that in fact not doing this is immoral. After all, it maximizes pleasure very effectively.
By extension, any society that truly believed in Benthamism would end out developing a superdrug, and spending all of their time high while robots did the essential maintenance work of feeding, hydrating, and drugging the populace. This seems like an ignoble end for human society. And even if on further reflection I would find it pleasant, it seems wrong to inflict it on everyone else without their consent.
Even if we accept this is somewhat counterintuitive, preference utilitarianism has way more unintuitive results, including that we should all be brutally tortured to death if lots of dead aliens wanted other civilizations to die from slow torture. I already linked to my objections.
But the superdrug case seems obvious. We frequently force children, for example, to do things in their best interest. If a person were addled, we’d give them cancer treatment because they’re making very irrational decisions. And yet this stipulated superdrug is so good that failing to consume it would be far more irrational than foregoing cancer treatment. Not taking the drug would be like an eating disorder, but far more pernicious. This just seems obvious.
So Scott, embrace hedonism. We have debauchery, licentiousness, and iniquity!
Utilitarianism Wins Outright Part 23: Another Other Tricky Issue for Other Accounts
Utilitarianism does best in weird cases
Suppose one is deciding between two actions. Action 1 would have a 50% chance of increasing someone’s suffering by 10 units, and action 2 would have a 100% chance of increasing their suffering by 4 units. It seems clear that one should take action 2: the person is better off in expectation (4 rather than 5 expected units of suffering).
However, non-utilitarian theories have trouble accounting for this. If violating rights has a wrongness over and above the harm caused (say, a badness equivalent to 8 units of suffering), then action 1 would come out better: a ½ chance of 18 units of badness (expected badness 9) is less bad than a certainty of 12.
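To make the arithmetic explicit, here’s a minimal sketch (the 8-unit penalty is the stipulation above; everything else follows from it):

```python
RIGHTS_PENALTY = 8.0  # stipulated badness of a rights violation

def utilitarian_badness(prob, suffering):
    # Utilitarian view: badness is just expected suffering.
    return prob * suffering

def rights_badness(prob, suffering):
    # Rights view: a flat penalty is added whenever the violation occurs.
    return prob * (suffering + RIGHTS_PENALTY)

# Action 1: 50% chance of 10 units; action 2: certain 4 units.
print(utilitarian_badness(0.5, 10), utilitarian_badness(1.0, 4))  # 5.0 4.0
print(rights_badness(0.5, 10), rights_badness(1.0, 4))            # 9.0 12.0
# Utilitarianism prefers action 2 (4 < 5); the rights view prefers
# action 1 (9 < 12), contrary to the intuitive verdict.
```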
The non-utilitarian may object that the badness of a rights violation depends on how much harm is done, saying that the first action is a more serious violation. Suppose the formula they give is that the badness of a rights violation equals twice the amount of suffering it causes.
This leaves them open to a few issues. First, it rules out harms that don’t cause any suffering, so this account can’t hold that harmless rights violations are bad. Second, it doesn’t sit well with the idea of rights: rights violations are supposed to add badness to an act independent of the suffering caused.
Maybe the deontologist can work out a complex arithmetic to avoid this issue. However, this is an issue that is easy to solve for utilitarians, yet which requires complexity for deontologists and others who champion rights.
Utilitarianism Wins Outright Part 24: Another Other Tricky Issue for Other Accounts
Showing that I can write an actually infinite (not just potentially infinite) number of pro utilitarian articles
Suppose you stumble across a person who has just been wounded. They need to be rushed to the hospital if they are to survive; if they are, they will very likely survive. The person is currently unconscious and has not consented to being rushed to the hospital. Thus, on non-utilitarian accounts, it’s difficult to explain why it’s morally permissible to rush them to the hospital: they did not consent, and the rationale is purely about making them better off.
One might object that the relevant question is whether they would consent if they were conscious. However, this seems to misidentify the correct action in this scenario. Consider another situation in which the person injured is a child. It seems obvious in this case that the person should be rushed to the hospital. This is true even if the child wouldn’t consent if conscious.
One might object that the relevant question is whether they’d consent rationally. Yet as Parfit argues in On What Matters, what we could rationally consent to is the state of affairs that makes things go best. This is intuitive in this case. Whether the child could rationally consent to the action would seem to depend on whether it would be good for the child.
It’s not inconceivable that a non-utilitarian could work out a justification in this case. Yet this case is a clear example of one in which utilitarianism easily gets the correct result, while other theories need additional ad hoc stipulations.
I'll Be Doing A Written Debate About Utilitarianism
Should be exciting
As the title suggests, I’ll be doing a written debate about utilitarianism with Ungit. We’ll each publish our responses on our blogs each week. There will be a total of four 5,000-word posts.
Can I defend utilitarianism in the battle of ideas, the war of words, the battlefield of normative ethics? Only time will tell. Given that we have yet to get a critic to review my many-part series arguing for utilitarianism, it’ll be a useful challenge to my ideas. Thanks to Ungit for doing this debate!
My Opening Statement Arguing For Utilitarianism
Part 1
As I’ve written elsewhere, when it comes to normative ethics, utilitarianism wins outright. There are numerous considerations that decisively favor utilitarianism, making it clearly the best view of normative ethics.
This article will present several plausible ways of deriving utilitarianism from plausible axioms, as well as virtues of the theory. However, in my view the main reason to support utilitarianism is abductive—it makes great sense of morality. All of the supposed counterexamples to utilitarianism end up supporting it, showing that it gets the right result even when it seems wrong at first. Yet I don’t have the time to prove that in this article, given the sheer diversity of cases. I shall, however, show in my rebuttals that my interlocutor’s counterexamples to utilitarianism support it (assuming there are any).
1 Theoretical Virtues
When deciding upon a theory we want something with great explanatory power, scope, simplicity, and clarity. Utilitarianism does excellently by these criteria. It’s incredibly simple, requiring just a single moral law saying one should maximize the positive mental states of conscious creatures; it explains all of ethics, applies to all of ethics, and has perfect clarity. Thus, utilitarianism starts out ahead based on its theoretical virtues. Additionally, it does well in terms of prior plausibility, being immensely intuitive. It just seems obvious that ethics should be about making everyone’s life as good as possible.
2 History As A Guide
History favors utilitarianism. If we look at historical atrocities, they were generally opposed by utilitarians. Utilitarian philosophers were often on the right side of history. Bentham favored decriminalizing homosexuality, the abolition of slavery, and protections for non-human animals. Mill was the second member of parliament to advocate for women’s suffrage and argued for gender equality. In contrast, philosophers like Kant harbored far less progressive views, supporting the killing of people born to unmarried parents, favoring racial supremacy, and believing masturbation to be a horrifically wrong “unmentionable vice.”
Additionally, the atrocities of slavery, the holocaust, Jim Crow, and all others have all come from excluding a class of sentient beings from moral consideration, something prevented by utilitarianism.
If utilitarianism were not the correct moral view, it would be a bizarre coincidence both that it has the mechanism to rule out every historical atrocity and that utilitarians are consistently hundreds of years ahead of their time on important moral questions.
3 A syllogism
These premises, if true, prove utilitarianism.
1 A rational egoist is defined as someone who does only what produces the most good for themselves
2 A rational egoist would do only what produces the most happiness for themselves
3 Therefore only happiness is good (for selves who are rational egoists)
4 The types of things that are good for selves who are rational egoists are also good for selves who are not rational egoists unless they have unique benefits that only apply to rational egoists
5 Happiness does not have unique benefits that only apply to rational egoists
6 Therefore only happiness is good for selves who are or are not rational egoists
7 All selves either are or are not rational egoists
8 Therefore, only happiness is good for selves
9 Something is good if and only if it is good for selves
10 Therefore only happiness is good
11 We should maximize good
12 Therefore, we should maximize only happiness
I shall present a defense of each of the premises.
Premise 1 is true by definition.
Premise 2 states that a rational egoist would do only what produces the most happiness for themselves. This has several supporting arguments.
1 When combined with the other premises, this yields the conclusion that what a self-interested person would pursue for themselves should be maximized generally. However, it would be extremely strange to maximize other things, like virtue or rights, and no one holds that view.
2 Any agent that can suffer matters. Imagine a sentient plant who feels immense agony as a result of their genetic formation, and who can’t move or speak. They’re harmed by their pain, despite not having their rights violated or possessing any virtues. Thus, being able to suffer is a sufficient condition for moral worth.
We can consider a parallel case of a robot that does not experience happiness or suffering. Even though this robot acts exactly like us, it would not matter absent the ability to feel happiness or suffering. These two intuitions combine to form the view that hedonic experience is a necessary and sufficient condition for mattering. This serves as strong evidence for utilitarianism—other theories can’t explain this necessary connection between hedonic value and mattering in the moral sense.
One could object that rights, virtue, or other non hedonistic goods are an emergent property of happiness, such that one only gains them when one can experience happiness. However, this is deeply implausible, requiring strong emergence. As Chalmers explains, weak emergence involves emergent properties that are merely reducible to interactions of the lower-level properties. For example, chairs are reducible to atoms, given that we need nothing more to explain the properties of a chair than knowing the ways that atoms function. Strongly emergent properties, by contrast, are not reducible to lower-level properties. Philosophers tend to think there is at most one strongly emergent thing in the universe, so if deontology requires strong emergence, that’s an enormous cost.
3 As we’ll see, theories other than hedonism are disastrously bad at accounting for what makes someone well off. However, I’ll only attack them if my opponent presents one, because there are too many to criticize.
4 Hedonism seems to unify the things that we care about for ourselves. If someone is taking an action to benefit themselves, we generally take them to be acting rationally if that action brings them joy. This is how we decide what to eat, how to spend our time, and who to be in a romantic relationship with—and it is the reason people spend their time doing things they enjoy rather than picking grass.
The rights that we care about are generally conducive to utility: we care about the right not to be punched by strangers, but not the right not to be talked to by strangers, because only the first is conducive to utility. We care about beauty only if it’s experienced; a beautiful unobserved galaxy would not be desirable. Even respect for our wishes after our death is something we only care about if it increases utility. We don’t think that we should light a candle on the grave of a person who’s been dead for 2000 years, even if they had a desire during life for the candle on their grave to be lit. Thus, it seems like for any X, we only care about X if it tends to produce happiness.
5 Consciousness seems to be all that matters. As Sidgwick pointed out, a universe devoid of sentience could not possess value. The notion that for something to be good it must be experienced is a deeply intuitive one. Consciousness seems to be the only mechanism by which we become acquainted with value.
6 Hedonism seems to be the simplest way of ruling out posthumous harm. Absent hedonism, a person can be harmed after they die, yet this violates our intuitions.
7 As Pummer argues, non hedonism cannot account for lopsided lives.
If we accept that non hedonic things can make one’s life go well, then a life could have very high welfare despite containing any amount of misery. In fact, it could be an arbitrarily good life despite an arbitrary amount of misery. Thus, if a person had enough non hedonic goodness (e.g. knowledge, freedom, or virtue), their life could be great for them, despite their experiencing the total suffering of the holocaust every second. This is deeply implausible.
8 Even so much as defining happiness seems to require saying that it’s good. The thing that makes boredom suffering but tranquility happiness is that tranquility has a positive hedonic tone and is good, unlike boredom. Thus, positing that joy is good is needed to explain what joy even is. Additionally, we have direct introspective access to the badness of pain when we experience it.
9 Only happiness seems to possess desire independent relevance. A person who doesn’t care about their suffering on future Tuesdays is being irrational. However, this does not apply to rights—one isn’t irrational for not exercising their rights. If we’re irrational to not care about our happiness, then happiness has to objectively matter.
10 Sinhababu argues that reflecting on our mental states is a reliable way of forming knowledge as recognized by psychology and evolutionary biology—we evolved to be good at figuring out what we’re experiencing. However, when we reflect on happiness we conclude that it’s good, much like reflecting on a yellow wall makes us conclude that it’s bright.
11 Hedonism is very simple, holding that there’s only one type of good thing, making it prima facie preferable.
Premise 3 says that only happiness is good (for selves who are rational egoists). This follows from the previous premises.
Premise 4 is trivial. For things to be better for person A than for person B, they must have something about them that produces extra benefits for person A, by definition.
Premise 5 says that happiness does not have unique benefits that only apply to rational egoists. This is obvious.
Premise 6 follows from the previous premises.
Premise 7 is trivial.
Premise 8 says that only happiness is good for selves. This follows from the previous premises.
Premise 9 says that something is good if and only if it is good for selves.
It seems hard to imagine something being good, but being good for literally no one. If things can be good while being good for no one, there would be several difficult entailments that one would have to accept, such as that there could be a better world than this one despite everyone being worse off.
People only deny this premise if they have other commitments, which I’ll argue against later.
Premise 10 says that only happiness is good. It follows from the previous premises.
Premise 11 says that we should maximize good.
First, it’s just trivial that if something is good we have reason to pursue it, so the best thing is the thing we have the most reason to pursue.
Second, this is deeply intuitive. When considering two options, it is better to make two people happy than one, because doing so produces more good than merely making one person happy. “Better” is a synonym for “more good,” so if an action produces more good things, it is better that it is done.
If there were other considerations that counted against doing things that were good, those would be bad, and thus would still relate to considerations of goodness.
Third, as Parfit has argued, the thing that makes things go best is the same as the thing that everyone could rationally consent to and that no person could reasonably reject.
Fourth, an impartial observer should hope for the best state of the world to come into being. However, it seems clear that an impartial observer should not hope for people to act wrongly. Therefore, the right action should bring about the best world.
Fifth, as Yetter-Chappell has argued, agency should be a force for good. Giving a perfectly moral agent control over whether some action happens shouldn’t make the world worse. In the trolley problem, for example, the world would be better if the switch flipped as a result of random chance, divorced from human action. However, if it is wrong to flip the switch, then giving a perfectly moral person control over whether the switch flips would make the world actively worse. Additionally, it would be better for a perfectly moral person to have a muscle spasm that results in the switch flipping than to have total control over their actions. It shouldn’t be better from the point of view of the universe for perfectly benevolent agents to have muscle spasms resulting in them taking actions that would have been wrong had they taken them voluntarily.
Sixth, as Yetter-Chappell has argued, a maximally evil agent would be an evil consequentialist, trying to do as much harm as possible even when that involves not violating rights; so a maximally good agent would be the opposite.
Premise 12 says that we should maximize only happiness. This follows from the previous premises.
4 Harsanyi’s Proof
Harsanyi’s argument is as follows.
Ethics should be impartial—it should be the realm of rational choice that would be undertaken if one were making decisions for a group but was equally likely to be any member of the group. This seems to capture what we mean by ethics. If a person does what benefits themselves merely because they don’t care about others, that wouldn’t be an ethical view, for it wouldn’t be impartial.
So, when making ethical decisions one should act as they would if they had an equal chance of being any of the affected parties. Additionally, every member of the group should be VNM rational and the group as a whole should be VNM rational. This means that their preferences should satisfy the four standard axioms of rational decision theory: completeness, transitivity, continuity, and independence. These are accepted across the board (they’re slightly technical, but they’re basically universally agreed upon).
These combine to form a utility function, which represents the choice worthiness of states of affairs. For this utility function, it has to be the case that a one half chance of 2 utility is equally good to certainty of 1 utility. 2 utility is just defined as the amount of utility that’s sufficiently good for a 50% chance of it to be just as good as certainty of 1 utility.
So now as a rational decision maker you’re trying to make decisions for the group, knowing that you’re equally likely to be each member of the group. What decision making procedure should you use to satisfy the axioms? Harsanyi showed that only utilitarianism can satisfy the axioms.
Let’s illustrate this with an example. Suppose you’re deciding whether to take an action that gives 1 person 2 utility or 2 people 1 utility. The above axioms show that you should be indifferent between them. You’re just as likely to be each of the two people, so from your perspective it’s equivalent to a choice between a ½ chance of 2 utility and a certainty of 1 utility. We saw before that those are equally valuable: 2 utility is by definition the amount of utility for which a ½ chance of it is just as good as a certainty of 1 utility. So we can’t just go the Rawlsian route and try to privilege those who are worst off. That is bad math!! The probability theory is crystal clear.
Now let’s say that you’re deciding whether to kill one to save five, and assume that each of the 6 people will have 5 utility. Well, from the perspective of everyone, all of whom have to be impartial, the choice is obvious. A 5/6 chance of 5 utility is better than a 1/6 chance of 5 utility. It is better by a factor of five. These axioms combined with impartiality leave no room for rights, virtue, or anything else that’s not utility function based.
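A minimal sketch of the two examples just given, assuming, as Harsanyi’s setup stipulates, that the decision-maker is equally likely to be each affected party:

```python
def veil_expected_utility(utilities):
    # Expected utility for a decision-maker who is equally likely
    # to occupy each person's position.
    return sum(utilities) / len(utilities)

# Give 1 of 2 people 2 utility, or give both people 1 utility:
print(veil_expected_utility([2, 0]))  # 1.0
print(veil_expected_utility([1, 1]))  # 1.0 -> indifference, as the axioms require

# Kill one to save five (each survivor has 5 utility):
print(veil_expected_utility([0, 5, 5, 5, 5, 5]))  # ~4.17: a 5/6 chance of 5
print(veil_expected_utility([5, 0, 0, 0, 0, 0]))  # ~0.83: a 1/6 chance of 5
```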
This argument shows that morality must be the same as universal egoism—it must represent what one would do if they lived everyone’s life and maximized the good things that were experienced throughout all of the lives. You cannot discount certain people, nor can you care about agent centered side constraints.
5 But Mr. Bulldog, What About Rights?
Rights are both the main reason people would deny premise 9 of the earlier syllogism and the main objection to utilitarianism. Sadly, the doctrine of rights is total nonsense.
1 If things could be good or bad while being good or bad for no one, then a universe with no life could have moral value. The person who denies premise 9 could claim that good things must relate to people in some way, despite not being directly good for anyone, yet this would be ad hoc, and the lifeless-universe result remains a surprising one.
2 If something could be bad while being bad for no one, then a world where everyone is happy and prosperous, but which contains vast quantities of things that are bad for no one yet bad nonetheless, could come out worse than galaxies full of people experiencing horrific suffering for no one’s benefit. For example, suppose we take the violation of rights to be bad even when it’s bad for no one. A world where everyone violated everyone else’s rights unfathomable numbers of times, in ways that harm literally no one, but where everyone prospers, could, given the sheer number of violations, be morally worse than a world in which everyone endures the most horrific forms of agony imaginable.
3 Those who deny this principle usually do so not because the principle itself sounds implausible, but because it rules out other things that they think matter, primarily rights. However, the concept of rights fails disastrously.
1 Everything that we think of as a right is reducible to happiness. For example, we think people have the right to life, and the right to life increases happiness. We think people have the right to keep others from entering their house, but not the right to keep others from looking at their house. The only difference between shooting bullets at people and shooting soundwaves at them (i.e. making noise) is that one causes a lot of harm and the other does not. Additionally, we generally think it would be a violation of rights to create huge amounts of pollution, such that a million people die, but not a violation of rights to light a candle that kills no one. The difference is just in the harm caused. And if enshrining as rights things that we currently don’t think of as rights began to maximize happiness, we would think that they should be recognized as rights. For example, we don’t think it’s a violation of rights to look at people, but if every time we were looked at we experienced horrific suffering, we would think it was a rights violation to look at people.
2 If we accept that rights are ethically significant, then there’s a number of rights violations that could outweigh any amount of suffering. For example, suppose that there are aliens who will experience horrific torture that gets slightly less unpleasant for every leg of a human that they grab, without the humans’ knowledge or consent, such that if they grab the legs of 100 million humans the aliens will experience no torture. If rights are ethically significant, then the aliens grabbing the legs of the humans, in ways that harm no one, would be morally bad; the sheer number of rights violations would outweigh the torture. However, this doesn’t seem plausible. It seems implausible that aliens should have to endure horrific torture so that we can preserve our sanctity in some indescribable way. If rights matter, a world with enough rights violations, where everyone is happy all the time, could be worse than a world where everyone is horrifically miserable all of the time but where there are no rights violations.
If my opponent argues for rights then I’d challenge him to give a way of deciding whether something is a right that is not based on hedonic considerations.
3 A reductionist account is not especially counterintuitive and does not rob us of our understanding or appreciation of rights. It can be analogized to the principle of innocence until proven guilty. That principle is not literally true: a person’s innocence until demonstration of guilt is a useful legal heuristic, yet a serial killer is guilty even if their guilt has not been demonstrated.
4 An additional objection can be given to rights. We generally think that it matters more not to violate rights oneself than to prevent other rights violations; we intuitively think that we shouldn’t kill one innocent person to prevent two murders. I shall give a counterexample to this. Suppose we have people standing in a circle, each with two guns that will each shoot a person next to them. Each person has the ability either to prevent two other people from being shot by another person’s guns, or to prevent one of their own guns from shooting one person. If we take the view that one’s foremost duty is to avoid violating rights oneself, then each person would be obligated to prevent their own gun from shooting the one person. However, if everyone prevents one of their own guns from firing, rather than two of other people’s guns, then everyone in the circle ends up being shot. If, instead, it’s most important to save as many lives as possible, then each person prevents two other guns from firing. The second world would have no one shot, and the first world would have everyone shot. The first world seems clearly worse.
Similarly, if it is bad to violate rights, then one should try to prevent their own violations of rights at all costs. If that’s the case, then if a malicious doctor poisons someone's food, and then realizes the error of their ways, the doctor should try to prevent the person from eating the food, and having their rights violated, even if it’s at the expense of other people being poisoned in ways uncaused by the doctor. If the doctor has the ability to prevent one person from eating the food poisoned by them, or prevent five other people from consuming food poisoned by others, they should prevent the one person from eating the food poisoned by them. This seems deeply implausible.
5 We have decisive scientific reasons to distrust intuitions about rights, which is an argument for utilitarianism generally. Greater reflection and less emotional hindrance make people much more utilitarian, as has been shown by research by Koenigs, Greene et al, and Fornasier. This evidentially supports the reliability of utilitarian judgements.
6 Rights run into a problem based on the aims of a benevolent third party observer. Presumably a third party observer should hope that you do what is right. However, a third party observer, given the choice between a world where one person kills another to prevent 5 indiscriminate murders and a world with the 5 indiscriminate murders, should obviously choose the world in which the one person murders to prevent 5. An indiscriminate murder is at least as bad as a murder done to try to prevent 5 murders, and 5 indiscriminate murders are worse than one indiscriminate murder; therefore, by transitivity, a world with one murder to prevent 5 should be judged better than a world with 5 indiscriminate murders. If world A should be preferred by a benevolent impartial observer to world B, then it is right to bring about world A. All of the moral objections to bringing about world A would count against world A being better; if despite those objections world A is preferred, then it is better to bring about world A. Therefore, one should murder one person to prevent 5 murders. This seems to contradict the notion of rights.
7 We can imagine a case with a very large series of rings of moral people. The innermost circle has one person, the second innermost has five, the third innermost has twenty-five, etc. Each person corresponds to 5 people in the next circle out. There are a total of 100 circles. Each person is given two options.
1 Kill one person
2 Give the five people corresponding to you in the next circle out the same two options you were just given.
The people in the hundredth circle will be given only the first option, if the buck doesn’t stop before reaching them.
The deontologist would have two options. First, they could stipulate that a moral person would choose option 2. However, if this is the case, then a cluster of perfectly moral people would bring about 1.5777218 x 10^69 murders, when the alternative action could have resulted in only one murder. This seems like an extreme implication.
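That figure is just 5^99: with 100 circles, the outermost holds 5^99 people, each of whom kills one person if the buck never stops. A quick check:

```python
# Circle k (counting from 1) holds 5**(k - 1) people, so circle 100
# holds 5**99. If every circle passes the buck, each of them kills one person.
murders = 5 ** 99
print(f"{murders:.7e}")  # 1.5777218e+69
```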
Second, they could stipulate that you should kill one person. However, the deontologist holds that you should not kill one person to prevent five people from being murdered. If this is true, then you certainly shouldn’t kill one person to give five perfectly moral people two options, one of which is killing one person. Giving perfectly moral beings more options that they don’t have to choose cannot make the situation morally worse. If you shouldn’t kill one person to prevent five murders, you certainly shouldn’t kill one person to prevent five things that are judged at most as bad as murders by a perfectly moral being.
8 Huemer considers a case in which two people are being tortured, prisoners A and B. Mary can reduce A’s torture by increasing B’s torture, but only half as much, and she can do the same thing for B. If she does both, this clearly would be good—everyone would be better off. However, on the deontological account, both acts are wrong: torturing one person to prevent greater torture for another is morally wrong.
If it’s wrong to cause one unit of harm to prevent 2 units of harm to another, then an action which does this for two people, making everyone better off, would be morally wrong.
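To see the arithmetic, here is a minimal sketch; the starting torture levels are hypothetical, and each intervention is stipulated to cut one prisoner’s torture by 2 units while adding 1 unit to the other’s:

```python
# Hypothetical starting torture levels for prisoners A and B.
torture = {"A": 10, "B": 10}

def relieve(beneficiary, victim):
    # Reduce the beneficiary's torture by 2 units at the cost of
    # increasing the victim's by 1 unit (half as much).
    torture[beneficiary] -= 2
    torture[victim] += 1

relieve("A", "B")
relieve("B", "A")
print(torture)  # {'A': 9, 'B': 9} -- both strictly better off
```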
(More paradoxes for rights can be found here and here, but I can’t go into all of them here).
6 Some Cases Other Theories Can’t Account For
Many other theories are totally unable to grapple with the moral complexity of the world. Let’s consider five cases.
Case 1
Imagine you were deciding whether or not to take an action. This action would cause a person to endure immense suffering—far more suffering than would occur as the result of a random assault. This person literally cannot consent. This action probably would bring about more happiness than suffering, but it forces upon them immense suffering to which they don’t consent. In fact, you know that there’s a high chance that this action will result in a rights violation, if not many rights violations.
If you do not take the action, there is no chance that you will violate the person’s rights. In fact, absent this action, their rights can’t be violated at all. In fact, you know that the action will have a 100% chance of causing them to die.
Should you take the action? On most moral systems, the answer would seem to be obviously no. After all, you condemn someone to certain death, cause them immense suffering, and they don’t even consent. How is that justified?
Well, the action I was talking about was giving birth. After all, those who are born are certain to die at some point. They’re likely to have immense suffering (though probably more happiness). The suffering that you inflict upon someone by giving birth to them is far greater than the suffering that you inflict upon someone if you brutally beat them.
So utilitarianism seems to naturally—unlike other theories—provide an account of why giving birth is not morally abhorrent. This is another fact that supports it.
Case 2
Suppose one is deciding between two actions. Action 1 would have a 50% chance of increasing someone’s suffering by 10 units, and action 2 would have a 100% chance of increasing their suffering by 4 units. It seems clear that one should take action 2. After all, the person is better off in expectation.
However, non utilitarian theories have trouble accounting for this. If there is a wrongness of violating rights that exists over and above the harm caused, then, assuming the badness of violating rights is equivalent to 8 units of suffering, action 1 would be better (a ½ chance of 18 is less bad than a certainty of 12).
Case 3
Suppose you stumble across a person who has just been wounded. They need to be rushed to the hospital if they are to survive. If they are rushed to the hospital, they will very likely survive. The person is currently unconscious and has not consented to being rushed to the hospital. Thus, on non utilitarian accounts, it’s difficult to provide an adequate account of why it’s morally permissible to rush a person to the hospital. They did not consent, and the rationale is purely about making them better off.
Case 4
An action will make everyone better off. Should you necessarily do it? The answer seems to be yes, yet other theories have trouble accounting for that if it violates side constraints.
Case 5
When the government taxes, is that objectionable theft? If not, why not? Consequentialism gives the only satisfactory account of political authority.
7 Conclusion
In this opening statement I’ve presented a series of considerations favoring utilitarianism. Utilitarianism not only follows from plausible axioms but gets the correct answer to hard moral conundrums every single time. Other theories utterly fail to account for the cases given above, and have nowhere near as plausible axiomatic derivations as utilitarianism does.
Utilitarianism Debate Rebuttal (Part 2)
Ungit's Numerous Errors
A Preliminary Note
Before we get into the details, it’s worth noting that Ungit’s case was just against hedonistic utilitarianism. He had quite literally nothing to say about any account of utilitarianism that holds we should maximize anything other than pleasure, including desire fulfillment and objective list fulfillment.
Dispensing Briefly with some nonsense
Ungit begins his case against utilitarianism by claiming that we say happiness is desired so we should all pursue it, which is a bad argument and one I don’t make. He writes:
“Utilitarians do this under the assumption that all people want happiness and therefore it is an idea that requires no justification.”
Nope.
“Most people if asked will prefer a life lived well (according to their own definition) rather than a happy life. From a personal standpoint, some of my best moments are those moments when I passed on happiness to do what was right. That time I turned down a high paying job because I felt the company was unethical. That time I didn’t cheat on my wife because I care about fidelity. That time I spent time with my sick kid because I didn’t care at all about my own happiness in that moment.”
Utilitarians would agree that those are your best moments because they produced a better world. The thesis of ethical hedonism is that what determines how good your life is for you is how happy you are, but a better life from a neutral standpoint is one that helps others more. Utilitarianism would trivially say that you shouldn’t do bad things that would cause harm.
“For an illustration of this in the extreme, imagine living in some totalitarian state like the USSR. You get called before Stalin and are accused of not being sufficiently loyal. You are told to recant any disloyalty and express your love for Stalin and the USSR. Doing as you are told is certainly the thing that would promote the most happiness. No one would see a heroic stand in that moment and even if they did it would be highly unlikely that it would have a positive effect. And you are certain to suffer more if you do. The best thing from a utilitarian standpoint is to recant and go back to regular life. And yet I would hope that if I was in a similar situation I would bravely speak against tyranny in that moment. I would hope that I would forego happiness in pursuit of truth and virtue. I would hope that I would pursue the heroic and not safety.”
Self-immolation that benefits quite literally no one is not virtuous in the slightest. No one is better off, so I’d just reject the intuition here.
“The pursuit of happiness as the ultimate goal is the philosophy of pigs. Pigs seek food. They seek sex. They seek leisure in the cool mud. But humans inherently realize that we should seek more than that. We should strive for something beyond ourselves. We should want goodness not happiness. I think most people if offered a pill that would leave them in a state of perfect bliss for the rest of their lives would choose not to take it. This alone proves that most people do not view happiness as the ultimate goal. But with utilitarianism happiness is goodness. And pigs are the most virtuous beings there could be.”
Total nonsense. The major contributors to the quality of one’s life are not how much sex they have. The things that bring the most happiness are the deep, meaningful things: significant relationships, knowledge, and contentment. All of the good mental states are broadly included in a utilitarian conception of happiness. I am quite confident that my life will contain more total joy than the trivial pleasures of a pig.
The pill objection is very much like the experience machine. While people might not want to take the pill, that would be an error on their part. What makes a pleasure real and genuine, making it worthwhile? Simulated pleasure is still pleasure.
If we say that happiness is not at all good, absent virtue, then it would be morally neutral to press a button that would make people who have taken the pill only gain half as much happiness from it. This is a very counterintuitive view. Additionally, if we accept this view, we would have to accept one of two other conclusions. If the suffering of the people in the experience machine is morally bad, but their happiness is not morally good, giving them five million units of happiness and one unit of suffering would be morally bad, because it brings about something bad, but nothing good. This is a very difficult pill to swallow. However, if neither their happiness nor their suffering is morally relevant, then it would be morally neutral to cause them to suffer immensely (aside from issues of consent).
Additionally, if the only thing that makes people better off is authentic pleasure, then:
1) In what sense is pleasure gained from eating good food authentic (and if it’s not, does it not matter)?
2) Why does heaven give authentic pleasure?
3) If a person didn’t want authentic pleasure and would take the pill would they be irrational?
4) What determines whether an instance of pleasure makes you better off?
Several factors count against our intuitions about the experience machine. First, there is widespread status quo bias. As Singer and Lazari-Radek argue, most people refuse the happiness pill only because of status quo bias: if you knew your current life was the result of having taken a pill that made you happy, you probably wouldn’t unplug and return to another, unknown life.
This strongly counts against the conclusion that we have an intrinsic preference for reality. Additionally, there is a strong evolutionary reason for organisms to have a preference for actually doing things in the real world, rather than wireheading.
Preference for the real can be explained on a utilitarian account; preferring the real tends to maximize happiness. In cases where it does not, this intuition seems to fade. It is counterintuitive to think that there would be something wrong with plugging into a virtual reality game for a short time, because that is something we have familiarity with, and it tends to maximize happiness.
There’s a major issue for any account which says that only “real” pleasures matter, but experience machine pleasures don’t.
The Problem In Question
Suppose we adopt a particular account of consciousness, according to which one’s experience of an object or event is wholly produced by their brain, and does not truly observe the external world. Instead, the brain receives inputs from the external world and hallucinates a conscious experience that best makes sense of the inputs. Regardless of the plausibility of this account of consciousness, it seems that objective list theory would hold that it would have significant normative implications. If one’s experience of being in love with their wife is truly a mental image generated entirely by their brain in a way that makes sense of the signals it’s receiving from the external world, then when one appreciates their loved one, they’re truly only appreciating the joy they get from the mental hallucination. On objective list theories, this would seem to dramatically undercut the value of all things, for being in love is truly just a mental hallucination that roughly tracks what’s going on in the external world. However, this is implausible. Whether or not we are directly observing things as they are, or our brains are making up the world that we see, in a way that roughly mirrors the real world but is understandable to us should not undercut the value of being in love.
To clarify the case, suppose that when one sees someone with whom they’re in love, the only inputs from the person that they’re truly seeing as they actually are in the real world are a series of photons. The rest of the person is hallucinated by their mind, in a way that roughly tracks what the other person is actually doing, but is not totally identical. If this is true, then consciousness is very much like the experience machine, with our mind creating an experience machine-esque experience in ways that roughly track what’s happening in the real world.
One might object that the well-being we’d experience if this were the case would still fulfill our objective list because it roughly tracks the real world. Thus, it is close enough to actual perception of the good things for it to count as appreciating the good. However, this reply runs into several problems.
First, the objective list theory holds that even if events roughly track what is going on in the real world, as long as they’re not in the real world, they don’t possess the value. Consider placing someone in a modified experience machine: their actual self is replaced in the world by a robot version of themself, which acts exactly as they would but is not conscious, while their actual self is placed in an experience machine whose experiences are identical to what they would be in the real world. Even though the experience machine causes them to experience a life exactly like the one they were living before, and the robot twin acts just like them, such that the world they hallucinate tracks perfectly the life they were previously living, an objective list theorist would presumably find this objectionable.
Additionally, we can modify the case slightly. Suppose that the version of the world that they hallucinated was subtly different from the actual world. For example, in the actual world their wife is relatively physically unattractive. However, their hallucination of the actual world perceives their wife as being very attractive. In this case, the reality that they’re experiencing does not truly track the external world. However, this still does not seem to have any significant normative implications.
To illustrate this point more, suppose that my true self is a million miles away. I am a brain in a vat, but I experience everything exactly as it would affect me if I were actually generated by the brain inside the body that I experience being part of, and the choices I make have exactly the same effects as they would if I were actually generated by the brain inside the body I experience being a part of. This case is exactly like the case of my actual self, except the thing generating my subjective experience is merely located out of my body and far away. This doesn’t seem to possess any significant normative implications.
Now suppose we accept epiphenomenalism—the view that consciousness is causally inert. Thus, while I’m still the brain in the far away vat, my thoughts don’t cause events. Instead, it’s the brain waves in the body that I feel like I’m part of that both generate my thoughts and the actions that the body takes. Surely epiphenomenalism wouldn’t undercut the value of objective list fulfillment. However, in conjunction these make it so that I’m a discombobulated brain millions of miles away that’s merely experiencing what life would be like inside my body, but that is causally unconnected from my body. The objective list theory holds that being a causally inert consciousness is not enough to undercut the value of love, but being a causally inert consciousness a galaxy away, whose experiences are projected from information sent by observing brain waves, does undercut the value of true love. This is deeply implausible.
It seems like if we accept
1 Having the true source of our consciousness be far away is morally irrelevant
2 Having the true source of our consciousness be causally inert is morally irrelevant
We’d also accept that having the true source of our consciousness far away and causally inert is morally irrelevant.
More Dispensing with nonsense
The next objection is that in Brave New World people say they’re happy, but their lives actually seem bad. However, this would just mean that the people in Brave New World are obviously not actually happy, given that their lives are mostly meaningless. By its nature, one cannot fail to enjoy happiness while experiencing it.
“Further, even if I do pursue happiness for myself as the ultimate goal, the utilitarian assumption that I should therefore pursue happiness for others does not follow. Why should I care if someone that I don’t know living in a country I have never heard of is happy?”
I don’t make that assumption, nor do other utilitarians. He’s basically exclusively going after Mill here, ignoring the numerous other arguments provided by utilitarians.
“Further, why should we? Utilitarianism assumes that we should. But why? How can you use utilitarianism to prove utilitarianism? How does my looking out for a starving kid in Africa improve my own happiness?”
Utilitarianism is an account of what is moral, not an account of why we should be moral. This is thus irrelevant to the topic of the debate, though I’ve explained why here. Utilitarianism makes no claims about whether we should be egoists—I think quite strongly that we shouldn’t be.
“For countless people throughout history, the idea of pursuing universal happiness was a foreign idea. It was not obvious at all. Yet Utilitarianism is built on this being an axiom that requires no justification. How can something that few people in all of history have actually believed be taken as an axiom of truth?”
Well, lots of people think it sounds intuitive at first. But quite importantly, I’m not claiming that it’s an axiom that we should accept; I provided robust arguments for it.
It also does seem intuitively obvious that things continue to matter even if people are far away. I would have a duty to stop a genocide, even a far away one.
“Sometimes Utilitarians will state that by pursuing universal happiness it will increase the odds that every individual person is happy.”
I do not say that.
“But now we come to an even bigger problem with Utilitarianism: we have no good way to define happiness.
“Sometimes we get the idea that we can measure happiness like we can measure mass. But consider the following: which is better? To be happy all day long (maybe sipping cocktails by the pool with a beautiful woman) or to be knee-deep in mud in a jungle with a bunch of stinky men? The answer is not as easy as you might initially think.
“Suppose in the second case you are a soldier, and you are fighting for your country’s freedom. Suppose you are at that moment doing what you think God made you to do. If a social scientist handed you a poll asking how happy you were on a scale of zero to ten, you would probably put a negative number. But you would not trade places with the cocktail-sipping coward on the beach for anything - doing so would make you less happy, not more.
“You can add up all the happy moments you have and find that a person who had considerably fewer happy moments actually had a happier life. Consider a missionary who lives in a mosquito-infested warzone feeding, clothing, and spreading the gospel with the poorest of the poor. Compare this missionary to our rich playboy on the beach. Who, as they lay down their heads to take their last breath, is happier? And how does a social ‘scientist’ measure that? They do not. They cannot. And yet, Utilitarianism demands that we know the answer to this question.”
Happiness is defined as mental states that are desirable. There may be tricky edge cases in which it’s unclear which action creates more happiness, but we can know introspectively that, for example, being set on fire causes lots of suffering and being in love causes lots of happiness.
The playboy at the beach isn’t a counterexample: plausibly, the person on the beach really would be happier. However, it is sometimes good to make other people’s lives go well even at the expense of your own. The person who sacrifices themselves for others is being noble, but their life will be less happy. They should still be noble, because one oughtn’t maximize only their own happiness.
“Even if you grant the foundation, it fails. Let’s pretend for a moment that universal happiness was our goal and that we all agreed to pursue it. Would it work?
“The answer is that there is no way it can work. It cannot work because it requires constant very subjective calculations on happiness. To illustrate why this is a problem let me offer a few scenarios.
“Imagine that you are a businessman on a trip to Paris. You meet a beautiful Parisian woman who knows that you are married and does not care. She wants to have sex and you very much want to have sex with her. But you are a Utilitarian so you stop to determine if this is right. What action will maximize happiness? Because you are overseas and alone there is almost no chance that your wife will find out. And you will wear a condom to make sure that no STDs or pregnancy could result. So your wife’s happiness is extremely unlikely to be affected by your decision. On the other hand, your own happiness will be greatly increased. The memory of having sex with a beautiful woman in Paris will be something you can enjoy the rest of your life. And the Parisian woman says that this is what will make her very happy as well. You can tell the woman is lonely and are sure that this will be a net positive for her happiness. Further, you think, even if (one in a million chance) somehow your wife finds out, you are pretty sure whatever unhappiness she gets from this unpleasant discovery will not outweigh the great happiness that you will get. She will forgive eventually and all will be fine but you can eternally treasure this moment.”
I think that having an affair is like driving drunk when you made an oath not to. While it’s technically possible that there will be no harm done, you shouldn’t do it, because the risk of harm is immense and it’s for trivial benefits. Things will go best if you’re the type of person who doesn’t cheat on their spouse. If you cheat on your spouse, let’s say there’s a 1% chance of getting caught and the sexual act lasts 15 minutes. Let’s even ignore everything else and say that there’s a 10% chance of divorce if you’re caught, and no risk of harm otherwise. That means the affair carries a 0.001 chance of causing a divorce. Let’s stipulate that you would otherwise be married for on average 15 more years. This means the affair costs on average over 130 hours of marriage—a cost certainly not worth paying, in utility for yourself or your wife, for 15 minutes of sex. So no, utilitarianism doesn’t sanction affairs.
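Spelling out those back-of-the-envelope numbers (every figure here is stipulated in the paragraph above):

```python
p_caught = 0.01             # chance the affair is discovered
p_divorce_if_caught = 0.10  # chance discovery ends the marriage
years_remaining = 15        # expected further years of marriage

p_divorce = p_caught * p_divorce_if_caught       # 0.001
hours_lost = p_divorce * years_remaining * 365.25 * 24
print(round(hours_lost, 1))  # ~131.5 expected hours of marriage lost
```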
“Now imagine the judge in a frontier town somewhere in the Old West. He has before him a notorious horse thief. As a judge in the Old West he knows that horse stealing is one of the worst offenses. A man’s horse is his means of living. If a farmer has no horse he can’t plow his fields. If he can’t plow his fields his family will starve to death. So what, as a judge, does he do with this horse thief? As a good Utilitarian, he sits down and thinks about it. He decides that the only moral punishment for this horse thief is to drag him to the town square and have him tortured to death. Yes the idea sounded harsh at first but then he did the calculation. He realized that starving to death is one of the worst forms of torture. Therefore a single stolen horse could result in the torture of an entire innocent family. He knows that by torturing that horse thief to death, he is maximizing the deterrence to other potential horse thieves. And if by doing so he can avoid just a single family starving, the calculation will pay off. So it is clear: one man being tortured to death rather than a whole family (or more) being tortured to death.”
What? For one, if stealing a horse really will kill someone, the thief has effectively just committed murder and should thus be punished harshly. But second, if the claim is that utilitarians are too harsh on murderers, it’s just false: it tends to be non utilitarians who want wrongdoers to suffer intrinsically, whatever disvalue it may bring. All of the deterrence-based reasons for punishing would be considered by a non utilitarian too, but they also want to harm bad people for its own sake.
“I could provide other hypothetical scenarios. Times when dictators might humanely gas a certain portion of society to death in order to maximize the happiness of the whole. Times when stealing from a big corporation might hurt no one significantly but make one person much happier. Times when lying about accomplishments hurt no one but bring happiness and wellbeing. Utilitarianism can be used to justify every sort of degenerate and immoral act that you can think of if you do the calculation right.
“But my Utilitarian readers might object: ‘You are not doing the calculation right!’ They might explain why the adulterer in Paris was wrong or the judge of the horse thief was wrong. They might explain why the dictator, the thief and the liar were all wrong. But whatever objection you come up with isn’t relevant because you are not the one making the calculation in that moment! Utilitarianism requires constant very subjective calculations to be done by everyone all the time. There are no 10 Commandments. There is only one precept: maximize happiness. As a result, if we all lived as Utilitarians, no one could say, ‘adultery is wrong’. We could only say, ‘adultery that leads to a net unhappiness is wrong.’ And the only person who is able to make that calculation in the moment is the potential adulterer.”
The fact that people apply a moral theory incorrectly is not an argument against it. Lots of Christians have used Christianity to justify slavery, subjugation of women, and other atrocities. If we look at historical Christians, their track record was waaaaaaaaay worse than that of the utilitarians—they used belief in God to justify all sorts of horrors.
“It fails to provide morals without religion because the happiness calculation depends very much on whether or not any given religion is true. Let’s take the adultery example. If adultery sends you to hell where you are tortured with pitch forks for all eternity, then the calculus changes. Did the adulterer calculate hell in his equation? So determining what has the most utility to happiness also requires determining what God (or gods or whatever your religion tells you will lead to happiness) wants. Utilitarianism then has a very practical flaw of failing to do the very thing it was built to do: determine an ethical framework without regards to religion.”
This is not the point of utilitarianism. Utilitarianism wasn’t developed to get around religious morality; it was developed because people thought it was true. Yes, you should add God to the calculation, but you should probably then ignore him, given the vanishingly small probability of his existence. A good ethical theory SHOULD include in its calculus the odds that an act will make you burn forever. Utilitarianism is a moral theory, so it doesn’t say anything about causal facts like whether God exists.
“So what is the alternative? The alternative is a Divine law.”
1 God cannot solve the problem of morality: if morality would be subjective absent a god, then god could not make it objective. If morality is simply a description of preferences, then god can no more make objective morality than he could make objective beauty or objective tastiness.
2 Utilitarianism could be the best moral view even if morality were subjective (though I do think it’s objective). Additionally, there’s a robustly realist account of the goodness of pleasure: much like mental states can have properties like brightness, they can have normative properties.
3 This runs into the Euthyphro dilemma: is it good because god decreed it, or did god decree it because it’s good? If the former, then good is just whatever god decrees, and there’s no reason good is binding; if Satan were ultimate, he could decree that things were good. However, if god decrees it because it’s good, then good exists outside of god. Some try to avoid this problem by saying that god’s nature is good, so goodness is neither because of nor in spite of divine decree. However, this just raises the deeper question of whether it’s good because it corresponds to his nature or whether it corresponds to his nature because it’s good. Thus, it doesn’t avoid the problem, because if god’s nature were evil, then evil would be justified.
4 Either God has reasons for his commands or he doesn’t. If he does, then those reasons would ground morality; if he doesn’t, then his commands are arbitrary and lack reason-giving force.
5 There already has to be objective morality for there to be an objectively moral being. Thus, this is like arguing that we should believe Millard Fillmore was a good president because that accounts for goodness.
6 God is presumably not the objective standard for Glubglosh—which is gibberish. Yet if one thinks that morality wouldn't exist without God, then saying God is good is like saying God is the standard for Glubglosh. God needs objective morality to exist.
7 This seems to obviously misidentify what morality is. Morality has to have reason giving force. However, it’s not clear how theistic morality does. God’s character being anti child murder misidentifies why child murder is bad. If God disappeared the badness of child murder would not disappear. The theist has to say that the badness of brutally torturing children has nothing to do with harm to children and everything to do with God’s character being disapproving. This is not a plausible account of moral ontology.
8 If God grounds morality then morality can just be grounded in what God would decree if he existed.
9 Morality has to either be true in all possible worlds or true in none. God can’t affect things that are true in all possible worlds, any more than he can ground mathematics or logic.
10 In order for God’s commands to give us morality, we have to already have a moral obligation to obey God’s commands, which means God needs morality to exist. This argument came from none other than Mackie, the guy Craig quoted to prove atheists can’t have objective morality. He doesn’t think theists can either.
Divine command theory is a view of meta-ethics, not normative ethics; thus, it’s not a rival to utilitarianism. The view of well-being espoused was objective list theory. However, it’s worth noting:
1 You can be an objective list theorist and a utilitarian, just not a hedonist.
2 Objective list theory has lots of problems.
First, it’s difficult to give an adequate account of what makes something part of the objectively good list. Hedonism is monist, saying that there’s only one type of thing that is good. Objective list theories problematically say that there are a series of unconnected things that are good. This is less parsimonious and fails to provide an adequate account of how things are good. It seems a priori more plausible that there would be some good experiences than that there would be an unrelated bundle of good things that aren’t tied to experience.
Second, objective list theories can’t account for why things are only good for sentient beings. It seems conceivable that, on objective list theories, non-sentient beings could fulfill things on their objective list. Objective list theories just say that things are good in virtue of being part of an objective list; however, there’s no necessary correlation between beings experiencing happiness or suffering and things being able to be part of the objective list.
Third, objective list theories can't account for why all the things on the objective list are generally conducive to happiness. Virtue, friendship, love, and decency are all generally conducive to happiness. Hedonism predicts this correlation; objective list theory must treat it as a coincidence.
Fourth, objective list theories are elitist, holding that things can be good for people even if they neither want them nor derive any positive experience from them. It’s counterintuitive that an unenjoyable experience that one doesn’t want can be good for them.
Fifth, all of the things on the objective list only seem good if they’re generally conducive to happiness. We might hold that knowledge is good, but it would be strange to suppose that arbitrary facts that benefit no one are good. The world would not be a better place if we all had the information about whether the number of particles in the universe were even or odd. Friendship might be good, but only if the friends are made mutually better off.
Sixth, it runs into all of the issues that I highlighted supporting hedonism, especially the problem of lopsided lives.
Virtue ethics was also highlighted; however, it has many problems of its own.
1 It just plugs in our intuitions, but the track record of moral failures shows our moral intuitions are unreliable.
2 It leaves us with no method for deciding upon what the virtues are.
3 It gives us no way to make decisions because there will always be virtues on both sides. How, for example, would we use virtue ethics to decide upon tax policy?
I lay out more objections here.
Utilitarianism Debate Rebuttal Part 3
Responding to Ungit's post
In my ongoing debate with Lewis Ungit, I was hoping that the many arguments I gave in my post would be grappled with. Sadly, most of them were not addressed in Ungit's rebuttal, and the responses to the ones that were addressed totally missed the boat.
In my article I began by pointing out several theoretical virtues, including simplicity, explanatory power, explanatory scope, and intuitiveness. Utilitarianism does well in terms of these, which counts in its favor.
Mr. Bulldog says that the philosophy has some “theoretical virtues.” Among these, he states that simplicity is an important virtue. He likes that the theory requires only a single moral maxim: “saying one should maximize the positive mental states of conscious creatures, explains all of ethics, applies to all of ethics, and has perfect clarity.” He also likes that it is “intuitive,” stating that, “It just seems obvious that ethics should be about making everyone’s life as good as possible.”
Both of these theoretical virtues are profoundly flawed. First, the simplicity of the idea is an illusion. While it is true that it has a single maxim, the application of it is complex beyond any other ethical framework requiring a new calculation for every moral decision. Let’s take an example. Let’s compare a person that takes the 10 Commandments as their moral guide and the person that takes just the single command of Utilitarianism as their guide (maximizing happiness). Let’s suppose one of your neighbors is on trial for murder. You know this man is a monster. He has already confessed to multiple depraved and violent acts. You are quite confident that the world would be much safer without him in on the streets. And everyone is very sure that he is guilty of the murder. You were in the area and saw someone do the murder. It might have been him but honestly you didn’t get the best look. Do you lie and say you got a good look and it was definitely him or do you tell the truth? If you are a follower of the 10 Commandments, you have the minor complication of having to sort through 10 commandments to find one that applies. You get to the ninth commandment “You shall not give false testimony against your neighbor” and that clearly applies. Based on that, you tell the truth and hope that the police find other evidence. But if you are a follower of Utilitarianism the decision is not so simple. You have to do a calculation: would lying bring more happiness into the world? And the answer to that question is not obvious at all. By lying you could help get a dangerous man off the streets. You could potentially save lives. You could bring justice to a murderer. By telling the truth, you could endanger your neighborhood. Of course there are risks the other way. If you lie and get caught you could go to jail (unhappy). You could be wrong and some other murderer could get away as a result (unhappy). Etc etc. What should you do? As you can see a fairly simple question: should I lie in court against my neighbor becomes very complicated with the supposedly simple Utilitarianism and it becomes very simple with the supposedly complicated Christian ethics.
For one, none of the other explanatory virtues were addressed at all, so it has been functionally conceded that these favor utilitarianism. The response to simplicity is a total confusion. The claim is not that utilitarianism is easy to apply in the real world—it's that it's ontologically simple. Occam's razor says we should posit the fewest entities needed, so to the extent that a theory requires positing a wide array of complex moral maxims, that counts against it. How complex utilitarianism is to apply in the real world is not relevant in the slightest to analyzing its theoretical virtues.
The point about framing an innocent person has already been addressed by me in print here. I’ll post that article in full at the bottom because I have the word count to do so.
Secondly, as I discussed in my post, Against Utilitarianism, Mr. Bulldog is wrong that “It just seems obvious that ethics should be about making everyone’s life as good as possible.” It seems strange that we would call something obvious that so few people in history have believed. Most people look out for themselves and their own family. Some have loyalty to their tribe or race. But in the whole history of humanity precious few have ever sincerely stated that they cared about “everyone” in the whole world.
People often reject utilitarianism when they hear about its other implications. However, when people first hear about utilitarianism, a very common first reaction is "hmm, that does sound reasonable." Most people don't end up as utilitarians, but it has lots of prima facie plausibility.
Ungit then explains the is-ought problem and accuses me of running afoul of it.
How does the Is / Ought problem apply to Mr. Bulldog’s argument? Even if he were right and everyone thought it was obvious that we should maximize wellbeing, that would not tell us if we should maximize wellbeing. The IS (we all want to do something) doesn’t justify the OUGHT (we should want to do that thing). It is very easy to conceive of us all wanting to do something bad. Hatred and violence are qualities that we all are born with some propensity towards. They are the IS. But we all agree that hatred and violence OUGHT not be. So just because we are born with the capacity to have empathy towards others (wanting their happiness) doesn’t make that a good thing. In order to determine it if is good we need some other law to appeal to. But with Utilitarianism there is no other law.
This totally misrepresents my argument. The claim is not that utilitarianism's truth ontologically hinges on many people believing it. The claim is merely that lots of people finding it intuitive counts in favor of believing it's true. While you can't get an ought from an is, if lots of people think x is wrong, that is evidence that x is wrong. My opponent would presumably hold that one thing favoring Christian ethics is that it seems plausible upon reflection, yet that doesn't run afoul of the is-ought problem.
The is-ought problem as a meta-ethical stopping point to dispense with any normative claim is a total nonstarter. A moral realist is either a moral naturalist, in which case they think moral claims are a type of is claim, so we can derive them from other is claims, or a non-naturalist, in which case they won't try to get an ought from an is because they think the ought is non-natural. Either way, the objection is fully impotent.
Another claim Mr. Bulldog’s argument makes is that Utilitarianism holds up well in the light of history. He says that atrocities were “generally opposed by Utilitarians”. I don’t know what sort of survey he has done to try to validate this claim but it seems obviously wrong to me. Every one of the worst dictators in the past 100 years has pointed to some greater good as their justification for doing what they did. Famously, Aristotle used a very Utilitarian argument to support slavery arguing that it was for the greater good of all (including the slaves) that some of the best men have leisure time to think about good government, agriculture, and higher ideas. Taking away the institution of slavery would make everyone less happy. Interestingly, in the US Old South these sorts of arguments were quite common. And when we look at historic reforms such as the end of slavery, the introduction of welfare programs for the poor, the establishment of schools for the public, etc, most of those reforms came from deeply religious people (often Christian) living by a religious ethical standard not a utilitarian one.
Pointing to people who were not utilitarians does not negate the point that utilitarians were generally on the right side of history. All of the early utilitarians were very progressive and far ahead of their time—Bentham supported legal homosexuality in 1798. The amorphous greater-good claims were not utilitarian. I actually had a citation for this (obviously not a poll result) from utilitarianism.net. My interlocutor was unable to produce a single utilitarian who supported slavery, Jim Crow, or the holocaust, while numerous Christians did and used Christianity to justify it. There was also no response to the expanding moral circle argument, which says that every atrocity in history has come through a contracted moral circle, so utilitarianism is better because it expands our moral circle to everyone.
Mr. Bulldog argument then makes a syllogism with a list of 12 premises to attempt to prove Utilitarianism. In the interest of space, I am not going to go through every premise but I will note that his very first premise seems very wrong (thereby undermining the entire syllogism). He writes, “A rational egoist is defined as someone who does only what produces the most good for themselves.” But is this right? If a soldier dies for his country (not for himself) is he acting irrationally? If this premise is correct, then no one should ever be willing to die for any cause. And yet don’t we view the martyr as the most moral person?
This was the entire response to a syllogism that I spent several thousand words defending. I would assume that this was not really an objection to premise 1, given that premise 1 was just a definition stipulating how a term would be used in the rest of the syllogism, but instead an objection to premise 2.
Premise 2, which I offered a dozen arguments for across a thousand or so words, was totally ignored. So the only objection to this logically valid syllogism is an implicit challenge to premise 2 that ignores all of the arguments I gave for premise 2. Thus, even if I totally conceded this response, the debate would still favor utilitarianism based on the sheer quantity of unrebutted argument.
The question about the soldier is totally confused. The claim is not that one is only rational if they do what's best for themselves. The claim is that they're a rational egoist if they do what's best for themselves. This is true by definition. Note that this doesn't get us to utilitarianism on its own; we need the other 11 premises to do that, ones which were totally ignored in the response.
He then provides an argument from economist and philosopher John Harsanyi. The beginning of the argument is as follows:
“Ethics should be impartial—it should be a realm of rational choice that would be undertaken if one was making decisions for a group, but was equally likely to be any member of the group. This seems to capture what we mean by ethics. If a person does what benefits themselves merely because they don’t care about others, that wouldn’t be an ethical view, for it wouldn’t be impartial.”
Do you notice anything strange about that introduction? Look at this claim: “Ethics should be impartial.” Really? Says who? How do we form an ethical framework by starting with a giant unsupported ethical claim? How is that not a circular argument?
Ungit would do well to read the second sentence: "This seems to capture what we mean by ethics. If a person does what benefits themselves merely because they don't care about others, that wouldn't be an ethical view, for it wouldn't be impartial." Given that this is what we mean by morality, the question is merely whether utilitarianism describes how we'd act if we were rational and impartial. If you think this is not what we mean by it, can you name one example of someone calling another person immoral where they aren't claiming that person is failing to be either impartial or rational?
He then says, “If a person does what benefits themselves merely because they don’t care about others, that wouldn’t be an ethical view, for it wouldn’t be impartial.” But I thought we were basing Utilitarianism on an inherent desire for happiness? But now it seems there is some other good (the desire to be impartial) that should supersede our desire to be happy? Why does this not undermine the entire Utilitarian thought experiment?
NO!!!! Let me make one thing very clear—I am not rehashing John Stuart Mill's mediocre argument. I do not think it works. I do not think that happiness is all that's desired. If Ungit had responded to my argument rather than to the replica of Mill living rent-free in his head, this would be obvious.
This also totally misses the point. The claim is not that we’re in fact impartial. The claim is just that morality is about what we’d do if we were rational and impartial. This is a definition.
Another problem with Harsanyi’s is that the demand for impartial ethics is asking for something that never exists. There are no impartial people. We are all inclined to be biased. We might clothe our judgments in claims of impartiality but in reality we are always looking out for our own interests, the interests of people we love, and the interests of those that can advance our interests
This is a non sequitur. The claim is not that we're in fact impartial; the claim is that morality is about what we'd do IF we were impartial. An analogy can be given with rationality: if we're trying to figure out whether it would be rational to believe in God, we're asking whether you'd believe in God if you were totally rational. Obviously no one is totally rational, but the question is what the views of a perfectly rational person would be.
“Suppose we have people in a circle each with two guns that will each shoot the person next to them. They have the ability to prevent two other people from being shot, with another person's gun, or to prevent their own gun from shooting one person. Morally, if we take the view that one's foremost duty is to avoid rights violation, then they would be obligated to prevent their gun from shooting the one person. However, if everyone prevents one of their guns from being shot, rather than two guns from other people from being shot, then everyone in the circle would end up being shot. If it’s more important, however, to save as many lives as possible, then each person would prevent two guns from being shot. World two would have no one shot, and world one would have everyone shot. World one seems clearly worse.”
Here I think it is worth noting that there is a major difference between Utilitarianism as a system of thought by which we get all morals and utility as a way of evaluating some moral judgments. Utilitarianism as Mr. Bulldog has noted starts off with a single premise (maximize happiness) while other ethical systems start off with a list of precepts (for example the 10 Commandments or the Sermon on the Mount). But it does not follow that the follower of the 10 Commandments cannot think about utility when trying to live out his life. If the Sermon on the Mount says, “Blessed are the peacemakers, for they will be called sons of God,” the precept is to be a peacemaker. But we can then evaluate which actions maximize making peace (do we do it at the political level, the social level, via poetry or art, etc). In the same way, as we think of Bulldog’s example, this is not actually a moral question about rights versus utility as it is a discussion on how to preserve the human rights of as many people as possible. If we could perfectly know that shooting two people would save lives, this would not be a rights violation so much as an effort to maximize the human rights preserved.
He just totally ignored the reason why it would be a rights violation. People who believe in rights think you shouldn't violate one person's rights even to prevent two more rights violations. This case, however, is a counterexample to that principle.
“Suppose you stumble across a person who has just been wounded. They need to be rushed to the hospital if they are to survive. If they are rushed to the hospital, they will very likely survive. The person is currently unconscious and has not consented to being rushed to the hospital. Thus, on non utilitarian accounts, it’s difficult to provide an adequate account of why it’s morally permissible to rush a person to the hospital. They did not consent, and the rationale is purely about making them better off. “
But as in the case of the circle of shooters, this is not actually a question of utility versus precept but a question of practical application of a variety of precepts at the same time. Respecting their right to consent and the right to life are both good precepts and the person in that situation needs to weigh which of those are the higher precept. Most people would state that the right to life is more important the right not to be moved. Evaluating this question is not giving into Utilitarianism at all.
This is a different case from the one about the circle of shooters, and no account was provided of why rushing the person to the hospital isn't an objectionable rights violation. The only adequate account is utilitarian.
However, Ungit had quite literally no response to dozens of arguments for utilitarianism here. I provided 8 separate responses to rights; only one of them was addressed. The response to the others was crickets. Reread my original article and see if you think he addressed it. It will be abundantly clear they were mostly ignored. I included the other arguments for a reason: they were worth addressing. I also provided 5 cases in part 6 where only utilitarianism can get the right answer; only a single one was addressed. I won't repeat the arguments, but just note that most of my article was totally, 100% ignored.
In conclusion, having read Mr. Bulldog’s opening statement, I don’t think we are any closer to being able to support Utilitarianism. He rightly noted some situations where utility can be evaluated but he wrongly assumed that is only part of Utilitarian ethics and not a necessary part of all other ethical systems of thought.
Well, given that most of the arguments were ignored and the ones that were addressed were misrepresented, I think that we are closer to showing that utilitarianism wins outright.
Appendix: My Article About Framing Innocent People For Fun And For Profit
Michael Huemer has another objection to utilitarianism, thinking that it justifies framing innocent people to save lives, which he thinks is bad. Huemer writes “b. Framing the innocent
You’re the sheriff in a town where people are upset about a recent crime. If no one is punished, there will be riots. You can’t find the real criminal. Should you frame an innocent person, causing him to be unjustly punished, thus preventing the greater harm that would be caused by the riots?”
Yes!
One first point is that in most realistic situations, one ought not frame people. Thus we have strong reason to say no to the question of whether innocent people ought to be framed, even if we can imagine rare situations in which framing would maximize happiness.
Second, we can explain away our revulsion sociologically: the revulsion we rightly feel toward framing in ordinary cases, where it does not maximize happiness, carries over to the contrived cases where it does.
Third, we can make modifications like the ones made in part five by making the people family members. Surely you would rather have one family member framed than all your family members killed.
Fourth, suppose we could prevent either five murders from a lynch mob, or one innocent person from being framed and killed by a lynch mob. Surely we should prevent the former. One could appeal to the act omission distinction in this case. However, we can modify the case to avoid this.
Imagine a case in which a person (we'll call him Tim) wrote an anonymous letter that, once delivered, would frame an innocent person, who would then be killed by the mob. However, after writing and mailing the letter, Tim had a change of heart and decided to prevent it from being delivered. When he uncovered the stash where his letter was stored, it turned out to be in an iron box with a robust security system, such that if two letters were taken out and destroyed, it would sound the alarm and the letters would be returned to their original location. As Tim is about to take out his letter, he sees another letter with the names of five people on it, people who Tim knows are being framed and will be killed by the mob. If Tim takes his own letter out, he will not have framed an innocent person, and no one will be left worse off as a result of Tim's actions. However, if Tim takes out the letter containing five names, he will prevent a mob from killing five innocent people. In this case, it seems very intuitive to take out the letter with five names, yet the case is very much like the earlier one. When one is taking out the letters, it should not matter who wrote them. Additionally, suppose one wrote the letter in one's sleep (sleep letter-forging is considerably rarer than sleepwalking, but not impossible). In that case, it seems even more bizarre to take out one's own letter merely because one wrote it, rather than the letter whose removal would save five lives.
One could object that the cases are not parallel. However, the cases have been designed to be identical in morally relevant respects. In both cases, one is framing an innocent person to save five people. The only difference is that one framing is dragged out over a longer period of time, and is done over mail. Those, however, are not morally relevant differences.
Fifth, let’s imagine another parallel case. A mob is planning on killing a large number of people, because they think that they all conspired to do the crime. They believe that conspiring to do the crime is especially horrendous (they take a hardline stance against conspiracies), so if they believed that only one person committed the crime they would beat that person severely, but not kill that person. However, that person would otherwise be killed by the mob, along with everyone else. Thus, it would be better for literally everyone if that person is framed. In that case it seems intuitive that you should frame the person to reduce the severity of harm to them, and save everyone else’s life.
Sixth, we can consider a case similar to the one I appealed to when arguing against magic rights.
There are 100 concentric circles, the first with one person, the second with five, the third with 25, the fourth with 125, etc. Each person has 5 corresponding people in the circle outside of them. (E.g., person 1 in circle 1 corresponds to all five people in circle 2; Jim in circle 2 corresponds to Mary, Sue, Edith, Rampampam, and Gulac in circle 3; etc.) The first 99 circles are comprised of perfectly moral people. Each person has two options.
They can either
1 Frame an innocent person
2 Give the five people corresponding to them the options they were given.
However, the people in the 100th circle will certainly frame the innocent person. If we hold that it's not worth it to frame an innocent person to prevent 5 murders, presumably we'd hold that it's not worth it to frame an innocent person to prevent five framings. And if it's not moral to frame an innocent person to prevent five framings, then it seems clearly immoral to frame an innocent person to prevent five perfectly moral people from having two options, one of which is framing an innocent person. Giving perfectly moral people extra options can't be bad, because if the extra option is worse than framing an innocent person, they won't take it; they'll only take it if it's better than framing an innocent person. Thus, we have the following conclusion.
If you should not frame an innocent person to prevent 5 murders, you should not frame an innocent person to prevent 5 framings. If you should not frame an innocent person to prevent 5 framings, then you should not frame an innocent person to prevent 5 perfectly moral people from having two options, one of which is framing an innocent person. But if that is true, then one ought always to take option 2. Unfortunately, if everyone takes option 2, a ring of perfectly moral people acting rightly would result in 5^99 innocent people getting framed when, had they acted differently, only one person would have been framed. This is far more unintuitive than the utilitarian conclusion.
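To make the arithmetic vivid, here's a minimal sketch of the circle case's numbers (my own illustration; the function name is hypothetical, not from the original post), assuming circle k contains 5^(k-1) people:

```python
# A minimal sketch of the circle case's arithmetic, assuming circle k
# contains 5**(k - 1) people. The function name is my own invention.

def framings_if_delegated_to(k: int) -> int:
    """Innocent people framed if everyone takes option 2 until circle k,
    whose members each take option 1 and frame one person."""
    return 5 ** (k - 1)  # one framing per member of circle k

print(framings_if_delegated_to(1))    # circle 1 frames at once: 1 framing
print(framings_if_delegated_to(100))  # all 99 moral circles delegate: 5**99
# 5**99 is roughly 1.6 * 10**69 framings, versus a single framing otherwise.
```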
Morality doesn’t care about who takes actions—it instead cares about what actions are taken. Our intuitions are wrong. This conclusion may seem repugnant. Fortunately, Huemer already accepts the repugnant conclusion.
Non-utilitarians will have to do better than this rights nonsense in order to demonstrate a decisive counter-example.
Utilitarianism Debate Part 4
Responding to Ungit's response to my response to his opening statement
“my arguments weren’t addressed,”
—William Lane Craig
“He did mention premise 1 and I’m grateful for that.”
—Alvin Plantinga
Ungit has responded to my response to his article. Many of my points were not addressed, and those that were were not addressed adequately.
I stated that most people would not take a pill that just made you happy for the rest of your life. I made this observation as part of my argument that most people do not actually pursue happiness as their ultimate goal. Mr. Bulldog states that a failure to take the pill is a mistake. He writes,
“The pill objection is very much like the experience machine. While people might not want to take the pill, that would be an error on their part. What makes a pleasure real and genuine, making it worthwhile? Simulated pleasure is still pleasure.”
He then goes on an extended argument explaining why we should want it.
But yet I don’t want it. I would rather death than a happiness pill without meaning and reality. Mr. Bulldog’s argument is that everyone wants happiness and yet when I say I don’t want that sort of happiness he says I am mistaken. That is not how this works. If we are going to build a whole philosophy on innate desire for happiness you can’t complain that my other innate desires (for meaning, reality, purpose, etc) are wrong.
I will restate what I have said before. Most people do not pursue their own happiness. They pursue other things. They pursue what is beautiful. They pursue meaning. They pursue duty. You can argue that they get happiness from these things but I would argue that happiness just becomes a meaningless term at this point. Happiness is rarely pursued. We are almost always pursuing other things. Claiming that all these things just bring happiness speaks to the meaningless of the phrase ‘the pursuit of happiness’ is. It is now the pursuit of anything is the pursuit of happiness.
For the umpteenth time let me reiterate: I AM NOT JOHN STUART MILL. I do not agree with Mill's argument, nor have I made it. Thus, the question is not what we desire but what we should desire. There was no response to the multitude of reasons I gave for why we should take the pill. 5-year-olds don't want to get shots, but they should still get shots. This pill is the same, for all of the reasons I gave in my article.
A Fatal Flaw: We Cannot Define or Measure Happiness
In my opening statement I made the point clear: we cannot define what happiness is. We do things we think will make us happy (enjoying a day at the beach), it does make us happy (we take a poll and put our happiness at 10), and yet later we regret doing it because we missed doing something else (it makes us sad). Did sitting on the beach bring happiness?
We have no way of answering this. We have no clue. We don’t know what will make us happy. We do not even know what happiness is. C.S. Lewis wrote that heaven and hell reach backward changing our experiences that we thought were good or bad at the time and reversing their effect on our happiness.
Mr. Bulldog attempts to hand wave this objection by writing,
“There may be tricky edge cases in which it’s unclear which action creates more happiness, but we can know introspectively that, for example, being set on fire causes lots of suffering and being in love causes lots of happiness.”
I think it is funny that he talks about the “tricky edge” and then jumps to an extreme (being on fire or being in love). What about the countless things not on the edge. Like will getting this degree make me happy? Will having adultery make me happy? Will adopting a child make me happy? Will going to the beach make me happy? The answer to all those questions is that we have no idea. And we might never be able to truly answer them even after we do them. We might do something like adopt a child and it might make us happy for a while but then the kid grows up and hates us and we then are sad then in our old age we might remember the best memories and forget the painful ones and it makes us happy again. Was the adoption a net positive in happiness? Was it good or bad? Did it make us happy?
I agree that it's sometimes hard to make decisions. But what's the alternative? How would Ungit propose we decide whether to adopt a child, get a degree, or get married? The fact that happiness is hard to measure doesn't mean we have no clue. There is a fact of the matter about what will bring the most happiness, so any measurement error on our part is irrelevant to the theory's truth. A moral system shouldn't hand-wave away difficult decisions.
Happiness is not a measurable thing. We don’t know anything about it. Even in his extreme case of fire versus love, it is possible that the fire taught us how to endure pain and the love made us weak and that later in life we view the fire as the thing that brought the most possible happiness to our lives because it made us better people and that the love turned out to be a bad thing that wasted time and ended poorly. Even that extreme example is not as simple as Mr. Bulldog claims.
This is possible, but getting set on fire minimizes expected happiness. The fact that there are some cases in which we're wrong about what would bring happiness doesn't mean we can't apply utilitarianism. Lots of people are applying utilitarianism in the real world: that's the whole point of effective altruism. And Dutch book arguments show we have to do expected utility calculations, as the sketch below illustrates.
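Here is a minimal sketch of the kind of expected-happiness comparison at issue, with entirely made-up numbers for illustration; nothing in it comes from the debate itself:

```python
# A minimal sketch of an expected-happiness comparison. The prospects and
# numbers below are hypothetical, invented purely for illustration.

def expected_happiness(outcomes):
    """Probability-weighted sum over (probability, happiness) pairs."""
    return sum(p * h for p, h in outcomes)

# Even if an action *might* turn out well, what matters for choice
# under uncertainty is the expectation.
set_on_fire  = [(0.99, -1000), (0.01, 50)]   # tiny chance it "builds character"
fall_in_love = [(0.80, 300), (0.20, -100)]

print(expected_happiness(set_on_fire))   # -989.5
print(expected_happiness(fall_in_love))  # 220.0
```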
In my opening statement I gave a few examples to display why Utilitarianism doesn’t work in practice. I noted that Utilitarianism requires an endless number of very subjective calculations based on what would bring the most happiness. As predicted, he objected to the decisions made in those scenarios (I intentionally wrote them so that the calculation caused the person to do something traditionally thought to be immoral). He then wrote,
“The fact that people apply a moral theory incorrectly is not an argument against it.”
But this is absolutely wrong. When you are talking about how effective a moral framework is in practice, whether or not it can be reliably followed is absolutely an argument for or against it. Utilitarianism is not a practical basis of morality. There are no precepts. Every single moral decision has to be made real time by the actor and it is his judgment not yours that matters. This is a giant problem in that the people making those decisions are the very people wanting to do those things. There is a giant moral hazard
For one, as I pointed out (a point that was not responded to at all), people have misapplied Christianity too, so this objection would take out Ungit's moral system as well. Additionally, utilitarians tend to behave better, as I showed in my opening statement (another point that went unaddressed). Finally, this would just show that we shouldn't try to be utilitarians in the real world, not that utilitarianism is false. That isn't contradictory: it could be the case that religion is false but that it's good to be religious in the real world. Ungit also didn't address my calculations showing that these actions wouldn't be optimific, or my argument that constantly calculating doesn't make things go best (it's only in armchair counterexamples that rigorous calculation makes sense).
Where does good come from? If you answer, "God defines what good is," I might answer, “but what if God said something was good that was not good? If God said killing innocents was good would that make it good? It would seem that the concept of good cannot come from the concept of God.”
This is what philosophers call the Euthyphro dilemma. It was originally found in Plato's dialogue Euthyphro, in which Socrates asks Euthyphro, "Is the pious loved by the gods because it is pious, or is it pious because it is loved by the gods?" Although this dilemma is old, it is still often quoted by atheists and is quoted again here by Mr. Bulldog.
If Euthyphro's Dilemma holds, morals cannot come from God but must find its source elsewhere.
But I think this philosophical dilemma is solved very simply: goodness equals God-ishness. In other words, if instead of saying that morals and God are two separate things, let us consider that morals are simply us observing God himself. They are attributes of him. What we call good is actually us observing various aspects of God in the world around us.
Since no man sees God face to face, we tend to see his attributes more often. We know what those are and we call them good. Then you have atheists who deny that God exists but cannot deny the attributes. And, in their mind, they start to separate good from God conceptually. But they should not be separate.
To understand my point, imagine a sculpting class where the goal of the class is to make clay dogs. But the class if full of people from a remote island who have never seen a dog. Instead of saying, "we are going to make clay dogs (which they would not understand)" the teacher says, "make four legs, floppy ears, a long snout and etc." And at the end of the class he grades each work based on its doggishness. Later, the class gets their grades and they come to recognize that "good" equals something that (to us) looks like a dog (even though they do not know what a dog is or the word dog). After a while, the teacher brings in a dog and shows them. They all say "that is very good!" For them, good equals dogish. And the actual dog, is the perfect representation of dogishness.
A class member stands up and says "aha! I now realize that everything we considered good is based on attributes of this dog. Good equals dogish."
We humans do not see God. But we have come to accept his attributes (God-ishness) as things we call good. Everything we consider good is based on attributes of God. Kindness. Love. Peace. Healing. Laughter. Generosity. Etc. All are part of who God is. When we look at God and in particular at his incarnation (Christ) we see God-ishness and what we call goodness converge. We see an answer to Euthyphro's dilemma. It is an answer to the question "where does good come from?" in God incarnate. We see that although we have been molding our clay gods trying to make good in the world around us, the perfect representation of good is found in God incarnate.
This is not an adequate response to the Euthyphro dilemma. In this case, if the dog were different, dogishness would be different. Yet if God's character were such that he favored torturing infants for fun, torturing infants would still seem bad. Thus, goodness can't be grounded in God in this way. Now, you might say that if God were different our conception of good would be different, but we can stipulate that no facts about the world are changed, only God's preferences. In that case, would it be objectively moral to torture infants for fun? Probably not.
Additionally, I gave 9 other objections to divine command theory which weren't addressed at all. So even if Euthyphro fails, THERE ARE 9 OTHER OBJECTIONS ON THE TABLE.
Final Point of Clarification: That I Focused on Hedonistic Utilitarianism but That He Holds to Another Form of Utilitarianism
Mr. Bulldog states that I focused on hedonistic utilitarianism but he is quick to note that there are other important goals besides hedonism that can bring happiness (he lists desire fulfillment and objective list fulfillment in his opening paragraph). He later notes that good actions are not what is hedonistic but actions that “help others more.” That true happiness is brought by “The things that bring the most happiness are the deep meaningful things like significant relationships, knowledge, and contentment.”
I don’t think any of that changes any of my arguments. There is hedonism in pursuing goals. There is hedonism in pursuing “meaningful relationships” and knowledge. If you are doing it only for your own happiness, all of my arguments would apply. Distinguishing between pleasure and happiness was never my intent nor was it important to anything I wrote.
Mr. Ungit would do well to google objective list theory. The claim is that there are things other than hedonic value that make people well off. One can be an objective list theorist and a utilitarian—utilitarians do not have to be hedonists. Again, utilitarianism doesn't have to maximize happiness specifically; it just requires saying we should maximize the types of things that make people well off, whatever they may be. An objective list theorist would say those things are not just happiness but also things like virtue, meaning, and connection.
Additionally, I gave a series of objections to both objective list theory (even though it's compatible with utilitarianism, I still thought it was worth arguing against because it seemed to be the view being defended against hedonistic utilitarianism) and virtue ethics. They weren't addressed. I'll just quote my earlier article, given how little the arguments were rebutted.
Objective list theory has lots of problems.
First, it's difficult to give an adequate account of what makes something part of the objectively good list. Hedonism is monist, saying that there's only one type of thing that is good. Objective list theories problematically say that there is a series of unconnected things that are good. This is less parsimonious and fails to provide an adequate account of how things are good. It seems a priori more plausible that there would be some good experiences than that there would be an unrelated bundle of good things that aren't tied to experience.
Second, objective list theories can't account for why things are only good for sentient beings. It seems conceivable that, on objective list theories, non-sentient beings could fulfill things on their objective list. Objective list theories just say that things are good in virtue of being part of an objective list; however, there's no necessary connection between a being's capacity for happiness or suffering and things being able to be part of its objective list.
Third, objective list theories can't account for why all the things on the objective list are generally conducive to happiness. Virtue, friendship, love, and decency are all generally conducive to happiness. Hedonism predicts this correlation; objective list theory must treat it as a coincidence.
Fourth, objective list theories are elitist, holding that things can be good for people even if they neither want them nor derive any positive experience from them. It’s counterintuitive that an unenjoyable experience that one doesn’t want can be good for them.
Fifth, all of the things on the objective list only seem good if they’re generally conducive to happiness. We might hold that knowledge is good, but it would be strange to suppose that arbitrary facts that benefit no one are good. The world would not be a better place if we all had the information about whether the number of particles in the universe were even or odd. Friendship might be good, but only if the friends are made mutually better off.
Sixth, it runs into all of the issues that I highlighted supporting hedonism, especially the problem of lopsided lives.
Virtue ethics was also highlighted; however, it has many problems of its own.
1 It just plugs in our intuitions, but the track record of moral failures shows our moral intuitions are unreliable.
2 It leaves us with no method for deciding upon what the virtues are.
3 It gives us no way to make decisions because there will always be virtues on both sides. How, for example, would we use virtue ethics to decide upon tax policy?
I lay out more objections here.
And for my objections to divine command theory, they can be found here:
1 God cannot solve the problem of morality: if morality would be subjective absent a god, then God could not make it objective. If morality is simply a description of preferences, then God cannot make objective morality any more than he could make objective beauty or objective tastiness.
2 Utilitarianism could be the best moral view even if morality were subjective (though I do think it's objective). Additionally, there's a robustly realist account of the goodness of pleasure. Much like mental states can have properties like brightness, they can have normative properties.
3 This runs into the Euthyphro dilemma: is it good because God decreed it, or did God decree it because it's good? If the former, then good is just whatever God decrees and there's no reason good is binding; if Satan were ultimate, he could decree that things were good. If, however, God decrees it because it's good, then good exists outside of God. Some try to avoid this problem by saying that God's nature is good, so goodness holds neither because of nor in spite of divine decree. However, this just raises the deeper question of whether it's good because it corresponds to his nature or whether it corresponds to his nature because it's good. Thus, it doesn't avoid the problem, because if God's nature were evil, then evil would be justified.
4 Either God has reasons for his commands or he doesn't. If he does, then those reasons would ground morality; if he doesn't, then morality is arbitrary and lacks reason-giving force.
5 There already has to be objective morality for there to be an objectively moral being. Thus, this is like arguing that we should believe Millard Fillmore was a good president because that accounts for goodness.
6 God is presumably not the objective standard for Glubglosh—which is gibberish. Yet if one thinks that morality wouldn't exist without God, then saying God is good is like saying God is the standard for Glubglosh. God needs objective morality to exist.
7 This seems to obviously misidentify what morality is. Morality has to have reason-giving force, yet it's not clear how theistic morality does. God's character being opposed to child murder misidentifies why child murder is bad: if God disappeared, the badness of child murder would not disappear. The theist has to say that the badness of brutally torturing children has nothing to do with the harm to children and everything to do with God's character being disapproving. This is not a plausible account of moral ontology.
8 If God could ground morality, then morality can just as well be grounded in what God would decree if he existed.
9 Morality has to either be true in all possible worlds or true in none. God can't affect things that are true in all possible worlds, any more than he can ground mathematics or logic.
10 In order for God's commands to give us morality, we have to already have a moral obligation to obey God's commands, which means God needs morality to exist. This argument came from none other than Mackie, the guy Craig quoted to prove atheists can't have objective morality. Mackie doesn't think theists can have it either.
Additionally, I defended hedonism at great length in my original article arguing for it. To quote the arguments that weren't addressed:
This has several supporting arguments.
1 When combined with the other arguments, we conclude that what a self-interested person would pursue for themselves should be maximized generally. However, it would be extremely strange to maximize other things like virtue or rights, and no one holds that view.
2 Any agent that can suffer matters. Imagine a sentient plant who feels immense agony as a result of their genetic formation, but who can't move or speak. They're harmed by their pain, despite having no rights violated and no virtues. Thus, being able to suffer is a sufficient condition for moral worth.
We can consider a parallel case of a robot that does not experience happiness or suffering. Even though this robot acts exactly like us, it would not matter absent the ability to feel happiness or suffering. These two intuitions combine to form the view that hedonic experience is a necessary and sufficient condition for mattering. This serves as strong evidence for utilitarianism—other theories can’t explain this necessary connection between hedonic value and mattering in the moral sense.
One could object that rights, virtue, or other non-hedonistic goods are an emergent property of happiness, such that one only gains them when one can experience happiness. However, this is deeply implausible, requiring strong emergence. As Chalmers explains, weakly emergent properties are reducible to interactions of lower-level properties: for example, chairs are reducible to atoms, given that we need nothing more to explain the properties of a chair than knowing the ways atoms function. Strongly emergent properties, by contrast, are not reducible to lower-level properties. Philosophers tend to think there is at most one strongly emergent thing in the universe, so if deontology requires strong emergence, that's an enormous cost.
3 As we'll see, theories other than hedonism are just disastrously bad at accounting for what makes someone well off; however, I'll only attack them if my opponent presents one, because there are too many to criticize.
4 Hedonism seems to unify the things that we care about for ourselves. If someone is taking an action to benefit themselves, we generally take them to be acting rationally if that action brings them joy. This is how we decide what to eat, how to spend our time, and who to be in a romantic relationship with—and it's the reason people spend their time doing things they enjoy rather than picking grass.
The rights that we care about are generally conducive to utility: we care about the right not to be punched by strangers, but not a right not to be talked to by strangers, because only the first is conducive to utility. We care about beauty only if it's experienced; a beautiful unobserved galaxy would not be desirable. Even respect for our wishes after our death is something we only care about if it increases utility. We don't think we should light a candle on the grave of a person who's been dead for 2,000 years, even if during life they desired that the candle on their grave be lit. Thus, it seems that for any X, we only care about X if it tends to produce happiness.
5 Consciousness seems to be all that matters. As Sidgwick pointed out, a universe devoid of sentience could not possess value. The notion that for something to be good it must be experienced is a deeply intuitive one. Consciousness seems to be the only mechanism by which we become acquainted with value.
6 Hedonism seems to be the simplest way of ruling out posthumous harm. Absent hedonism, a person can be harmed after they die, yet this violates our intuitions.
7 As Pummer argues, non-hedonism cannot account for lopsided lives.
If we accept that non-hedonic things can make one's life go well, then one's life could have a very high welfare despite any amount of misery. In fact, one could have an arbitrarily good life despite any arbitrary amount of misery. Thus, with enough non-hedonic goodness (e.g., knowledge, freedom, or virtue), one's life could be great for them, despite experiencing the total suffering of the holocaust every second. This is deeply implausible.
8 Even so much as defining happiness seems to require saying that it’s good. The thing that makes boredom suffering but tranquility happiness is that tranquility has a positive hedonic tone and is good, unlike boredom. Thus, positing that joy is good is needed to explain what joy even is. Additionally, we have direct introspective access to the badness of pain when we experience it.
9 Only happiness seems to possess desire independent relevance. A person who doesn’t care about their suffering on future Tuesdays is being irrational. However, this does not apply to rights—one isn’t irrational for not exercising their rights. If we’re irrational to not care about our happiness, then happiness has to objectively matter.
10 Sinhababu argues that reflecting on our mental states is a reliable way of forming knowledge as recognized by psychology and evolutionary biology—we evolved to be good at figuring out what we’re experiencing. However, when we reflect on happiness we conclude that it’s good, much like reflecting on a yellow wall makes us conclude that it’s bright.
11 Hedonism is very simple, holding that there's only one type of good thing, making it prima facie preferable.
Thus, Ungit has failed to grapple with the vast majority of points made. This is a common trend among those arguing against utilitarianism—they'll start with a mediocre objection like Omelas, organ harvesting, or the repugnant conclusion, and then ignore every single thing utilitarians have written in response to it.
Is this really the best that the critics of utilitarianism have to offer?
Utilitarianism Wins Outright Part 25
Another Argument for utilitarianism
This is the intuition that convinced me of utilitarianism—hopefully you will all find it persuasive too. I’ll lay it out in syllogistic form.
P1 The moral action is the one that would be taken by us if we were fully rational and impartial.
P2 If we lived the life of everyone and experienced everything, and maximized the value of our life, we would be fully rational and impartial.
P3 If we lived the life of everyone and experienced everything, and maximized the value of our life, the actions that would be taken by us would be the utilitarian actions.
Therefore, the moral action is the utilitarian action.
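Stated a bit more formally, on my own reconstruction (note that, for the conclusion to follow, premise 2 has to be read as a biconditional identifying the rational and impartial choices with the choices of the everyone's-life agent):

```latex
% My reconstruction. Moral(a): a is the moral action; RI(a): a would be
% chosen by a fully rational, impartial agent; E(a): a would be chosen by
% the agent who lives everyone's life; Util(a): a is the utilitarian action.
\begin{align*}
\text{P1: } & \mathrm{Moral}(a) \leftrightarrow \mathrm{RI}(a) \\
\text{P2: } & \mathrm{RI}(a) \leftrightarrow \mathrm{E}(a) \\
\text{P3: } & \mathrm{E}(a) \leftrightarrow \mathrm{Util}(a) \\
\therefore\ & \mathrm{Moral}(a) \leftrightarrow \mathrm{Util}(a)
  \quad \text{(chaining the biconditionals)}
\end{align*}
```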
I’ll defend each premise.
Premise 1 seems to be true by definition. If we think about what it means to say someone is being immoral, it seems to mean either
A They value things that aren’t actually valuable.
or
B They are not taking everyone’s interests into account.
I can’t think of a third thing they’d be saying.
P2 is pretty obvious. If we were a collective intellect that experienced the lives of everyone on earth, we'd obviously be impartial, because we'd have experienced everyone's life; and the rationality condition holds by stipulation.
P3 is true if we accept hedonism, which I’ve defended here, here, and here.
This just seems to really intuitively encapsulate ethics. When I think about how terrible the holocaust was, it seems like what makes it bad is that if you lived the lives of both the Nazis and the victims, you would obviously be opposed to it. If Jeffrey Dahmer had to bear the costs he imposed on others, rationality would have dictated that he not kill people. Animals matter because, if you were rational and experienced things from their perspective, you wouldn't want them to be harmed; that's why only sentient beings matter. Future people matter because, if you had the opportunity to live the lives of future people, you'd rather they exist and have good lives. This is why the counterexamples to utilitarianism are so uncompelling—they describe scenarios that sound bad until you realize that if you lived everyone's life and weighted their interests equally, you'd agree with utilitarianism.
We’d be against retributivism from this position because if you lived the life of the evil person, you wouldn’t want them to be harmed for no reason. We’d accept the repugnant conclusion for the same reason.
One might object that the scenario is incoherent, arguing that it would be impossible to experience everyone's life. However, it's not clear what about it is incoherent. Even if we accept that it couldn't be you living their lives, it could still be you experiencing what they experience: we could just stipulate a gradual, morally neutral process of continuity whereby you slowly morph into each person and then experience their life. This seems possible on every view of personal identity.
Huemer responded to something similar here
More importantly, this argument requires a confusion between necessary conditions and sufficient conditions for justice. The claim of the luck egalitarians, presumably, is that fairness or equality or something like that is a necessary condition for justice, not that it is sufficient. But then, the fact that the OP starts with a fair/equal situation doesn’t imply that it leads to a just outcome.
You could try claiming that fairness & initial equality is sufficient for justice. But then you’d have to refute all other norms of justice. For instance, you’d have to refute the idea of natural rights to property, the idea of justice requiring rewards proportional to merit, etc. You’d have to do this before relying on the OP. Rawls makes no attempt to do this (which of course cannot be done).
However, neither of these objections applies to my version. The claim is not that rationality and impartiality merely seem to be part of morality; it's that they're the only part of morality. What would you be saying when you call someone immoral other than that they're wrong about what matters or ignoring some people who matter? The notion that things could be bad in a way you wouldn't care about if you lived everyone's life is very unintuitive.
So here’s what convinced me of utilitarianism (I think, it’s a bit fuzzy given that it was when I was like 11). Hopefully it will convince you too.
Against Negative Utilitarianism
We shouldn't be negative utilitarians
Negative utilitarianism is quite an interesting view. It's quite close to being right—close to being utilitarianism. Yet it is not a fully adequate account of morality: it misses a full half of what matters.
The most basic version of negative utilitarianism, which says that only the avoidance of pain is morally relevant, is trivially false: it implies that preventing a pinprick matters more than bringing about a googolplex utils. However, this view is not widely believed and thus not particularly worth discussing.
A more popular form of negative utilitarianism takes the form of lexical threshold views, according to which certain forms of suffering are so terrible that they cannot be outweighed by any amount of happiness. This view is defended by people like Simon Knutsson, Brian Tomasik, and others. My main objection to this view is that it falls prey to the sequencing objection. Suppose we believe that the badness of a horrific torture cannot be outweighed by any amount of happiness, while the badness of a mild headache can be outweighed by some amounts of happiness. The lexical view then says that the badness of horrific torture can't be outweighed by any number of headaches (or similar harms; headaches are just the example I picked).
This view runs into a problem. There are certainly some types of extreme headaches whose badness is, at least in theory, comparable to brutal torture. Suppose that these horrific headaches involve 100,000 units of pain and that benign headaches involve 100 units. Presumably 5 headaches of 99,999 units each would be, in total, worse than 1 headache of 100,000 units. Likewise, 25 headaches of 99,998 units would be worse than 5 headaches of 99,999 units. We can keep decreasing the pain while multiplying the number of people affected, until 1 headache of 100,000 units comes out less bad than some vast number of headaches of 100 units. The lexical threshold negative utilitarian would have to say that there's some threshold of pain below which no amount of pain can outweigh any amount of pain above the threshold, regardless of how many people experience it. This is deeply implausible. If the threshold is set at 10,000 units of pain, then 10^100^100 people experiencing 9,999 units of pain each would be preferable to one person experiencing 10,001 units.
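A minimal sketch of the chain's arithmetic, using the made-up "units of pain" from above (the loop and variable names are my own illustration):

```python
# Walk the sequencing chain down from one maximally bad headache to an
# astronomical number of benign ones, one tiny step at a time.

count, severity = 1, 100_000      # one headache at 100,000 units of pain
while severity > 100:             # descend toward benign headaches
    count *= 5                    # five times as many sufferers...
    severity -= 1                 # ...each in very slightly less pain
    # Each step looks worse in total than the last: 5x the people for
    # a roughly 0.001% reduction in per-person pain.

print(severity)                   # 100: merely benign headaches
print(count == 5 ** 99_900)       # True: an astronomical number of them
```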
The negative utilitarian might object that there is no neat cutoff. However, this misunderstands the argument. If there is no neat cutoff point, then every step in the sequence goes through: each new point—marginally less pain per person, spread across far more people—is worse in total than the point before it, and the chain runs all the way down to mild headaches.
The negative utilitarian might say that pain can't be neatly delineated into precise units. However, precise units are only used to represent pain. It's very intuitive that pain that is very bad can be made gradually less bad until it's reduced to being only a little bit bad. This process requires the negative utilitarian to declare that at some point along the continuum, they've passed a threshold whereby no amount of the things below the threshold can ever outweigh the things above the threshold. Being scalded in boiling water can be made gradually less unpleasant by lowering the temperature of the water until it's reduced to merely a slight inconvenience.
Simon Knutsson responds to this basic objection saying "Third, perhaps Ord overlooks versions of Lexical Threshold NU, according to which the value of happiness grows less and less as the amount of happiness increases. For example, the value of happiness could have a ceiling, say 1 million value “units,” such that there is some suffering that the happiness could never counterbalance, e.g., when the disvalue of the suffering is 2 million disvalue units." However, the way I've laid out the argument proves that even the most extreme forms of torture are only as bad as large amounts of headaches. If this is the case, then it seems strange and ad hoc to say that no amount of happiness, merely because its value is capped at 1 million units, can outweigh the badness of enough headaches. Additionally, a similar argument can be run on the positive end. Surely googol units of happiness for one person and 999,999 units for another is better than 1,000,000 units for two people.
The main argument given for negative utilitarianism is the intuition that extreme suffering is very bad. When one considers what it's like to starve to death, it's hard to imagine how any amount of happiness can outweigh it. However, we shouldn't place very much stock in this argument for a few reasons.
First, it's perfectly compatible with positive utilitarianism (only in the sense of being non-negative, not in the sense of saying that only happiness matters) to say that suffering is in general far more extreme than happiness. Given the way the world works right now, there is no way to experience as much happiness as one experiences suffering when they get horrifically tortured. However, this does not imply that extreme suffering can never be counterbalanced--merely that it's very difficult to counterbalance it. No single thing other than light travels at the speed of light, but that does not mean that light speed is lexically separate from slower speeds, such that no number of slower speeds can ever add up to greater than light speed. Additionally, transhumanism opens the possibility for extreme amounts of happiness, as great as the suffering from brutal torture.
Second, it's very hard to have an intuitive grasp of very big things. The human brain can't multiply very well. Thus, when one has an experience of immense misery, they might conclude that its badness can't be counterbalanced by anything, when in reality they're just perceiving that it's very bad. Much like how people confuse things being astronomically improbable with impossible, people may have inaccurate mental maps, and perceive extremely bad things as bad in ways that can't be counterbalanced.
Third, it would be very surprising a priori for suffering to be categorically more relevant than well-being. One can paint a picture of enjoyable experiences being good and unenjoyable experiences being bad. It's hard to imagine why unenjoyable experiences would have a privileged status, being unweighable against positive experiences.
How Mr. Ungit Lost The Thread
Why utilitarianism comes away from the encounter rather unscathed
Mr. Ungit seems like a rather good Christian. However, as part of the Christian tribe—someone whose every idea is filtered through the interpretation of a text written by cranks before the year 100 CE—he sometimes departs rather dramatically from reason—and, it would seem, from his rocker.
The modern Christian tradition is rather robust in many ways. They do manage to make a valiant and impressive effort, generating some interesting arguments for god—defending god is a difficult task that requires all the intellectual firepower they can muster. However, as a result of their reference frame for morality being such a deranged, archaic, cauldron’s brew of confusion and malevolence—they tend to perform rather poorly when it comes to thinking deeply about morality. This was on full display recently.
The final statement in my debate with Mr. Ungit was released. This final statement failed to track many of the points that had been made earlier. But alas, such is to be expected from the critics of utilitarianism. For if they truly understood the arguments, it’s far from obvious that they’d remain critics of utilitarianism. At the very least, they would cease being the type of critics whose primary objection to utilitarianism relates to the inability to define or measure happiness. That is an objection that’s not taken particularly seriously in academia.
So, let us go through Ungit’s rather confused essay and respond to it. Note, my concluding remark here is not a part of the debate and Mr. Ungit is under no obligation to respond to the critiques presented. I would be rather surprised if he does.
Mr. Bulldog seemed upset that I didn’t address every one of his premises in his various arguments writing, “Many of my points were not addressed, those that were were not addressed adequately.” But the reason I didn’t is because that is not how logic works. When you build a logical argument, your premises build upon each other. If your first or second premise is wrong, I don’t need to then respond to your third or fourth premise. It is like trying to build on a foundation made of sand. You can’t do it. Your building tips over.
This response sounds rather devastating. What kind of Benthamite canine would claim that a person had to address all of his premises—that’s not how logic works!! If this were true, well—this Bentham’s bulldog character would sound like quite the moron.
Well, it would sound rather devastating if one had not read the rest of the exchange. I at no point claimed that every premise had to be addressed. I claimed every argument had to be addressed. If there is a successful argument, the conclusion of which is utilitarianism’s correctness—well, that means a debate has been lost. Consider the litany of arguments presented which were not addressed in Ungit’s final posts—every response he gave to them was responded to by me.
Harsanyi’s argument.
My argument proceeding from the premises
The historical argument
The argument from theoretical virtues.
An abductive argument based on lots of cases about which only utilitarianism can successfully reason.
I presented a series of arguments—Ungit ignored them. So no, one needn’t object to every single premise. But they must object to every argument. It is Ungit who leaves Chekhov’s proverbial gun in act 1—never to fire it in acts 3 or 4.
So what I would like to do with this conclusion is not argue every fine point but just note again the fatal flaws with Mr. Bulldog’s argument. These are foundational flaws. He cannot ask me to look at other arguments until he can answer these questions and…
he cannot answer these questions.
One rather amusing preliminary point is that they are not phrased as questions. But I shall answer the objections presented. I already have, yet I shall be more thorough, now unconstrained by dastardly word counts. These also don’t grapple with any of the premises of the arguments, so even if true, they wouldn’t matter.
To the extent that one fails to grapple with any argument for utilitarianism, they have—as the title of my post suggests, lost the thread.
Utilitarianism Foundational Error #1 - The inability to define happiness
In an earlier post I pointed out that happiness is impossible to define and impossible to measure.
Keen readers will note that this is only an objection to hedonistic utilitarianism—if one is, for example, a preference utilitarian (not that you should be, of course!!), then happiness-based considerations are wholly irrelevant. But Ungit’s repeated conflations of utilitarianism and hedonistic utilitarianism aren’t much to worry about—for I am a hedonistic utilitarian (as I’m sure was rather obvious both from my extensive writings on the subject and from my substack domain name).
Let’s take these concerns one by one. First, the claim that it’s impossible to define happiness is false—I specifically defined happiness as desirable mental states. We all recognize that there’s a workable definition of happiness. When you ask someone if something made them happy, presumably they understand what you mean.
We measure happiness through introspection. We imagine whether, if we were going to live the lives of every single person who will ever live, we’d rather take action one or action two. To the extent that we imagine being a person who will experience everything that is ever experienced, interpersonal comparisons of utility become no more baffling than personal comparisons of utility. When deciding whether to push 500 people into poverty or kill one person (assuming one has to be chosen), you imagine whether you’d rather live everyone’s life if the 500 were in poverty or if the one was dead.
There are lots of difficult decisions when it comes to such matters. However, this is to be expected. The correct moral theory should make moral questions difficult. When a person decides upon a college, they in large part decide which one will make their life go better. Interpersonal comparisons of utility, while difficult, are conceptually necessary.
To illustrate why they’re necessary, suppose we are deciding a difficult real world question. Consider the one above—500 in poverty or 1 death. If we had to pick one, we would have to have an answer to which one is worse. One can’t be undecided in the real world.
If a person thinks that they’re literally equal—then they could just flip a coin. However, if they hold a different view—say, that the outcomes are impossible to compare even in principle, so we shouldn’t try—then, as von Neumann and Morgenstern showed, that will not do.
If one thinks that you can’t evaluate the comparative desirability of
A) 500 people in poverty
and
B) 1 person dying
then suppose they compare B to
C) 520 people in poverty.
If they think that neither A and B nor B and C can be compared, such that they’re indifferent between them in both cases—well, that leads to a problem. If B=A and B=C then C=A. However, C is clearly worse than A—it has 20 extra people in poverty.
Thus, we have to make interpersonal comparisons of utility. Because in the real world, we often have to make decisions between different things. We can’t throw up our hands and not decide unless we want to be indifferent between extra bad things and no extra bad things.
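Here is a toy formalization of that problem. The Outcome encoding and the stipulated incomparability rule are invented purely for illustration:

```python
from dataclasses import dataclass

# If refusing to compare poverty with death is treated like indifference,
# then A ~ B and B ~ C chain to A ~ C, even though C is strictly worse
# than A. (Outcomes A, B, C are the ones from the text above.)

@dataclass(frozen=True)
class Outcome:
    in_poverty: int
    deaths: int

A = Outcome(in_poverty=500, deaths=0)
B = Outcome(in_poverty=0, deaths=1)
C = Outcome(in_poverty=520, deaths=0)

def stipulated_incomparable(x: Outcome, y: Outcome) -> bool:
    # The view under attack: poverty-only outcomes can't be weighed
    # against death-involving ones, "even in principle".
    return (x.deaths == 0) != (y.deaths == 0)

assert stipulated_incomparable(A, B) and stipulated_incomparable(B, C)
# Treating both as indifference licenses A ~ C. But C dominates A in
# badness: same number of deaths, 20 more people in poverty.
assert C.deaths == A.deaths and C.in_poverty == A.in_poverty + 20
```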
Simply asking people how happy they are is deceptive (sometimes things that make us happy in the moment end up having the opposite effect in the longer term and vice versa). And every marketing executive realizes that often people have no idea what will make them happy.
To quote Plantinga, “that’s not in dispute at all.” People can be wrong about what will make them happy. But that doesn’t mean nothing will make them happy. There are some judgements we can be rather confident in. For example, I’m quite confident that if a person set me on fire it would make me less happy overall. Certainly in expectation. While there are judgement calls that must be made about tough cases, the same is true of every single system. How would Mr. Ungit propose we decide upon colleges, for example, or what food to eat? How would we decide how to spend our time?
The point that I raised repeatedly about the VNM axioms also applies clearly here. In order to avoid Dutch Book arguments, we must have a coherent utility function. If a person does not have a precise utility function that is able to compare different states of the world, they can have their money taken. If you say, for example, that two colleges are literally incomparable, but also that college 1, if made slightly better, would still be incomparable to college 2, then college 1’s choice-worthiness = college 2’s choice-worthiness = (college 1 + improvements)’s choice-worthiness, which means that you’re indifferent between college 1 and college 1 plus improvements.
If we’re indifferent between X and Y and between Y and Z but X is better than Z, then we’d rationally take a slight benefit to move from X to Y, and then another slight benefit to move from Y to Z, even though the differences between X and Z are greater than the two slight benefits accrued. We can keep doing this infinitely to bring about an infinitely undesirable state of affairs.
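A minimal money-pump sketch, with hypothetical dollar values standing in for the slight benefits:

```python
# The agent treats X ~ Y and Y ~ Z as indifference, so a $1 sweetener
# rationalizes each swap; yet X is in fact worth $10 more than Z. One
# pass through the chain leaves the agent $8 worse off, and iterating
# the trick compounds the loss without bound.

true_value = {"X": 100.0, "Y": 100.0, "Z": 90.0}  # the gap X > Z is
                                                  # invisible to the agent
holding, cash = "X", 0.0
for target in ("Y", "Z"):
    cash += 1.0      # accepts a small benefit to swap "incomparable" options
    holding = target

net = true_value[holding] + cash
print(net)  # 92.0, down from 100.0: the sweeteners didn't cover the loss
```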
Thus, this objection to utilitarianism just boils down to complaining that difficult decisions are difficult; it’s not much of an objection.
And yet Ungit ignores a second damning objection, namely that this is entirely irrelevant. Even if utilitarianism can’t apply adequately in the real world, that wouldn’t negate its truth. Quantum physics is accurate even when describing higher level phenomena—this is true even though it’s not particularly useful for modeling them. Utilitarianism is a theory about what things matter. Thus, even in a world where modal realism were true and none of our decisions mattered, utilitarianism would still be true.
None of the points raised were objections to utilitarianism as a theory. They were merely expressing random ancillary confusions—yet not seriously grappling with the arguments presented.
TLDR in terms of my objections
Making such interpersonal and personal comparisons of utility is inevitable, and we do it all the time. The objection basically amounts to complaining that hard decisions are hard.
Dutch book arguments, money pump arguments, and the like prove that we must make such decisions; otherwise, an infinitely bad state of affairs can be brought about through a series of trades, each of which is preferable by our current values.
Reasonable social utility functions must satisfy the VNM axioms—making the type of aggregation that was objected to inevitable.
We can employ introspection to come to such verdicts and employ similar reasoning.
The fact that people apply a moral theory incorrectly is not an argument against it. Lots of Christians have used Christianity to justify slavery, subjugation of women, and other atrocities. If we look at historical Christians, their track record was waaaaaaaaay worse than that of the utilitarians—Christians have used belief in god to justify all sorts of horrors. How, for example, should we parse out intractable biblical disputes?
Theories that don’t rely on utility can’t make decisions that we need to make like where to go to college, what job to get, and where to live. The bible gives no guidance there.
Even if we couldn’t apply utilitarianism that wouldn’t make it untrue.
Continuing
And shockingly, Mr. Bulldog agrees with me here. He writes, “I agree that it’s sometimes hard to make decisions [based on happiness calculations]. But what’s the alternative?” But later he writes, “The fact that there are some cases in which we’re wrong about what would bring happiness doesn’t mean we can’t apply utilitarianism.”
Yes it does. It does mean we cannot apply utilitarianism.
It obviously does not mean that. What a bizarre confusion. That X can’t be easily applied in every situation does not mean X can never be applied. Ungit did not quote the article in full. Here it is in full. Multiplication with sufficiently large numbers would become impossible—the universe would lack the computing power. However, that doesn’t mean that multiplication wouldn’t apply to those numbers—just that we couldn’t calculate its results. It certainly would not mean that we can’t apply multiplication in any situation.
Ungit explicitly defends the notion that if it’s hard to apply X to all situations then X is nonsense. By this standard, rationality is nonsense, being hard to apply.
I age that it’s sometimes hard to make decisions. But what’s the alternative? How would Ungit propose we decide whether to adopt a child, get a degree, or get married? The fact that happiness is hard to measure doesn’t mean we have no clue. There is a fact of the matter about what will bring the most happiness, so any measurement error on our part is irrelevant. A moral system shouldn’t hand waive away difficult decisions.
(age should be agree). I didn’t adequately edit my earlier posts.
None of the arguments contained here were addressed about the difficulty of making such decisions. Instead, Ungit just repeats his little mantra about the impossibility of comparing utility—while ignoring every objection given. He repeats it time and time again like a prayer to ward off those vicious arguments for utilitarianism.
And he, without proper explanation, minimizes the problem with the word “sometimes.” No. Not sometimes. All the time. All the time we do not have a good definition for happiness.
Ungit would do well to read the definition given—namely, desirable mental states. Repeating that there is no definition of a word after a definition was provided explicitly is a rather strange sleight of hand.
If we have no definition of a word then the word is meaningless. And yet happiness is clearly not meaningless. When a person says that watching movies makes them happy, it’s very clear what they’re saying. What about this does Ungit find confusing? It’s not clear.
The second claim is also false. We do not always have trouble figuring out what would maximize happiness. For example, when deciding whether or not to stab oneself in the eye with scissors for no reason—it’s very clear that doing so would not maximize happiness. But the fact that the correct moral system doesn’t give us easy answers to what we should do in every single situation is a feature, not a bug.
This can be analogized to rationality. It’s sometimes unclear what’s rational, how to be rational, and rationality can be difficult to apply. But that doesn’t mean that rationality isn’t good. It certainly doesn’t mean that we can’t be or try to be rational in any situation.
All the time we do not know if the decisions we make will make us happy. All the time. Happiness is not a scientific idea. It is not something that can be measured.
Oh really?
It is extremely subjective from person to person and even in a single person’s life a choice may make us both happy and unhappy depending on when you ask the question.
The fact that at T1 something makes us happy but at T2 makes us unhappy has no significant implications that go against utilitarianism. An object can weigh 50 lbs at T1 and 100 at T2. Its average weight (assuming identical intervals) would be 75 lbs.
Our memory of our mental states is not infallible. Thus, it’s quite unsurprising that something would be described as making us happy at one point but not at a different point. This is no more surprising than the fact that we might describe an object as big at one point but small at another point, based on our memory. This does not, however, mean objects have no objective (pun intended) size.
The reports one gets about whether something made them happy are very similar to the reports one would give about whether something is bright. They’re rather imprecise and could certainly be misremembered later. Yet that means neither that brightness nor that happiness is subjective or unreal.
This is a giant problem not a small one for utilitarianism. It is a system of thought that is built on an undefinable concept. This is a fatal problem with it.
This is false for the reasons described above. However, even if it were true, it wouldn’t make utilitarianism false. Knowledge is notoriously tricky to define, as is reality. However, that doesn’t mean we should distrust an ideology built on knowledge.
And he asks, what the alternative is. With utilitarianism there is no alternative. But with religion, there is. You follow divine rules with religion. Be faithful to your wife even if she makes you unhappy. Love your enemy even if that makes you unhappy. Show up to church even if it makes you unhappy. Do not lie about your neighbor even if it makes you unhappy. Do these things not out of happiness calculations (which are impossible anyway) but because they are duties. Then trust that somehow your life (and afterlife) will be better off as a result. This is an ethical system built not on impossible happiness calculations but on ancient wisdom.
While I thank Mr. Ungit for explaining how his ethics solves a random mishmash of things that I didn’t ask about, I would have appreciated it if he had responded to how to make decisions about the particular things I did, in fact, ask about. I asked “How would Ungit propose we decide whether to adopt a child, get a degree, or get married?”
None of these were answered. How do we even decide what to eat, if it is not at all out of concern for utility? Why choose tastier food over less tasty food? If we, like Ungit, reject any concern for utility at all, thinking the concept is incoherent, why try to make people happy? Why treat depression?
I’ll summarize this with a limerick.
“You can’t define happiness,” is Ungit’s reply
He misses the points said by I
by which I mean me
“What is bad?” says he
“About a pain in my eye”
In reply I would say
Suffering is obviously bad
It’s a thing that shouldn’t be had
While hard to define
The structure’s affine
Despite all of Ungit’s pizazz.
Utilitarianism Foundational Error #2 - Repeated confusion in thinking IS can get us to OUGHT
This reminds me of the meme—are those utilitarians in the room with us now? Where have I equated those things? We can’t derive an ought from an is. That is, unless we’re moral naturalists, in which case facts about what ought to be are just specific facts about what is. However, that is not the argument I’ve given. I provided a series of philosophical arguments for why well-being is desirable.
In a previous post I noted that you cannot get an ought from an is. This is a logical fact. Utilitarianism says:
‘Because most people prefer happiness to pain and suffering that it is logical to have a system that promotes happiness and minimizes suffering.’
This is not a statement that’s been made by me. The fact that something is desired is some evidence that it’s desirable but it doesn’t necessitate that it’s desirable. Generally if people want P that makes it more probable that P is desirable than if they don’t, given that P is desirable if and only if we’d care about P if we were fully rational and impartial.
But consider alternative statements using this same logic.
‘Because most people are suspicious towards those that look different from them it is logical to have a system that promotes racism and minimizes race mixing.’
‘Because most people are greedy, it is logical to have a system that promotes greed and minimizes altruism.’
‘Because most people are horny, it is logical to have a system that maximizes promiscuous sex.’
The first 2 would be absurd. The fact that people have X doesn’t mean X is good. However, the fact that people desire X serves as some evidence that X is desirable. If people were not horny, the value of sex would be lower.
We are all born with many different impulses and inclinations. Empathy and the desire to be happy are some that we consider good but they are not the only ones. We are born with a desire to hate, to hurt, and to destroy as well. The fact that an impulse “is” tells us nothing whether something “ought” to be. This is the IS/OUGHT problem.
It doesn’t tell us nothing about it. For example, if it were the case that every single ethicist were a utilitarian, that would be evidence for the claim that we ought to be utilitarians. Additionally, I gave dozens of arguments for hedonism, ones that were not addressed. Here they are.
1 When combined with the other arguments, we conclude that what a self-interested person would pursue for themselves should be maximized generally. However, it would be extremely strange for a self-interested person to maximize other things, like their own virtue or rights, and no one holds that view.
2 Any agent that can suffer matters. Imagine a sentient plant, who feels immense agony as a result of their genetic formation, who can’t move nor speak. They’re harmed by their pain, despite not having their rights violated or their virtue diminished. Thus, being able to suffer is a sufficient condition for moral worth.
We can consider a parallel case of a robot that does not experience happiness or suffering. Even though this robot acts exactly like us, it would not matter absent the ability to feel happiness or suffering. These two intuitions combine to form the view that hedonic experience is a necessary and sufficient condition for mattering. This serves as strong evidence for utilitarianism—other theories can’t explain this necessary connection between hedonic value and mattering in the moral sense.
One could object that rights, virtue, or other non-hedonistic goods are an emergent property of happiness, such that one only gains them when they can experience happiness. However, this is deeply implausible, requiring strong emergence. As Chalmers explains, weakly emergent properties are reducible to the interactions of lower-level properties. For example, chairs are reducible to atoms, given that we need nothing more to explain the properties of a chair than knowing the ways that atoms function. However, strongly emergent properties are not reducible to lower-level properties. Philosophers tend to think there is at most one strongly emergent thing in the universe, so if deontology requires strong emergence, that’s an enormous cost.
3 As we’ll see, theories other than hedonism are just disastrously bad at accounting for what makes someone well off. However, I’ll only attack them if my opponent presents one, because there are too many to criticize them all.
4 Hedonism seems to unify the things that we care about for ourselves. If someone is taking an action to benefit themselves, we generally take them to be acting rationally if that action brings them joy. This is how we decide what to eat, how to spend our time, and who to be in a romantic relationship with—and it is the reason people spend their time doing things they enjoy rather than picking grass.
The rights that we care about are generally conducive to utility: we care about the right not to be punched by strangers, but not the right not to be talked to by strangers, because only the first right is conducive to utility. We care about beauty only if it's experienced—a beautiful unobserved galaxy would not be desirable. Even respect for our wishes after our death is something we only care about if it increases utility. We don’t think that we should light a candle on the grave of a person who’s been dead for 2000 years, even if they had a desire during life for the candle on their grave to be lit. Thus, it seems like for any X we only care about X if it tends to produce happiness.
5 Consciousness seems to be all that matters. As Sidgwick pointed out, a universe devoid of sentience could not possess value. The notion that for something to be good it must be experienced is a deeply intuitive one. Consciousness seems to be the only mechanism by which we become acquainted with value.
6 Hedonism seems to be the simplest way of ruling out posthumous harm. Absent hedonism, a person can be harmed after they die, yet this violates our intuitions.
7 As Pummer argues, non hedonism cannot account for lopsided lives.
If we accept that non-hedonic things can make one’s life go well, then their life could have a very high welfare despite any amount of misery. In fact, they could have an arbitrarily good life despite any arbitrary amount of misery. Thus, if they had enough non-hedonic goodness (e.g., knowledge, freedom, or virtue), their life could be great for them, despite experiencing the total suffering of the holocaust every second. This is deeply implausible.
8 Even so much as defining happiness seems to require saying that it’s good. The thing that makes boredom suffering but tranquility happiness is that tranquility has a positive hedonic tone and is good, unlike boredom. Thus, positing that joy is good is needed to explain what joy even is. Additionally, we have direct introspective access to the badness of pain when we experience it.
9 Only happiness seems to possess desire independent relevance. A person who doesn’t care about their suffering on future Tuesdays is being irrational. However, this does not apply to rights—one isn’t irrational for not exercising their rights. If we’re irrational to not care about our happiness, then happiness has to objectively matter.
10 Sinhababu argues that reflecting on our mental states is a reliable way of forming knowledge as recognized by psychology and evolutionary biology—we evolved to be good at figuring out what we’re experiencing. However, when we reflect on happiness we conclude that it’s good, much like reflecting on a yellow wall makes us conclude that it’s bright.
11 Hedonism is very simple, holding that there’s only one type of good thing, making it prima facie preferable.
Next, Ungit says
And Mr. Bulldog attempted to hand wave this giant problem away by stating this confusing assertion: “While you can’t get an ought from an is, if it is the case that lots of people think x is wrong, that is evidence that x is wrong.” Why would that be evidence of that? Most people in history didn’t think slavery was wrong. Is that evidence that it is not wrong?
The fact that people think X is good for them is certainly some evidence that it’s good for them. People are unlikely to be totally misled about what makes them well off. The answer to Ungit’s question would be yes, it would be some evidence. However, it would be very, very, very weak evidence, compared to the overwhelming evidence for the wrongness of slavery.
Being a good Bayesian, let’s plug it into the formula: P(people think X | X) > P(people think X | ~X).
The existence of Zeus is more probable than the existence of Shmeus—an isomorph of Zeus but merely with a different name. This is because people have believed in Zeus but have not believed in Shmeus.
If a simple majority is all that is needed to determine morality, many horrible human rights abuses in history would have remained in place forever.
This is a confusion.
People thinking X is good for them is good evidence that X is, in fact, good for them. It’s not as good of evidence that X is good all things considered.
People thinking X is good does not necessarily mean that X is good. Rather, it serves as some evidence that X is good. This is because people are more likely to be right than random chance. The odds that a random person is right about morality are higher than the odds that a random metaphysically possible moral view would be correct.
Ungit makes a rather elementary mistake—one that he keeps making. Evidence is not the same as decisive proof. B is evidence for A if the probability of B given A is higher than the probability of B given not-A. Mr. Ungit would do well to look into Bayes’ theorem.
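Here is that definition run with made-up numbers, to show that evidence can be real yet weak:

```python
# If P(B|A) > P(B|~A), then observing B raises the probability of A,
# however slightly. Evidence, not decisive proof. All numbers below are
# invented for illustration.

p_a = 0.5                # prior probability of A
p_b_given_a = 0.9        # chance of observing B if A is true
p_b_given_not_a = 0.6    # still possible, just less likely, if A is false

p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)  # total probability
posterior = p_b_given_a * p_a / p_b                    # Bayes' theorem
print(round(posterior, 3))  # 0.6: B nudged P(A) up from 0.5, nothing more
```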
No. How reform happens is when a reformer notices that the majority is wrong and proceeds to convince a greater and greater number of people that the majority is wrong until that is not the majority view anymore. How would the reformer be able to do that if right and wrong were determined by the majority.
Right and wrong are not determined by the majority. Nobody beyond cultural relativists in Phil 101 seems to think they are. Rather, the majority view can serve as some evidence for the truth of a proposition. Ungit has lost the thread here. He is failing to track. Thus is the tragic tale of Ungit’s pwnage.
And Mr. Bulldog is wrong when he says that, “The is ought problem as a meta-ethical stopping point to dispel with any normative claim is a total nonstarter.” But this too is wrong. It is as though Mr. Bulldog has never considered the ability of a Divine law to work. The Divine law is by definition a great Ought. It is the solution to the Is/Ought problem. As Plato explained with his divine forms, the Divine law is a picture of how things Ought to be. The Is/Ought problem is only an issue for atheists.
If you’ve read the articles in our written debate, dear reader, you will realize that I have in fact considered the ability of divine law to work. I have explained a series of reasons why divine law is a terrible, horrible, no good, very bad account of morality. But we’ll get to that later.
Mr. Ungit has additionally missed my responses to the is ought problem. I’ll quote them in full.
This totally misrepresents my argument. The claim is not that its truth ontologically hinges on many people thinking it. The claim is merely that lots of people finding it intuitive counts in favor of believing it’s true. While you can’t get an ought from an is, if it is the case that lots of people think x is wrong, that is evidence that x is wrong. My opponent would presumably hold that one thing favoring Christian ethics is that it seems plausible upon reflection, yet that doesn’t run afoul of the is ought problem.
The is ought problem as a meta-ethical stopping point to dispel with any normative claim is a total nonstarter. One who is a moral realist is either a moral naturalist, in which they think that moral claims are a type of is claim so we can get them from other is claims, or non naturalists in which case they won’t get an ought from an is because they think it’s non natural. Either way, it’s fully impotent.
Perceptive readers will note that this reply was not addressed by Mr. Ungit. So will non perceptive readers. All who read will note this fact.
Utilitarianism Foundational Error # 3 - Unworkable in Real Life
I have repeatedly pointed out that utilitarianism fails dramatically because rather than having set rules it has endless happiness calculations for every situation. Setting aside the inability to calculate happiness at all (see foundational error #1) we have a giant moral hazard of the person being tempted and the person making the very subjective happiness calculation being the very same person.
Mr. Ungit would do well to consult the historical record and compare the track record of the early utilitarians to the early non-utilitarians to see which of them did better. Look at the lives of Bentham, Mill, and Sidgwick—compare them to the ancillary moral views of Aristotle. As I have pointed out earlier in the debate, it is very clear that Bentham, Mill, and Sidgwick did better.
Another limerick.
“Utilitarianism is unworkable.”
Says Ungit whose reasons are circular
You’d calculate frequently
With no time for being free
No! That would hurt those who are hurtable.
Spending all of one’s time calculating would not make things go well. If we spent all of our time calculating, we’d have no time for doing good things. Your objection to utilitarianism can’t be that it would lead to people not maximizing utility. For any action, if that action is bad in terms of utility, then utilitarianism wouldn’t endorse it.
If you look at modern utilitarians, they manage to make it work rather well. The most mainstream utilitarian movement is effective altruism, which has saved vast numbers of lives and dramatically improved the conditions of vast numbers of factory-farmed animals.
This explains why utilitarians rely on moral heuristics—general rules that tend to make things go best. Even if they’re not always perfect, adhering to them makes things go best overall.
If one is likely to be tempted to do evil then, following Aristotle, they would do well to hedge against benefitting themselves when in difficult moral conundrums.
Mr. Bulldog states that there is a right answer to my specific examples (of adultery, judicial rulings, etc). He states that “…people have misapplied Christianity, so this would take out Ungit’s moral system.” But this is absolutely false. With utilitarianism there is no “right” answer. It is a happiness judgement. What makes me happy is not what makes you happy. What pleases me is not what pleases you. My calculation and your calculation are going to be different. And both are very subjective. You don’t get to tell me what will make me happy.
This paragraph additionally represents profound confusion on a multitude of distinct fronts.
There is a right answer to what would produce the most happiness overall. Different things make different people happy, but utilitarianism is about maximizing overall happiness.
As I pointed out, this objection applies equally to Mr. Ungit’s views. People disagree about what god thinks we should do. So this would eviscerate Mr. Ungit’s morality.
The fact that people disagree doesn’t mean there’s no right answer—as any moral realist would accept. People disagree about theism, morality, and even scientific questions—but that doesn’t make it subjective.
No utilitarian that I know of has held that what maximizes happiness is subjective. There is, as I pointed out earlier, a science of happiness.
This is something that my friend has referred to as the “who chooses” fallacy. Obviously, whoever is making decisions will be the one who chooses. We sadly are unable to consult an omniscient being when we make decisions. But if utilitarianism makes better decisions on average than other theories, that’s a reason to prefer it. Even if utilitarianism instructs us to commit adultery only in wildly implausible situations while Christianity says we should never commit it—Christianity has no magic enforcement mechanism that guarantees people will adhere to it. So it’s a draw on that front.
If the problem is about application, then let’s compare the application. Ungit has granted that utilitarianism has produced people who are overwhelmingly on the right side of history like Bentham and Mill, who supported women’s suffrage and legalization of homosexuality in 1798—a rather impressive feat. Utilitarianism also has the theoretical infrastructure to go against every atrocity in history—all of which have come from excluding some sentient beings from our moral circle. Modern utilitarians like Singer and MacAskill take important actions to reduce existential risks and improve conditions of animals on factory farms—unlike those dastardly Christians whom Mr. Ungit represents.
Ungit confuses “what would make me happy,” with “what would make the world happiest overall.” One shouldn’t just do what makes them happy—this is rather obvious to anyone with even an elementary understanding of utilitarianism.
People can, in fact, tell others what will make them happy. The fact that a person thinks X will make them happy doesn’t mean that X will actually make them happy. Duh!
And this is a giant problem in practice.
And this is very different from moral questions with a divine law (such as Christianity). When someone commits adultery within Christianity, they are violating a clear rule given in the bible and taught throughout the centuries by the church. There are rules. Like them or not they are there. No calculation is required once the church has ruled on something. You can follow the church or not.
But the Church is silent on most important questions in your life. The Church will not tell you who to marry, where to go to college, what job to get, how to spend your days, or how to resolve any important issue. They merely give demands—whose piety is matched only by their vacuity.
The church, unlike utilitarianism, has nothing to say about what you should do with most of your time or your money. Given that some charities are thousands of times better than other charities, being indifferent to which charities are given to is a rather big problem for a system, if it is to provide a holistic account of how to live.
We don’t need the cathedral to tell us what to do in most cases. In the cases where decisions are difficult, we generally implicitly use utilitarianism.
I won’t rehash the points about why this would be irrelevant even if true, or the points about heuristics—no need to beat a dead horse.
This is a philosophy that can constrain us in our moments of temptation. Committing adultery, killing a prisoner, or stealing something from the corporation might make me happy but I have a rule that says not to do it anyway.
Utilitarianism is also opposed to doing such things in all plausible realistic situations. However, there are clear counterexamples to all of those. If committing adultery, killing a prisoner, or stealing from a corporation would prevent infinite child rape, it would clearly be good. Thus, we’ve already established what kind of person Mr. Ungit is, we’re just haggling over the price.
Thus, Ungit would do well to give cases where utilitarianism actually gives incorrect results, rather than cases where people wrongly applying utilitarianism would get the wrong result. The fact that wrong people could apply a theory incorrectly is not much of a problem for the theory. Fools could misread the bible and think it sanctions torturing people—yet that wouldn’t be an argument against the bible.
Indeed, Christians have frequently used the bible to justify killing. It isn’t the utilitarian foundational text (not that there is such a thing) that explicitly permits killing “everyone, men and women, young and old, everyone except Rahab and the others in her house.” Utilitarians don’t frequently defend genocide in the modern age. Nor do they defend the vicious beating of slaves—something the bible holds is only a punishable offense if the slave doesn’t get up after three days. We don’t believe in a nearly omnicidal deity, who allegedly drowned kittens.
Foundational Error # 4 - People do not generally speaking seek happiness
I would recommend Mr. Bulldog read Dostoevsky’s great Notes from the Underground. In that book, the author took on the utilitarian thinkers of his day that claimed that if we could just teach people what would make them happy they would act rightly. A perfect society could result from education. Dostoevsky told a story of a man who - over and over - makes self destructive choices. Dostoevsky is making the point that many people know very well what will make them happy but often make other choices for other reasons (often irrational ones).
If Mr. Ungit is giving me book recommendations—I shall give him similar writing recommendations. I would recommend Mr. Ungit read the blog posts I wrote in response, as part of the debate. After all, this was not a point I failed to address.
Here is what I said “NO!!!! Let me make one thing very clear—I am not rehashing John Stuart Mill’s mediocre argument. I do not think it works. I do not think that happiness is all that’s desired. If Ungit responded to my argument rather than the replica of Mill living rent free in his head, this would be obvious.”
I agree people often make self destructive choices and that sometimes the thing that will bring about the most well-being overall won’t bring about the most well-being for you—sometimes morality and self interest conflict.
None of my arguments rely on the premise that people seek happiness. However, as a side point, it’s obviously true that people generally do seek happiness, all else equal, in most situations. People tend to prefer happiness to sadness, joy to misery, bliss to unpleasantness. Pointing out that people sometimes don’t desire happiness doesn’t show that they don’t usually desire it. When deciding upon foods, for example, people primarily select them on the basis of what they’d enjoy.
I do not think that people tend to be utilitarians—nor that they tend to maximize their overall happiness. I employed over a dozen arguments for hedonism—if Ungit thinks this is my argument, then I question whether he has truly read my opening statement.
As Pascal says, people tend to be drawn to what is beautiful not what is true. And in the same way, people often pursue many other things besides happiness. Duty, beauty, momentary joy for long term suffering, habit, tradition, etc etc etc. The utilitarian often responds to this by saying we choose these things because they make us happy but as I noted in previous posts this has the effect of making the word happiness all encompassing and is essentially saying that every choice is always done out of happiness (and if this is the case, what is utilitarianism?).
Alas, this is not what I’d say—certainly not across the board. I think it’s caused by
People being wrong about what things matter (thinking that things other than happiness matter, more specifically).
Sometimes, people are wrong about what will make them happy.
Sometimes, people are just plain irrational, for example, when they procrastinate.
The definition of happiness is desirable mental states. Anything that is not a desirable mental state wouldn’t be happiness.
Other loose ends:
Euthyphro’s dilemma: He says my response is not adequate but he missed the point: God is good. We know what good is because of God. Having hypothetical on what it would be like if God was not good is nonsensical. This is like debating what squares would be like if they had round sides. Definitionally it is nonsensical.
Euthyphro’s dilemma seems to induce a collapse of reasoning in the minds of those who ponder it. The question is: is X good because god says so, or does god say so because X is in fact good? If the former is true, then if god decreed that we should torture infants for fun, we should torture infants for fun. The dilemma doesn’t involve asking what if god weren’t good—it instead asks what things would be like if god’s character were different. If the latter is true, morality is outside of god.
The statement god is good is ambiguous. It’s either
A) A tautology, saying god is godly. This runs into the first horn of the dilemma. If true this would mean that saying god is good is not a meaningful statement and god could decree anything. If god sanctioned the grisly scenario described here, it would be objectively moral.
B) Substantive. But if it’s substantive then morality is outside of god. For god is good to not be a tautology, there must be an extra concept of goodness outside of god.
On the analogy about squares: a square by definition can’t have round sides. A square with round sides is incoherent—it entails a contradiction, having both four and not four sides. However, there’s nothing incoherent about god having different whims, unless his whims are fixed by the concept of goodness—but if that’s true, then goodness couldn’t depend on god.
Ungit has not clarified which horn of the dilemma he is taking. He has also ignored my numerous other objections to divine command theory. I’ll quote them in full.
1 God cannot solve the problem of morality: if morality would be subjective absent a god, then god could not make it objective. If morality is simply a description of preferences, then god cannot make objective morality any more than he could make objective beauty, or objective tastiness.
2 Utilitarianism could be the best moral view even if morality were subjective (though I do think it’s objective). Additionally, there’s a robustly realist account of the goodness of pleasure. Much like mental states can have properties like brightness, they can have normative properties.
3 This runs into Euthyphro’s dilemma: is it good because god decreed it, or did god decree it because it’s good? If the former is true, then good is just whatever god decrees and there’s no reason good is binding—if satan were ultimate, he could decree that things were good. However, if god decrees it because it’s good, then that proves that good exists outside of god. Some try to avoid this problem by saying that god’s nature is good, so it’s not true either because of or in spite of divine decree. However, this just raises the deeper question of whether it’s good because it corresponds to his nature or whether it corresponds to his nature because it’s good. Thus, it doesn’t avoid the problem, because if god’s nature were evil then evil would be justified.
4 Either God has reasons for his commands or he doesn’t. If he does, then those reasons would ground morality; if he doesn’t, then his commands are arbitrary and lack reason-giving force.
5 There already has to be objective morality for there to be an objectively moral being. Thus, this is like arguing that we should believe that Millard Fillmore was a good president because it accounts for goodness.
6 God is presumably not the objective standard for Glubglosh—which is gibberish. Yet if one thinks that morality wouldn't exist without God, then saying God is good is like saying God is the standard for Glubglosh. God needs objective morality to exist.
7 This seems to obviously misidentify what morality is. Morality has to have reason giving force. However, it’s not clear how theistic morality does. God’s character being anti child murder misidentifies why child murder is bad. If God disappeared the badness of child murder would not disappear. The theist has to say that the badness of brutally torturing children has nothing to do with harm to children and everything to do with God’s character being disapproving. This is not a plausible account of moral ontology.
8 If God grounds morality then morality can just be grounded in what God would decree if he existed.
9 Morality has to either be true in all possible worlds or true in none. God can’t affect things that are true in all possible worlds any more than he can ground mathematics or logic.
10 In order for God’s commands to give us morality, we have to already have a moral obligation to obey God’s commands, which means God needs morality to exist. This argument came from none other than Mackie, the guy Craig quoted to prove atheists can’t have objective morality. He doesn’t think theists can either.
These arguments were, as we say in collegiate debate, “dropped.” Ungit did not supply a rebuttal to them.
In this article, I have listed multiple fatal flaws in Utilitarianism. In my conversation with Mr. Bulldog, he has done nothing to explain any of these fatal flaws. I don’t blame him for this as people have been pointing out these logical and practical flaws for centuries. The only surprising thing to me is that there are people out there that still genuinely believe this system of thought. My guess is that the reason it is still held is that it proceeds from atheism. People start by assuming there is no God (clearly wrong) and then are forced to try to get the Oughts that only a divine law can provide. This leaves the atheist with either some form of Utilitarianism (a fatally flawed philosophy as I have shown here) or nihilism. Most atheists are uncomfortable with nihilism so they end up being forced to cling to a fatally flawed philosophy.
These flaws are as far from fatal as any could be—they don’t even qualify as flaws in my mind. If Ungit thinks his arguments haven’t been addressed, he would do well to reread my responses to his arguments—they were, in fact, addressed.
Ungit also shows immense ignorance of the history of philosophy. There are lots of atheist philosophers who are not utilitarian—Wielenberg, Huemer, Parfit, Nagel, and many others. One need not be a utilitarian to avoid nihilism. Nothing about utilitarianism makes it an especially helpful lifeboat for maintaining belief in robust morality through the vicissitudes of belief. Well, nothing other than the potent arguments favoring it over other theories.
So here are the lines of argument favoring utilitarianism, presented in Bayesian form for maximum fun.
U will denote Utilitarianism
S will denote simplicity
E will denote explanatory power
EMC is expanding moral circle
RSOH is right side of history
IOR is incoherence of rights
POB is permissibility of birth
POC2 is permissibility of case 2 (described here)
POC3 is permissibility of case 3 (described here)
POC4 is permissibility of case 4 (described here)
POC5 is permissibility of case 5 (described here)
HA is Harsanyi’s argument
US is utilitarian syllogism (the one with 12 premises).
1 P(S|U) > P(S|~U).
Thus, S favors utilitarianism.
2 P(E|U) > P(E|~U).
Thus, E favors utilitarianism.
3 P(EMC|U) > P(EMC|~U).
Thus, EMC favors utilitarianism.
4 P(RSOH|U) > P(RSOH|~U).
Thus, RSOH favors utilitarianism.
5 P(IOR|U) > P(IOR|~U).
Thus, IOR favors utilitarianism.
6 P(POB|U) > P(POB|~U).
Thus, POB favors utilitarianism.
7 P(POC2|U) > P(POC2|~U).
Thus, POC2 favors utilitarianism.
8 P(POC3|U) > P(POC3|~U).
Thus, POC3 favors utilitarianism.
9 P(POC4|U) > P(POC4|~U).
Thus, POC4 favors utilitarianism.
10 P(POC5|U) > P(POC5|~U).
Thus, POC5 favors utilitarianism.
11 P(HA|U) > P(HA|~U).
Thus, HA favors utilitarianism.
12 P(US|U) > P(US|~U).
Thus, US favors utilitarianism.
These are the arguments that should be responded to, rather than random, ancillary extra points. I EVEN MADE THEM BAYESIAN, for fun and for clarity.
There are serious critiques to be had of utilitarianism—Huemer’s were sufficiently potent that I dedicated ten blog posts to dispatching them. Yet the nonsense Ungit churns out is not the way to criticize utilitarianism. He has brought a sorites paradox to a gun fight.
The first questions you have when you hear about utilitarianism are not potent objections to it—and if they’re all that’s presented by a critic of utilitarianism—things will end up going rather poorly for them. To really grapple with utilitarianism, one needs a better understanding of the theory—as well as various concepts in economics.
While I appreciate his writing ability and gusto—Ungit seems unwilling or unable to truly grapple with what’s being said. He levies ill thought out objections about application—while ignoring the ways utilitarianism ACTUALLY is applied in the real world. He invokes the alleged impossibility of applying it, while ignoring the economic analysis provided. He apparently is able to deduce a priori what god would want him to do in every situation—no thinking needed—and takes utilitarianism’s absence of revelation directly from god about various matters—ones which god never seems to directly comment on—to be a mark against it.
It’s abundantly clear—this is not the worldview shattering type of devastating objection that should cause any utilitarian on the face of this earth to abandon their views. Least of all me.
Some Objections To Moral Particularism
Why I don't think that we should be moral particularists
Moral particularism is the notion that there are no true moral principles—or at least that moral principles are not necessary to do ethics properly. Moral particularism is the view of ethics that I find most objectionable—it seems to rob us of our ability to do good ethics. Thus, in this article I shall explain why I am not a moral particularist.
1 Simplicity.
Moral particularism holds that there are no generalizations about all moral actions—no universal moral rules. Thus, it is infinitely complex, holding to an infinite number of distinct morally supervenient facts. Let’s consider how complex it is by looking at the SEP page on theories of information, which are used to figure out the complexity of a theory. Note, I’m rather an amateur here, so it’s possible I’ll misunderstand some information theory.
Nyquist’s function: Nyquist (1924) was probably the first to express the amount of “intelligence” that could be transmitted given a certain line speed of a telegraph system in terms of a log function: W = k log m, where W is the speed of transmission, k is a constant, and m is the number of different voltage levels one can choose from.
Here one would need an infinite amount of information to simulate the verdicts of moral particularism, because there is no simple formula. You would have to directly transmit verdicts about an infinite number of cases. An infinitely complex theory must be rejected if simplicity is to be a virtue at all.
Fisher information: the amount of information that an observable random variable X carries about an unknown parameter θ upon which the probability of X depends (Fisher 1925).
There’s a high chance I’m confused here, but this also seems to result in infinite complexity, because one needs an infinite number of parameters.
The Hartley function: (Hartley 1928, Rényi 1961, Vigo 2012). The amount of information we get when we select an element from a finite set S under uniform distribution is the logarithm of the cardinality of that set.
This seems the same as the previous one.
Shannon information: the entropy, H, of a discrete random variable X is a measure of the amount of uncertainty associated with the value of X (Shannon 1948; Shannon & Weaver 1949).
A clearer explanation is provided here. This too would make particularism infinitely complex, holding that there are an infinite number of distinct things that determine moral worth and no simple formula. One would need an infinitely long code to run the program for determining whether things are good or not.
Kolmogorov complexity: the information in a binary string x is the length of the shortest program p that produces x on a reference universal Turing machine U (Turing 1937; Solomonoff 1960, 1964a,b, 1997; Kolmogorov 1965; Chaitin 1969, 1987).
The same applies for Kolmogorov complexity. A toy sketch of the point follows below.
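To make the Kolmogorov-style point concrete, here is a minimal sketch in which compressed description length stands in as a crude proxy for program length. The setup, the toy cases, and the use of zlib are my own illustrative assumptions, not anything from the SEP entry.

```python
import json
import random
import zlib

# Crude proxy for Kolmogorov complexity: the length of a compressed
# description. A generalist theory is one short rule; a particularist
# "theory" is a bare table of case-by-case verdicts (toy data below).

generalist_rule = b"verdict(case) = 'good' if utility(case) > 0 else 'bad'"

random.seed(0)
particularist_table = json.dumps(
    {f"case_{i}": random.choice(["good", "bad"]) for i in range(10_000)}
).encode()

print(len(zlib.compress(generalist_rule)))      # stays small and constant
print(len(zlib.compress(particularist_table)))  # grows with the number of cases
```

The rule’s description length is fixed no matter how many cases it covers, while the table’s grows without bound, which is the sense in which particularism is infinitely complex.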
Moral particularism is a terrible theory in terms of simplicity. It requires postulating an infinite number of different things with no common uniting factor—that would be infinitely complex by any standard of information. It does worse than positing 100^100^100^100^100 fairies who collectively decide moral judgements, because those would be easier to simulate. Thus, this is not just an argument that undermines it a bit. This is a total headshot to particularism. It is infinitely complex!
When asked why happiness is good, the answer I would tend to give is that it just is; there’s no deeper account. It’s rather like asking why 1+1=2 or why sound arguments yield true conclusions. But the particularist has to say that an infinite number of things are true, with no deeper explanation.
2 There are some universally true moral principles
This argument says
Premise 1: If there are true moral principles, moral particularism is false
Premise 2: There are true moral principles
Therefore, moral particularism is false
Premise 1 is true by definition.
Premise 2 is hard to deny. Here are several moral principles that seem true:
1 If there is a large group of people who:
A) Are the only existing sentient beings
B) Have a very high quality of life
C) Don’t want to be tortured
D) Would provide no benefits to anyone else by being tortured to death
E) Are infants who have not harmed anyone at any point
F) Will not harm anyone at any future point
then you should not torture them to death.
2 If some action would cause an infinite number of people, who have not harmed anyone at any previous point and will not harm anyone at any future point, to endure infinite suffering when otherwise they would have lived good lives, you shouldn’t take the action.
3 If some action violates rights, decreases desire fulfillment, decreases joy, decreases virtue, causes people’s well-being to be less proportional to their virtue, increases impure pleasures, decreases virtuous pleasures, and decreases objective list fulfillment, that action should not be taken.
Moral particularists must deny that these moral rules are universally applicable.
3 Partners in crime with scientific particularism.
Premise 1: If moral particularism is true, scientific particularism is true
Premise 2: Scientific particularism is not true
Therefore, moral particularism is not true
Premise 2 is widely accepted; there are virtually zero scientific particularists. Everyone thinks there are either true laws or law-like generalizations of physics.
Premise 1 is also plausible. The reasons people give for accepting moral particularism seem to also apply to scientific particularism. Dancy says, for example,
The first reason is that absolute principles cannot conflict, and that if they cannot conflict a vital aspect of our moral lives (that is, conflict) has been left out of account altogether by any theory that supposes that morality is entirely governed by absolute principles.
If two supposed absolute principles conflict in a single case, one of them must be abandoned. Suppose, for instance, that one principle says that all actions of type A are wrong and another says that all actions of type B are right. Suppose also that no action can be both overall wrong and overall right, and that it is possible for an action to be of both types, A and B. Things are all right so far, but if there were an action of both types, one or other of the principles would have to be abandoned. But this means that we have no room for conflict. What is meant by moral conflict here is not conflict between two individuals, but conflict between reasons for and against in a given case. There cannot be that sort of conflict, if all reasons are specified in absolute principles, because if the reasons conflicted the principles specifying them would conflict, and this would just show that one of the principles was a fraud. Conflict would, then, never be more than a product of our own misconceptions. There would be no real conflict.
What this criticism amounts to is the complaint that we need to be able to make sense of cases in which there are moral reasons on both sides, for and against. But we cannot do this effectively if all moral reasons are specified in absolute principles. Morality cannot, therefore, be just a system of absolute principles. The only way in which we could continue to think of morality as governed by absolute principles is to suppose that there is only one such principle, so that there is no possibility of conflict between principles, or to arrange things in some other way so that the principles are incapable of conflict. (Even then, of course, there would be the worry that conflict is real, and that to arrange things so that conflict is merely apparent is to erase something important.) We know of one position that offers only one principle: classical utilitarianism. The argument against this ‘monistic’ position is rather different. The argument is the direct claim that monism is false; there is more than one sort of relevant property, or more than one way in which features can get to be morally relevant. So a position with only one absolute principle is false, and one with more than one such principle cannot make proper sense of conflict.
But the same is true of scientific particularism. If scientific particularism is false, there are universal laws of physics that never conflict. Additionally, given the many different contributory facts to scientific features, it seems that one could make a similar reply, namely, that a single absolute principle can’t account for the diversity of scientific factors that go into any individual case.
(Additionally, I would argue that the utilitarian account is plausible across all cases, as I have argued previously in this series.)
The second prong of the particularist attack is to ask why we should suppose that a feature that counts in favour in one case must count the same way wherever it appears. To this question, I think, no real answer has been produced. Generalists tend to point out that if one claims that a feature counts in favour here and against there, one has something to explain. But the particularist is happy to admit this. It is true that if a feature counts in favour in one case and against in another broadly similar case, there must be an explanation of how this can be. That explanation will presumably be given by pointing to other differences between the cases. In the second case, perhaps, something that is required for the feature to count in favour is in fact absent, though it was present in the first case. Such explanations must be available, and they can be found. None of this does anything to restore a generalist conception of how reasons function.
One could make a similar reply in the scientific case. Why think that a law’s applying one way in one case means it must apply the same way in another, similar case?
The third prong of attack on contributory generalism involves asking for an appropriate epistemology. How are we to tell, from what we can discern case by case, that this feature will function in the same way wherever else it appears? Ross, our paradigm generalist, holds that we start with the recognition that this feature counts in favour here, but that we can immediately tell (by a process which he calls ‘intuitive induction’) that it must count in favour everywhere. The question is how this is supposed to work. What is it that is discernible in one case and tells us that what we have here must repeat in all other cases? (Ross rightly does not suppose that we learn our moral principles by ordinary induction.) The standard, and probably the only, answer to this question is wrong. This answer amounts to an account of what it is to make a difference in a particular case—what it is to be relevant here. That account understands a feature as relevant here if and only if, in any case where it is the only relevant feature, it would decide the issue. Now if this account of particular relevance were defensible, we would indeed have some reason to suppose that what is relevant here would be relevant in any other situation. For on each further situation it will still be true that if it were the only relevant feature, it would decide the issue. So relevance is indeed general relevance, on this showing. And this gives the generalist the epistemology he needs, for it is now easy to see how, in discerning that this feature matters here, we immediately see that it would make the same difference on every occurrence. For it is true of it on each occurrence that if it were the only relevant feature, it would decide the issue.
Sadly, the account of relevance that this all depends on is not defensible. It is, after all, true of any feature whatever that if it were the only relevant feature, it would decide the issue. The word ‘relevant’ appears within this formulation, and it cannot be removed. For if we said merely that if this feature were the only feature, it would decide the issue, we would have said something that is probably both false and, worse, incoherent. It would be incoherent because the idea that a feature could be present alone, without any other features whatever, is surely nonsense. The idea that an action could be merely kind, say, without having any other features at all, makes no sense at all. Further, there may be some features that can only be relevant if some other feature is also relevant—features that (in terms of reasons) only give us reasons if some other feature is giving us reasons as well. For instance, in the Prisoner’s Dilemma one prisoner only has reasons if the other one does. If this can occur, any ‘isolation test’ for reasons must miss some reasons out. Finally, trying to isolate the contribution of a feature by asking how things would have been if no other feature had made any contribution is, when one comes to think of it, a rather peculiar enterprise. It is uncomfortably like trying to determine the contribution made by one football player to his team’s success today by asking how things would have been if there had been no other players on the field. So the notion of relevance that is required as a basis for generalist epistemology is unacceptable.
One could make a similar reply in the scientific case. How would we know that the laws of physics apply universally?
Additionally, in the moral case it seems like by reflecting on features we can deduce that they count for or against some state of affairs. Reflecting on the nature of suffering, for example, allows us to realize that the fact that some action would cause suffering counts against doing that act.
Particularists often argue that whether some fact counts for or against doing some act is context dependent. However, one could say the same about the scientific case: whether some celestial body will pull another toward it depends on the full context. This doesn’t mean scientific particularism is true.
4 Poor track record of moral intuitions
Our moral intuitions are very often wrong: most people throughout history have believed terrible things, such as that slavery was permissible, that women should be second-class citizens, and that child abuse was acceptable. This undermines the arguments for particularism. If general principles govern an infinite number of cases, it’s unsurprising that they would sometimes seem wrong to us. Our moral intuitions sometimes produce bad results, so we’d expect them, in some cases, to diverge from the true moral principles.
This also undermines particularist epistemology. If our case-specific judgements are often dreadfully wrong, a moral system that relies on them to make decisions is unreliable. The particularist can always appeal to specific facts of the case which supposedly make principles impossible to apply to it.
To quote this lovely book “moral particularism as a decision procedure provides far too much scope for special pleading and rationalization of self-serving action.”
5 Moral particularism undermines moral knowledge
A) It undermines thought experiments. If I’m trying to argue for the moral obligation to care about the future, care about the interests of animals, or abolish slavery, I would likely appeal to similar cases. For example, if I ask someone what the justification is for enslaving people of a particular skin color, a generalist might say something like “Well, people of that skin color aren’t morally relevant,” a principle that could then be tested against other cases. The particularist, however, can just appeal to particular facts about the case, such that thought experiments can’t be employed. This undermines our ability to employ moral reasoning, because the thought experiment will never be identical to the case at hand, so the particularist can always dig in their heels and say that the specific case at hand has unique features.
B) Moral particularism must hold that moral knowledge is a posteriori. However, combined with particularism’s commitment to there being no general principles, that makes it impossible to make a priori judgements about other cases, as this book argues. According to moral particularism, moral facts can’t be deduced a priori, because they depend on the specific features of a case. Thus, we need to experience a case in order to know facts about it. However, this would mean we can’t make judgements about merely imagined cases that it seems like we can know a priori, such as that an alien holocaust that inflicts massive suffering and benefits no one would be bad.
C) Moral particularism undermines moral knowledge based on experience, because we can’t extract general principles. Small children, for example, seem to learn that it’s bad to hurt others by being hurt themselves and by seeing others’ reactions to being hurt. If particularism is true, though, they can’t extrapolate the general principle that it’s wrong to hurt people. It’s then unclear how we can have moral knowledge that we can apply to other cases. What are we learning in moral cases if we’re not gaining knowledge of moral principles?
Dancy replies
The third question asks us what relevance other cases do have to a new case, if not the sort of relevance that the generalist supposes. The answer to this is that experience of similar cases can tell us what sort of thing to look out for, and the sort of relevance that a certain feature can have; in this way our judgement in a new case can be informed, though it is not forced or constrained, by our experience of similar cases in the past. There is no need to suppose that the way in which this works is by the extraction of principles from the earlier cases, which we then impose on the new case.
This raises several problems.
1 It leaves us unable to determine when we should expect the implications of one case to spill over to other, like cases. If there aren’t true moral principles, what reason do we have to think that harming someone is morally wrong in general, rather than merely wrong in a few specific circumstances?
2 How does experience inform us of anything if there are no principles that can be extracted? If, for example, scientific particularism were true, we couldn’t extrapolate trends from past cases, because there would be no static rules governing all cases. Moral particularism holds that the same is true of ethics.
3 If there are no true moral principles, what reason do we have even to think that like cases should receive similar moral judgements? If, for example, Z = X^Y^A^B^C + BCA - LR^N, where each letter is a variable, we couldn’t extrapolate trends from the value of any one variable; all of them depend on the others. In the particularist case, it’s even more dire, for there are an infinite number of variables. Thus, how can we extract any general lessons about other cases if the permissibility of some action depends on an infinite number of specific features of the case? (The sketch below illustrates the problem.)
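Here is a minimal sketch of that worry; the particular function, the hidden factors, and the numbers are all my own illustrative assumptions, loosely echoing the Z above.

```python
import random

# Toy illustration: if an outcome depends on many interacting hidden
# factors, as the particularist's picture suggests, observing the same
# visible feature tells you almost nothing about new cases.

random.seed(1)

def Z(x, hidden):
    a, b, c, l, r = hidden
    return x**a * b**c + b * c * a - l**r  # loosely echoing the Z above

for _ in range(3):
    hidden = [random.uniform(0.5, 2.0) for _ in range(5)]
    print(Z(2.0, hidden))  # same visible X = 2, wildly different Z each time
```

With five hidden factors, extrapolation already fails; with the particularist’s infinitely many, no general lesson survives.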
D) Particularism leads to skepticism because we can’t reach reflective equilibrium. This has been argued persuasively by Torbjörn Tännsjö. The argument is as follows: in order to have knowledge, our beliefs can’t be analyzed in a vacuum; they must be able to form a coherent web. However, on particularism, there is no coherent web to be formed, just a series of disconnected moral judgements. Thus, on particularism we can’t have any moral knowledge. Given that our views can’t be wrapped into a coherent web, we can’t be confident in any of them, as coherentism is the only appropriate method for ethics.
Note, this claim about coherentism being the only appropriate method for ethics is consistent with the view that foundationalism is true broadly. This is not about our fundamental axioms, just about higher-order ethical principles. When asked why I accept, for example, the claim about 10!!! dust specks being worse than torture, I can appeal to a series of other judgements that are more plausible. However, the particularist rejects any such judgements, so they can provide no deeper account of how they have moral knowledge.
If asked how they know whether we should flip the switch in the trolley problem, the moral particularist can’t appeal to principles or to other scenarios, because those other scenarios lack identical features and so require distinct moral judgements. Thus, only the generalist can have moral knowledge. For the particularist, any moral judgement they make has no deeper explanation: the reason I shouldn’t torture Jim has no deeper account relating to the badness of torture.
This means particularism makes ethical reasoning impossible; we can appeal neither to thought experiments nor to general principles. That leaves us fully unable to correct bad moral judgements, because moral particularism can’t account for the cluster structure of moral reasoning, in which similar cases tend to have similar results.
Caplan Is Wrong--You Should Happily Live Forever
Contra Caplan On Continuous Contentment
Caplan (2008) has given the following scenario:
“Suppose you were offered the following gamble:
“1. With probability p, you will live forever at your current age.
“2. With probability (1-p), you instantly, painlessly die.
“What is your critical value of p? If you combine expected utility theory with the empirical observation that happiness is pretty flat over time, it seems like you should be willing to accept a very tiny p. But I can’t easily say that I’d accept a p<1/3.
“Perhaps the main reason is that all the people I care about would suffer a lot more from my instant death than they’d gain from my immortality. But even if I were fully selfish, I wouldn’t be enthusiastic even at p=.5. Should I get my head examined?”
I wouldn’t want to comment on whether Caplan should get his head examined. However, Caplan should take a small chance of living forever, assuming that his life would continue to be good indefinitely. The following very plausible principles, when combined with transitivity, are sufficient to justify taking the low chance of living forever.
The first principle is the Lengthening Principle: let M be a number of years lived and N be a greater number of years lived, and let P be some probability less than 100%.
For some values of N and P, a P chance of living N years is better than a 100% chance of living M years.
The second principle is modified independence: if A > B, then a P chance of A is better than a P chance of B, where P represents the same probability in both.
To see how these principles are sufficient to justify the conclusion, suppose one is deciding between a 100% chance of 60 years and a 99.9% chance of 200 years. Clearly the 99.9% chance of 200 years of life would be better.
As per the second principle, a 99.9% chance of 200 years would be worse than a 99.8001% chance of 400 years of life, if a 100% chance of 200 years would be worse than a 99.9% chance of 400 years. After all, 99.8001% is 99.9%^2. That is, if a 100% chance of 200 years is less good than a 99.9% chance of 400 years, then a 99.9% chance of (a 100% chance of 200 years) is less good than a 99.9% chance of (a 99.9% chance of 400 years), which is just to say that a 99.9% chance of 200 years is worse than a 99.8001% chance of 400 years. We can keep iterating this process, each time finding a lower chance of a much longer life to be better than a certainty of a shorter one; the iteration is sketched below.
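Here is a minimal sketch of the iteration, with toy numbers of my own choosing (doubling the lifespan at each step and paying 0.1% of probability for it), just to show how mild each trade is and where the chain ends up.

```python
# Toy sketch of the chaining argument: each step trades a 0.1% sliver
# of probability for a doubled lifespan. Every individual step is the
# kind of trade the principles above endorse.

p, years = 1.0, 60.0
for step in range(1, 21):
    p *= 0.999   # give up 0.1% of the probability...
    years *= 2   # ...for twice the lifespan
    print(f"step {step}: P = {p:.4f}, years = {years:,.0f}")

# After 20 steps, P is still about 0.98 while the lifespan is roughly
# 63 million years: a near-certainty of an astronomically long life,
# reached through individually innocuous trades.
```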
One could reject the modified independence principle. However, this principle is widely accepted; denying it seems to require a confusion of the type involved in rejecting transitivity. To see why this principle is obvious, compare a 99% chance of a 99% chance of X to a 99% chance of Y, in a world where we conclude that a 99% chance of X is better than certainty of Y. The 1% chance of getting neither X nor Y, regardless of which action is chosen, doesn’t affect the decision at all. There’s a 1% chance that nothing is had no matter what one chooses. Thus, that portion of the probability space should be ignored in decision making.
Within the other 99% of the probability space, the choice is just between a 99% chance of X and a 100% chance of Y. But we’ve already granted that a 99% chance of X is better than a 100% chance of Y. Thus, the principle follows as a basic logical matter once the terms and ideas are properly considered.
Rejecting this requires us to reject dominance. Consider the problem in the following way. Imagine there are two random number generators, each of which generates numbers between 1 and 100. You can choose between two offers.
Offer 1: Use random number generator 1. If it generates any number other than 100, you get Y.
Offer 2: Use random number generator 1. If it generates any number other than 100, use random number generator 2. If that generates any number other than 100, you get X.
Random number generator 1 will either generate 100 or not. If it does generate 100, the offers are equal. If it does not, then one is choosing merely between a 99% chance of X and a 100% chance of Y. Given that we’ve already stipulated that a 100% chance of Y is less good than a 99% chance of X, the offers are either equal or offer 2 is better. (A quick simulation below checks this decomposition.)
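A quick Monte Carlo sketch of the decomposition, under my own illustrative setup (sharing the draw of generator 1 between the offers, exactly as the dominance argument requires):

```python
import random

# Simulate the two offers using a shared draw of RNG 1, per the
# dominance argument: the runs split into a "tie" branch (RNG 1 = 100,
# both offers pay nothing) and a "live" branch, inside which offer 2
# is just a 99% chance of X.

random.seed(0)
trials = 1_000_000
tie, live, x_in_live = 0, 0, 0

for _ in range(trials):
    if random.randint(1, 100) == 100:      # RNG 1
        tie += 1                           # both offers yield nothing: equal
    else:
        live += 1                          # offer 1 pays Y here for sure
        if random.randint(1, 100) != 100:  # RNG 2
            x_in_live += 1                 # offer 2 pays X

print(tie / trials)      # ≈ 0.01: the shared branch where nothing happens
print(x_in_live / live)  # ≈ 0.99: inside the live branch, offer 2 is the
                         # 99%-chance-of-X gamble already judged better than Y
```

Since the offers agree on the tie branch and offer 2 wins on the live branch, dominance favors offer 2.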
Thus, modified independence should be accepted. I’ve defended transitivity previously in this chapter. In order to hold the view espoused by Caplan, one would have to reject the lengthening principle. However, rejecting that principle is not plausible.
Even if we accept that utility has diminishing marginal value (from a moral standpoint), as long as its value never drops to zero, the utility will be sufficiently vast to make the lengthening principle true. Even if one accepts that the moral value of some amount of utility is a logarithmic function of utility, the lengthening principle would still be true.
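To see why, here’s a minimal check with toy numbers of my own (a 0.1% chance versus a certain 60 years, with moral value taken to be the logarithm of years lived): because the logarithm is unbounded, some lifespan always makes the gamble better.

```python
import math

# Minimal check: if the moral value of a life of N years is log(N),
# a p chance of N years beats a certain M years once p * log(N) > log(M).
# Since log is unbounded, such an N always exists for any p > 0.

p, M = 0.001, 60         # toy numbers: a 0.1% chance vs a certain 60 years
log_N = math.log(M) / p  # threshold where p * log(N) = log(M)
print(f"N must exceed e^{log_N:.0f} years")  # e^4094: vast, but finite
```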
Thus, one would have to hold that vast amounts of utility beyond a certain point have either zero moral value or asymptotically approach zero moral value. However, holding such a view is deeply implausible.
First, suppose you came across some entity that had been around since the beginning of the universe. Additionally, suppose that the universe was far older than we thought, so the entity was really 100^100^100 years of age. The entity has lived a pretty good life up until this point. It’s not at all plausible that the moral value of benefits to the entity would be near zero.
Suppose you can either give a slight benefit to a human or give benefits to that entity equivalent to the sum total of all well-being experienced during all of human history. Rejecting the lengthening principle by holding that the moral value of happiness beyond a certain threshold is near zero would lead to the conclusion that one should give the slight benefit to the human (say, equivalent to eating one chocolate chip), instead of producing for that very old entity the sum total of all joy experienced during the history of the world. This is deeply implausible.
Another related implausible implication follows from this judgment. Suppose that one had a dream during which they subjectively experienced a vast amount of time. To them, the dream felt like living 100^100^100 years. They had experiences for eternities, though in the external world only a few hours passed. After they woke up, this account would say that making them well off would become almost entirely morally irrelevant. This is not at all plausible. Long dreams shouldn’t rob a being of any moral worth.
If one bites the bullet on these cases, even larger problems arise relating to the experience of pain. If pleasure has diminishing marginal value, does pain also have such diminishing marginal value? Whether the answer is yes or no, the consequences are very implausible.
First, suppose the answer is yes: pain has diminishing marginal value, so the moral badness of pain diminishes the more pain one has experienced. If this is true then, returning to the being that has lived 100^100^100 years, torturing such a being would be worth it to prevent a mere pinprick. They’ve lived so long and experienced so much pain that, if pain has decreasing marginal value, the marginal disvalue of their being tortured would be nearly zero. This, however, is deeply implausible. It is still bad to torture beings that have been around for cosmic timescales for the sake of slight benefits to others.
One might appeal to rights, arguing that you shouldn’t torture them to prevent other slight pains merely because doing so would be a rights violation, while maintaining that the disvalue of their pain still diminishes. This, however, runs into similar problems. The scenario can be modified so that instead of torturing them to prevent a pinprick, one can either prevent their torture or prevent a pinprick. The person who adheres to the declining marginal disvalue of pain would have to say that one should prevent the pinprick. This is deeply implausible.
One could argue that there’s only declining marginal value of pain if their life is painful overall. This, however, is deeply implausible. It would hold that if a person has been miserable for 100^100^100 years, then torturing them wouldn’t be very bad--and it would be less bad than a pinprick. This is ludicrous!
One might object that after being in pain for 100^100^100 years, they are less harmed by the pain, so the judgment isn’t counterintuitive. This, however, rests on a confusion. While experiencing lots of pain might make one less bothered by pain, such considerations are already accounted for by the utilitarian account I defend. For the purposes of the hypothetical, it is stipulated that their subjective experience of disliking the pain is equivalent to that of the average person being tortured: they react to pain as most of us would, finding it equally unpleasant.
Thus, this account which holds that the more pain one experiences, the less bad their pain is, will not work.
Now suppose that their answer is no, and they hold that pain doesn’t have decreasing marginal value. Pain is just as bad, regardless of how much pain one has experienced previously. This judgment produces implausible results when combined with belief in the declining marginal value of pleasure.
If pleasure above a certain threshold has virtually no moral value, but pain continues to have the same value, then from the standpoint of this entity who has been around for 100^100^100 years, they should not be willing to trade off any amount of pain for any amount of pleasure. If pleasure produces virtually zero value for them, but pain still produces disvalue, then they should prioritize minimizing pain over pursuing pleasure. This would produce the following implausible verdicts.
The being can undergo a hundred years of unfathomable bliss, each experience as pleasurable as the sum total of all joys in human history. However, if they do this, they’ll stub their toe once. This account would say they shouldn’t make that tradeoff. Such a judgment is ridiculous.
On this view, the being should not be willing to accept a papercut in exchange for having its pleasure increase a hundred thousand fold. One might object that increasing their pleasure a hundred thousand fold would reduce their pain by more than the papercut would cause. However, we can stipulate that the being will experience no future pain, so no pain is being reduced.
Suppose the being knows that in the future they’ll experience one pinprick. However, for the rest of their existence they’ll be unfathomably well off. This principle would hold they should commit suicide. This is deeply implausible.
Such judgments are hard to stomach. However, additional problems present themselves for such a view.
Suppose you personally found out that there was a 50% chance that you’d been around for 100^100^100^100 years previously and a 50% chance that you hadn’t. Then a genie appears to you and gives you two offers.
Offer 1: If you’ve been around for 100^100^100^100 years you’ll get a vast amount of utility--as much as has been experienced previously throughout all of human history.
Offer 2: If you haven’t been around for 100^100^100^100 years, you’ll get a tasty chocolate bar.
It seems clear one should take offer 1. However, offer 1 would only benefit you if you’ve experienced vast amounts of previous utility, which on the view in question would mean your extra utility has virtually no value. Despite that, it’s hard to imagine that one should take offer 2.
One might object that having no memories of your previous lives means that the people in them were not you. We can modify the scenario to avoid this. Suppose that right before you gain the utility, you’d have all of the memories of the 100^100^100^100 years of existence. This still doesn’t seem to change the verdict.
Additionally, such a view about personal identity is deeply controversial, and most people likely don’t accept it. It’s hard to imagine that having your memories erased would turn you into a different person. After all, when you dream, you often don’t have access to your memories. Despite that, your dream self still seems like you.
Finally, we can modify the scenario so that you still have access to the memories but just are not thinking about them. Strangely, you haven’t thought of your previous lifetimes’ worth of memories at any point in your current lifetime. You still have access to them; you just haven’t thought about them, and you still don’t think about them when presented with the offer.
This scenario may seem bizarre. However, not focusing on particular memories is not contradictory, merely implausible in the scenario. Yet a scenario being implausible does not mean that it has no normative implications.
It’s clear that even if you don’t think about particular memories, that doesn’t mean you aren’t the person who you were during those memories. I am not currently thinking about memories from when I was 8, but that doesn’t mean, even on the memory-based theory of personal identity, that I’m not the same person as my 8-year-old self. The memory theory is about the memories we have access to, not the ones we happen to be recalling at any particular moment.
Finally, suppose that I have memories of when I was a child, during which I had memories of my previous lives, during which I had memories of the previous ones, during which I had memories of previous ones, etc. If you remember events during which you remembered particular events, then you are still the same person on the memory theory who was around during those original events. I am, for that reason, still the same person as my two year old self, despite not having many memories of being 2.
Thus, with such questions out of the way about whether or not one remains the same person as their much younger self, and whether they would be the same person they were in previous lives, we now turn to the specific moral verdict and why it’s counterintuitive. Is it really plausible that the value of making me well off would hinge on whether I’d experienced lots of well-being in previous lives? The answer seems to clearly be no. Regardless of whether one experienced lots of happiness in past lives or dreams, that doesn't seem to influence the value of making them better off. Finding out that I’d been happy in previous lives wouldn’t make me any less concerned about how well my life goes.
It’s hard to maintain that the things that have happened to me decades or more ago would really affect how good my happiness is for me. If there are events I have no memory of, these wouldn’t seem to undermine whether my happiness makes me better off. It seems like each moment of experience has value which is not affected by causally inert previous moments.
A final thought experiment can elucidate the unintuitiveness of the rejection of the lengthening principle. Suppose that you are offered a 1% chance of eternal life or a 100% chance of 100 years of life, with two additional modifications.
After each 100 year period, your memories will be erased and you’ll be placed into a different situation.
If you live 100 years, you have no guarantee that the you that will exist will be the being that you currently think of as you. The life that you are currently living is a random 100 year period in the first 100^100^100 lifetimes. Thus, the probability that if you live 100 years you’ll live the life that you currently are living is 1/100^100^100, while if you accept a 1% chance of eternal life, the odds are 1%.
In this case, it seems intuitive that we should take the 1% chance of eternal life. After all, the other option now means that the lifetime we currently occupy will almost certainly never exist. Once our current life is no longer guaranteed, so that status quo bias no longer anchors us to it, the intuition flips. This shows that the original judgment is rooted in status quo bias: if we conceived of our current life as a random 100-year period, the intuition would go away.
This verdict about living forever is somewhat unintuitive. However, there are lots of reasons to distrust our intuitions about such cases.
Status quo bias. Humans have a status quo bias. It is no coincidence that, when deciding upon the optimal length of a lifetime before diminishing marginal returns kick in, the intuitions of most people seem to be about the average length of time. Humans often favor status quo lifetimes, even if longer ones would be better. I recall when I was ten or so thinking that I’d accept a certainty of living to 30 over the current expected trajectory of my life. Now, being 18, that seems obviously foolish. It’s much easier to think that an age is the optimal life length when one is far away from approaching that age.
Large number biases. Humans are quite bad at reasoning about large numbers, as was argued in part 3 about the repugnant conclusion. It’s hard to conceptually grasp the difference between a thousand years of life and a billion years. Thus, our intuitions about such cases would be expected to be unreliable, particularly given the background information about the unreliability of intuitions about large numbers.
The scenario is hard to imagine. Humans cannot conceive adequately what it would be like to live for a billion years, much less forever. Thus, our intuitions here are likely to be far off. When imagining it, we might imagine finding billions of years of life to be tedious. It’s hard to imagine enjoying a trillion years of life.
A Bayesian Analysis Of When Utilitarianism Diverges From Our Intuitions
Why we shouldn't worry too much about the cases in which utilitarianism goes against our intuitions
“P(B|A) = P(B) × P(A|B) / P(A)”
—Words to live by
Lots of objections to utilitarianism, like the problem of measuring utility, rest on conceptual confusions. However, of the ones that don’t rest on basic conceptual confusions, most of them rely on the notion that utilitarianism is unintuitive. Utilitarianism entails the desirability of organ harvesting, yet some people have strange intuitions that oppose killing people and harvesting their organs (I’ll never understand such nonsense!).
In this post, I will lay out some broad considerations about utilitarianism’s divergence from our intuitions and explain why these divergences are not very good evidence against utilitarianism.
The fact that there are lots of cases where utilitarianism diverges from our intuitions is not surprising on the hypothesis that utilitarianism is correct. This is for two reasons.
There are enormous numbers of possible moral scenarios. Thus, even if the correct moral view corresponds to our intuitions in 99.99% of cases, it still wouldn’t be too hard to find a bunch of cases in which the correct view doesn’t correspond to our intuitions.
Our moral intuitions are often wrong. They’re frequently affected by unreliable emotional processes. Additionally, we know from history that most people have had moral views we currently regard as horrendous.
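A toy Bayesian calculation, with numbers that are entirely my own illustration, makes the point concrete: if divergences are almost guaranteed whether or not the theory is correct, finding them barely moves the posterior.

```python
# Toy Bayes update (all numbers illustrative). If even a correct theory
# is nearly guaranteed to clash with intuition somewhere, observing
# clashes is almost no evidence against it.

prior = 0.5                  # prior credence that the theory is correct
p_clash_if_correct = 0.9999  # P(some clash cases exist | theory correct)
p_clash_if_wrong = 1.0       # P(some clash cases exist | theory wrong)

posterior = (p_clash_if_correct * prior) / (
    p_clash_if_correct * prior + p_clash_if_wrong * (1 - prior)
)
print(posterior)  # ≈ 0.49997: finding clash cases is nearly zero evidence
```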
Because of these two factors, our moral intuitions are likely to diverge from the correct morality in lots of cases. The probability that the correct morality would always agree with our intuitions is vanishingly small. Thus, given that this is what we’d expect of the correct moral view, the fact that utilitarianism frequently diverges from our moral intuitions isn’t evidence against it. To see whether such divergences give any evidence against utilitarianism, let’s consider some features we’d expect the correct moral view to have.
The correct view would likely be able to be proved from lots of independent plausible axioms. This is true of utilitarianism.
We’d expect the correct view to make moral predictions far ahead of its time—like for example discerning the permissibility of homosexuality in the 1700s.
While our intuitions would diverge from the correct view in a lot of cases, we’d expect careful reflection on those cases to reveal that the judgments given by the correct moral theory are hard to resist without serious theoretical cost. We’ve seen this over and over again with utilitarianism: with the repugnant conclusion, torture vs dust specks, headaches vs human lives, the utility monster, judgments about the far future, organ harvesting cases and other cases involving rights, and cases involving egalitarianism. This is very good evidence for utilitarianism. We’d expect incorrect theories to diverge from our intuitions, but we wouldn’t expect careful reflection to lead to the discovery of compelling arguments for accepting the judgments of the incorrect theory. Thus, we’d expect the correct theory to be able to marshal a variety of considerations favoring its judgments, rather than just biting the bullet. That’s exactly what we see when it comes to utilitarianism.
We’d expect the correct theory to do better in terms of theoretical virtues, which is exactly what we find.
We’d expect the correct theory to be consistent across cases, while other theories have to make post hoc changes to the theory to escape problematic implications—which is exactly what we see.
There are also some things we’d expect to be true of the cases where the correct moral view diverges from our intuitions. Given that in those cases our intuitions would be making mistakes, we’d expect there to be some features of those cases which make our intuitions likely to be wrong. There are several of those in the case of utilitarianism’s divergence from our intuitions.
A) Our judgments are often deeply affected by emotional bias
B) Our judgments about the morality of an act often overlap with other morally laden features of a situation. For example, in the case of the organ harvesting case, it’s very plausible that lots of our judgment relates to the intuition that the doctor is vicious—this undermines the reliability of our judgment of the act.
C) Anti-utilitarian judgments get lots of weird results and frequently run into paradoxes. This is more evidence that they’re just rationalizations of unreflective seemings, rather than robust reflective judgments.
D) Lots of the cases where our intuitions lead us astray involve cases in which a moral heuristic has an exception. For example, in the organ harvesting case, the heuristic “Don’t kill people,” has a rare exception. Our intuitions formed by reflecting on the general rule against murder will thus be likely to be unreliable.
Conclusions
Suppose we had a device that had a 90% accuracy rate in terms of identifying the correct answer to math problems. Then, suppose we were deciding whether the correct way to solve a math problem was to use equation 1 or equation 2. We use the first equation to solve 100 math problems, and the result is the same as the one given by the device 88 times. We then use equation 2 and find the results correspond with those given by the device in all 100 cases.
We know one of them has to be wrong, so we look more carefully. We see that the 12 cases in which the first equation gets a result different from that of the second are really complex cases in which the general rules seem to have exceptions, so they’re the type of cases in which we’d expect an equation that’s merely a heuristic to get the wrong result. We also look at both of the equations and find that the first equation has much more intrinsic plausibility; it seems much more like the type of equation that we’d expect to be correct.
Additionally, the second equation has lots of subjectivity—some of the values for the constants are chosen by the person applying it. Thus, there’s some room for getting the wrong result based on bias and the assumptions of the person applying it.
We then see a few plausible-seeming proofs of the first equation and see that the second equation isn’t able to make sense of lots of different math problems, so it has to attach auxiliary methods to solve those. We then hear that previous students who have used the first equation have been able to solve very difficult math problems, ones that the vast majority of people (most of whom use the second equation) have almost universally gotten wrong. We also see that equation 2 results in lots of paradoxical-seeming judgments; we have to call in our professional mathematician friend (coincidentally named Michael Huemer) to find a way to make them not straightforwardly paradoxical, and the resolution he arrives at requires lots of implausible stipulations to rescue equation 2 from paradox. Finally, we find that in all 12 of the cases in which equation 1’s answers diverge from those given by the device, there are independent, fairly plausible proofs of the results gotten by equation 1, and they’re more difficult problems, the type that we’d expect the device to be less likely to get right.
In this case, it’s safe to say that equation 1 would be the equation that we should go with. While it sometimes gets results that we have good reason to prima facie distrust (because they diverge from the method that’s accurate 90% of the time), that’s exactly what we would expect if it were correct, which means all things considered, that’s not evidence against it. Additionally, the rational judgment in this case would be that equation 2—like many moral systems—is just an attempt to mirror our starting beliefs, rather than to figure out the right answer, forcing revision of our beliefs.
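To put rough numbers on the analogy (the figures are mine, purely illustrative): against a device that’s right 90% of the time, a genuinely correct equation should agree with it around 90 times in 100, and perfect agreement is actually suspicious.

```python
from math import comb

# If an equation is correct, it agrees with the 90%-accurate device
# exactly when the device is right, so the agreement count over 100
# problems is Binomial(100, 0.9).

def agreement_pmf(k, n=100, p=0.9):
    """P(agreeing with the device on exactly k of n problems)."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

print(agreement_pmf(88))   # ≈ 0.1: equation 1's 88/100 is unremarkable
print(agreement_pmf(100))  # ≈ 2.7e-5: equation 2's perfect score is
                           # thousands of times less likely, hinting it
                           # was fitted to the device rather than correct
```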
Our moral intuitions are the same—they’re right most of the time, but on the hypothesis that they’re usually right, we’d still expect there to be lots of cases in which our moral intuitions are wrong. When we carefully reflect on the things we’d expect if utilitarianism were correct, they tend to match exactly what we see.