Sidenote: I wrote this argument when I was younger and more foolish, so the syllogism has some unnecessary parts to it.
“I do not attack you, for if I were you, I wouldn’t want me to. I do not let you attack me, for if I were me (as I am), I wouldn’t let you attack me. I don’t pay you five thousand dollars for no reason, for if I were both of us, I wouldn’t want to.”
—Words of wisdom
The third argument that shall be presented for utilitarianism is based on universalizing egoism. It has 12 premises and is relatively technical. However, before discussing and defending the premises I shall provide a brief overview of the argument.
Many intuitions motivate utilitarianism. Perhaps the most fundamental is the intuition that certain things determine how well our lives go, and that, absent impacts on others, we should maximize those things. From this starting point, we imagine morality as being like what we do when we act in our own best interest, except taking everyone’s interests into account equally. The problem with one who does only what’s best for themselves is the “for themselves” part. Morality is what we would do if we acted in our own self-interest but possessed all possible interests. Torturing another is wrong because one who experienced the lives of both the torturer and the tortured would prefer not to be tortured, even if torturing increases their quality of life when they are the torturer. Racism is wrong because one who experienced the lives of both racists and victims of racism would not treat anyone’s interests as less valuable based on the color of their skin. If one had a skin condition that caused one’s skin color to change over time, one would have no rational reason to privilege one’s interests when one’s skin was a particular color.
Now for the premises.
1 A rational egoist is defined as someone who does only what produces the most good for themselves
2 A rational egoist would do only what produces the most happiness for themselves
3 Therefore only happiness is good (for selves who are rational egoists)
4 Only the types of things that are good for selves who are rational egoists are good for selves who are not rational egoists, unless those things have unique benefits that apply only to rational egoists or unique benefits that apply only to non-rational egoists
5 Happiness does not have unique benefits that apply only to rational egoists
6 Therefore only happiness is good for selves who are or are not rational egoists
7 All selves either are or are not rational egoists
8 Therefore, only happiness is good for selves
9 Something is good, if and only if it is good for selves
10 Therefore only happiness is good
11 We should maximize good
12 Therefore, we should maximize only happiness
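The logical skeleton of the argument can be sketched schematically. The predicate letters below are my own shorthand, not part of the original statement (RE(s): s is a rational egoist; G(x, s): x is good for self s; H(x): x is happiness; Good(x): x is good simpliciter), and each displayed step compresses several premises:

```latex
\begin{align*}
(3)\quad  & \forall s\,\forall x\,\bigl(RE(s) \rightarrow (G(x,s) \rightarrow H(x))\bigr)
          && \text{from (1), (2)}\\
(8)\quad  & \forall s\,\forall x\,\bigl(G(x,s) \rightarrow H(x)\bigr)
          && \text{from (3)--(7)}\\
(10)\quad & \forall x\,\bigl(\mathrm{Good}(x) \rightarrow H(x)\bigr)
          && \text{from (8), (9)}\\
(12)\quad & \text{we should maximize happiness, and only happiness}
          && \text{from (10), (11)}
\end{align*}
```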
This argument is valid, so the conclusion follows logically and inescapably from the premises—as William Lane Craig is fond of saying. So now let’s argue for the premises.
1
The first premise says that a rational egoist is defined as someone who does only what produces the most good for themselves. If you reject this premise, you are confused; it is a definition.
2
The second premise says that a rational egoist would do only what produces the most happiness for themselves. So, roughly: if someone were doing only what benefited themselves, they would maximize their own happiness.
This has several supporting arguments.
1 This seems to be common sense. When we see someone being selfish, their behavior seems roughly to track doing what makes them happy, even if it doesn’t make others happy. Other criteria for morality include rights and virtue, but it seems strange to imagine an egoist who maximizes their own virtue or minimizes the risk of their rights being violated. If they did the latter, they would spend all their time indoors, in a bunker, to minimize the risk of rights violations, or immediately commit suicide, to prevent the possibility of anyone ever violating their rights. It seems equally strange to imagine an egoist who merely tries to be as virtuous as possible, whatever the personal suffering it causes. When we analyze the concept of virtue, it seems clear that the motivation for virtue ethics is that virtue is good for others, or good in some other abstract sense, rather than good for the virtuous agent. If a person gets shot in the face, they have no reason to care whether the shot was fired by a person intentionally violating their rights. Covid deaths wouldn’t matter any more morally if covid had been made in a lab.
Interlude: a brief dialogue
(The man took out the sharp objects and began to torture the utilitarian.
“Why are you doing this?” asked the utilitarian.
“Well,” said the man. “A genie told me that if I did it he’d increase my virtue such that my overall virtue would increase.”
“But surely it’s unvirtuous to torture me?”
“Oh of course,” replied the torturer. “But it’s stipulated that my overall virtue would increase, and that’s what I care about being a virtue ethicist egoist.”
The utilitarian growled.
“Look,” said the strange virtue ethicist egoist. “I don’t like this any more than you do. In fact, I’m an empath, so I’m actually experiencing all of the agony that you’re experiencing and I find it super unpleasant. But I only care about maximizing my own virtue so…sacrifices must be made.” )
—A monologue from a pretty bad egoist.
“Safety first,” said the anti-Bryan Caplan. He jumped off the building, dying immediately, to ensure that no one ever violated his rights.
—Another pretty bad egoist albeit better than the first one because of the avoidance of torture.
2 When combined with the other arguments, we conclude that what a self-interested person would pursue is what should be maximized generally. However, it would be extremely strange to maximize the other criteria for morality. Virtually no one is a consequentialist about virtue or rights, holding that we should maximize virtue or rights protection.
“I tortured the little child to prevent those two torturers from doing torture because it increased net virtue.”
—Not a good virtue ethicist.
3 We can imagine a situation with an agent who has little more than mental states, and it seems like things can still be good or bad for them. Imagine a sentient plant who feels immense agony. It seems reasonable to say that the agony it experiences is bad for it, and that if it were rational and self-interested it would try to end its agony. However, in this imagined case, the plant has no ability to move, and all its agony is the byproduct of natural features. Given that its misery is the result of its own genetic formation, it seems strange to say its rights are being violated, and given its causal impotence, it seems unclear how it could have virtue. Yet despite that, it seems it could still have interests.
We can consider a parallel case of a robot that does not experience happiness or suffering. Even though this robot acts exactly like us, it would not matter morally, absent the ability to feel happiness or suffering. These two intuitions combine to form the view that beings matter if and only if they can experience happiness or suffering. This serves as strong evidence for utilitarianism.
One could object that rights, virtue, or other non-hedonistic goods are an emergent property of happiness, such that one only gains them when one can experience happiness. However, this is deeply implausible, because it would require strong emergence. As Chalmers explains, weakly emergent properties are reducible to the interactions of lower-level properties. For example, chairs are reducible to atoms: we need nothing more to explain the properties of a chair than knowledge of how its atoms function. Strongly emergent properties, by contrast, are not reducible to lower-level properties. Chalmers argues that there is only one strongly emergent thing in the universe: consciousness. Whether or not this is true, it illustrates the broader point that strong emergence is prima facie unlikely. Rights are clearly not reducible to happiness; no amount of happiness magically turns into a right. If you have to posit dozens of instances of a bizarre phenomenon that can’t be conclusively established to apply to anything, your theory is probably wrong.
4 Hedonism seems to unify the things that we care about for ourselves. If someone takes an action to benefit themselves, we generally take them to be acting rationally if that action brings them happiness. When deciding what food to eat, we act based on what food will bring us the most happiness, and it seems strange to imagine any other criterion for deciding upon food, hobbies, or relationships. We generally think someone is acting reasonably in pursuing a romantic relationship, given the joy it brings them, but if someone spent their days picking grass, we would see them as likely making a mistake, particularly if it brought them no happiness. Additionally, the rights that we care about are generally conducive to utility: we care about the right not to be punched by strangers, but not the right not to be talked to by strangers, because only the first right is conducive to utility. We care about beauty only if it’s experienced; a beautiful unobserved galaxy would not be desirable. Even respect for our wishes after our death is something we only care about if it increases utility. We don’t think we should light a candle on the grave of a person who’s been dead for 2,000 years, even if during life they desired that the candle on their grave be lit.
“I ate that ice-cream because it increased my virtue slightly even though I hated it.”
—A very weird dude.
5 As Neil Sinhababu argues, we reach the belief in pleasure’s goodness via the process of phenomenal introspection, whereby we think about our experiences and determine what they’re like. This process is widely regarded as reliable: organisms that were able to form accurate beliefs about their mental states were more likely to reproduce, and psychologists and biologists generally regard it as reliable.
6 Hedonism provides the only plausible account of moral ontology. While any attempt at accounting for moral ontology will be deeply speculative, hedonism’s account is at least plausible. It seems at least logically possible that there would be certain mental states that are objectively worth pursuing. Much as it’s possible to have vivid mental states involving colors that we as humans will never see, it seems equally possible that there could be mental states that provide us with reasons to pursue them. It’s not clear why desirable experiences are any stranger than experiences of confusion, dizziness, new imaginary colors, echolocation, or the wild experiences people have when they consume psychedelics. Given this, there seems to be a possible way of having desirable mental states. Additionally, there is a plausible evolutionary account of why we would have desirable mental states: if those mental states are apprehended as desirable, then organisms will find them motivating. This serves as a mechanism to get organisms to do things that increase fitness and avoid things that decrease fitness. On the other hand, there is no plausible evolutionary account of how rights could arise. How would beings evolve rights? What evolutionary benefit would accrue from evolving them? Only the desirability of happiness can be explained by the fine-tuning process of evolution.
7 An additional argument can be made for the goodness of happiness: a definition along the lines of “good experiences” is the only way to explain what happiness is. Happiness is not merely a desired experience, given that we can desire bad experiences; for example, one who thinks they deserve to suffer could desire suffering. Additionally, if we are hungry but never think about it or wish it would stop, it seems that our hunger still causes us to suffer, even though we never actually desire for it to stop. The non-hedonist could accept this but argue that it’s not sufficient to prove that happiness is desirable: it merely proves that we perceive happy mental states as desirable, not that the mental states are in fact desirable. However, if the only way we can explain what happiness truly is is by reference to its desirability, this counts in favor of hedonism. Other accounts require us to be systematically deluded about the contents of our mental experiences.
8 Only happiness seems to possess desire-independent relevance. In the cases that will be discussed with preferential accounts, it seems clear that agents are acting irrationally even if they’re acting in accordance with their desires. A person who doesn’t care about their suffering on future Tuesdays is being irrational. However, this does not apply to rights. A person who waives their right to their house and gives the house to someone else is not being irrational. A person who gives a friend a house key, allowing the friend to enter the house without asking each time, is similarly not acting irrationally. Rights seem to be waivable; happiness does not seem to be.
Non-hedonists could argue that hedonism doesn’t adequately explain the things that make people well off. For example, if a person is molested while unconscious without their knowledge, this seems bad for them, even if they don’t find out about it.
However, this intuition can also be explained as a case of caring about happiness maximizing heuristics. Generally, people being molested causes harm, so we have a strong intuitive reason to think it makes people worse off, even if we remove the hedonistic components.
We can consider another case with fewer real-world components to get a clearer picture. Suppose that every second, aliens molest every human a googol times. No human ever finds out about it. Additionally, suppose that the aliens start out experiencing enormous amounts of agony, which is diminished every time they molest a human, such that if they molested no humans, their suffering would be greater than the sum total of suffering in human history, but if they molest humans a googol times, they are able to have enjoyable mental states. In this case, it seems that what the aliens are doing is justified: no human ever finds out about it or is made hedonically worse off. However, if this is bad for people, we must conclude that the aliens’ actions are not merely bad; they are by far the worst acts in the history of the world.
If we believed the aliens’ molestation was one one-thousandth as immoral as a traditional molestation, and stipulated that there are as many molestations committed by humans as there are humans, we’d have to conclude that the aliens’ molestation, which makes no one’s experiences worse, is 10^97 times worse than all sexual assaults committed by humans. Thus, one should be indifferent between preventing all sexual assaults committed by humans and having a 1 in 10^97 chance of preventing the aliens’ actions.
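The arithmetic behind the 10^97 figure can be checked directly. The numbers here are just the thought experiment’s stipulations, not empirical figures: a googol alien acts per human per second, each weighted at one one-thousandth of a human molestation, and one human molestation per human.

```python
# Stipulations of the thought experiment (not empirical figures):
GOOGOL = 10 ** 100        # alien molestations per human, per second
DISCOUNT = 1000           # each alien act counts as 1/1000 of a human molestation

# Weighted badness of one second of alien activity, in
# "human-molestation equivalents" per human:
alien_equivalents_per_human = GOOGOL // DISCOUNT  # 10**97

# Stipulated: total human molestations equal the number of humans,
# i.e. one per human on average, so the population size cancels out.
human_molestations_per_human = 1

ratio = alien_equivalents_per_human // human_molestations_per_human
print(ratio == 10 ** 97)  # prints True
```

On this weighting, a single second of the aliens’ behavior comes out 10^97 times worse than every human assault combined, which is what drives the reductio.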
One might hold the intuition that the aliens’ actions are still wrong. However, it seems hard to hold that the aliens’ actions are the worst thing in the world by orders of magnitude, such that the Holocaust is functionally a rounding error.
If one holds the intuition that the aliens’ actions are wrong, we can change the situation to make the view even more counterintuitive. Suppose the aliens are composed entirely of air, so that when they molest people, they merely swap places with the air in front of people. Surely the air deriving sexual pleasure from entering us would not be the worst thing in the world.
One could object that what it is rational for one to pursue is the satisfaction of one’s own desires, rather than one’s own happiness. However, this view seems mistaken. It seems clear that some types of desires are worth pursuing and others are not. I shall argue that only desires for happiness are worth pursuing.
1 Imagine several cases of pursuers of their own desires.
A A person who is a consistent anorexic and has a preference for a thin figure, even if it results in them starving.
B A person who desires spending their days picking grass, and would prefer that to having more happiness.
C A person who has a strong preference for being in an abusive relationship, even if it minimizes their happiness.
D A person who is indifferent to suffering that occurs on the left side of their body. They experience left-side suffering exactly the same way as right-side suffering and apprehend it as undesirable, but they have an arbitrary aversion to right-side suffering infinitely greater than their aversion to left-side suffering.
It seems intuitively like these preference maximizers are being irrational. These intuitions seem to be decisive.
2 What should we do about a person with infinitely strong preferences? Suppose someone has a strange view, that being within 50 feet of other humans is unfathomably morally wrong. They would endure infinite torture rather than be within 50 feet of other humans. It seems like having them be around other humans would still be less bad than inflicting infinite torture upon them. Infinite preferences seem to pose a problem for preference utilitarianism.
3 What about circular preferences? I might prefer apples to bananas and bananas to oranges, but oranges to apples. There is no coherent way to maximize the satisfaction of such an intransitive set of preferences.
4 If preferences are not linked to mental states, then we can imagine strange cases where things that seem neither good nor bad for an agent are good or bad for them according to preference utilitarianism. For example, imagine a person, trapped in an inescapable cave, who has a strong preference for their country winning a war. It seems strange that their side losing the war would be bad for them, despite their never finding out about it. It also seems strange to imagine that Marx was made worse off by communism not being implemented after his death, or that Milton Friedman is made worse off by regulations that pass after his death. It seems equally strange to imagine that slave owners were made worse off by racial equality occurring long after their deaths.
5 Imagine a dead alien civilization that had a desire that there be no other wide-scale civilizations. If the civilization had enough members, preference views would imply that humanity has an obligation to go extinct to fulfill the aliens’ preference, despite none of them ever knowing that their preference was fulfilled.
One might object that we should look at people’s rational preferences, rather than the preferences that they in fact have. However, without a further account, this is circular: when analyzing what a rational self-interested person would desire, merely invoking their rational self-interest is not helpful. The hedonic criterion, by contrast, explains why the consistent anorexic, the grass picker, and the one who desires an abusive relationship seem to harbor irrational preferences. Similarly, preferences for states of the world after one dies seem irrational, for one cannot benefit from them if one is dead. It’s hard to imagine a rational preference that does not link back to happiness. To hold this view, one would have to give a non-hedonistic account of rational preferences.
One might additionally argue that what matters is people’s mental experience of having their preferences fulfilled. Yet this seems not to be the case. Suppose a person got addicted to a drug that is free, has no negative side effects, and is easily accessible. Each day they would have a preference for consuming the drug rather than not consuming it, yet it seems hard to imagine that consuming it would be good for them, assuming it does not make them happy at all.
Additionally, imagine a case where, for most of a person’s life, they have had a strong desire to die a Democrat. However, on their deathbed they convert to conservatism, knowing that their desire to be a registered Democrat upon death will not be fulfilled. It seems it would be good for them to register as a Republican if it made them happy, even if it reduced their overall preference fulfillment.
One might object that the drug case involves a negative preference rather than a positive one: the person has no positive preference for the drug, merely a preference for not missing it. Yet this distinction seems hard to justify. In the case of the drug, they would not be harmed by not consuming it; their desire would merely go unfulfilled. It nonetheless seems that the drug would not benefit them.
Conversely, it seems clear that other types of preferences are good to create. Creating a preference for reading books that brings one immense joy is a good thing. Preferences should be created if and only if they improve the happiness of the subject. This refutes the negative-preference objection: if one had a strong preference for not going a day without reading a book, and the books one read brought one great joy, it would be good to create the preference, even though it is a negative preference.
One might adopt objective list theory, according to which what matters is fulfillment of a list of objectively good things, including happiness, friendship, honor, health, and others. This view runs into several issues.
First, it’s difficult to give an adequate account of what makes something part of the objectively good list. Hedonism is monist, saying that there’s only one type of thing that is good. Objective list theories problematically posit a series of unconnected goods. This is less parsimonious and fails to provide an adequate account of how things come to be good. It seems a priori more plausible that there would be some good experiences than that there would be an unrelated bundle of good things untied to experience.
Second, objective list theories can’t account for why things are only good for sentient beings. It seems conceivable, on objective list theories, that non-sentient beings could realize items on the objective list. Objective list theories just say that things are good in virtue of being part of an objective list; there’s no necessary connection between a being’s capacity for happiness or suffering and its ability to realize items on the list.
Third, objective list theories can’t account for why all the things that are on the objective list are generally conducive to happiness. Virtue, friendship, love, and decency are generally conducive to happiness.
Fourth, objective list theories are elitist (Crisp 2001), holding that things can be good for people even if they neither want them nor derive any positive experience from them. It’s counterintuitive that an unenjoyable experience that one doesn’t want can be good for them.
Fifth, all of the things on the objective list seem good only if they’re generally conducive to happiness. We might hold that knowledge is good, but it would be strange to suppose that arbitrary facts that benefit no one are good. The world would not be a better place if we all knew whether the number of particles in the universe is even or odd. Friendship might be good, but only if the friends are made mutually better off.
Sixth, objective list theories would have trouble accounting for strange counterfactual alien scenarios. We can imagine an alien civilization that derives its primary satisfaction from producing helium. This alien civilization reproduces asexually when helium is produced and cares about knowledge only to the extent that it maximizes helium. These aliens find friendship and love deeply unpleasant: strange, deviant things that only bizarre subsets of the population care about. Their philosophers adopt an objective list theory whose only member is helium maximization, and they see it as an absurd implication of hedonic views that such views tolerate bizarre deviations from the good like love, friendship, and (non-helium-related) knowledge.
The aliens do not actively detest love or friendship; they merely find them far less good than helium maximization. They view love and friendship the way one might view watching paint dry. Alien Hume famously said that it is not contrary to reason to prefer the destruction of all helium to the scratching of one’s finger.
It seems clear that, for these aliens, helium maximization is the thing that they should do. They should not pursue friendship if they don’t want it. Nor should they pursue love. If we could convince the aliens that they had an obligation to maximize some non-helium related thing, it would be bad to do so, for they’d be far less happy.
Perhaps you don’t share the helium intuition, thinking that this practice is so strange that it can’t actually be good. However, if we think broadly, many of the things we think are good are no less arbitrary than helium maximization. Sex is good, yet deriving pleasure from sex is no stranger than deriving pleasure from helium. Music is good, despite being arbitrarily tuned noise; presumably listening to loud clanging would not be good, to the extent that it is not enjoyable. It seems nearly impossible to give an account of music according to which it’s part of the objective list but listening to frying pans clanging together is not.
We might adopt the view that what matters morally is that one derives happiness from things that are truly good. This would avoid the counterintuitive conclusion of utilitarianism that we should plug into the experience machine, where we would have a simulated life, believed to be real, with more happiness. It also avoids Sidgwick’s objection that all the things that seem to be rights or virtues are happiness-maximizing heuristics. However, this view has a variety of issues.
The previous objections to objective list theories still apply, and there are compelling counterexamples. Suppose one gained infinite joy from picking grass; surely picking grass would make them better off. Suppose a person were in a simulated torture chamber; surely that would be bad for them, and unless there’s some fundamental asymmetry between happiness and suffering, the same principle applies to happiness: a simulated experience of happiness would still be a good thing. Additionally, it’s unclear how this view would account for composite experiences of happiness from two different sources. Suppose that someone gains happiness from the combination of gaining deep knowledge and engaging in some strange sexual act. Would the happiness they got from that act be morally relevant? If so, then any “impure” pleasure would count as morally good so long as it related to at least some good act, such as knowledge acquisition or careful introspection. Additionally, suppose that one was in the experience machine but exercised wisdom and virtue there. Would their happiness then be good for them? This shows the practical troubles with such a view.
If we say that happiness is not at all good absent virtue, then it would be morally neutral to make people already in the experience machine achieve far less happiness than they otherwise would. This is a very counterintuitive view. Additionally, if we accept this view, we must accept one of two further conclusions. If the suffering of the people in the experience machine is morally bad but their happiness is not morally good, then giving them five million units of happiness and one unit of suffering would be morally bad, because it brings about something bad but nothing good. This is a very difficult pill to swallow. However, if neither their happiness nor their suffering is morally relevant, then it would be morally neutral to cause them to suffer immensely (setting aside issues of consent).
This broader point can be illustrated with an example. Imagine a twin earth, identical to our own, but where no one had a preference for not being in the experience machine. To them, only their mental experiences of things matter. It makes no difference, to them, whether they are in the experience machine or not. It seems that in this world, there would be nothing wrong with plugging into the experience machine. The only reason plugging into the experience machine seems objectionable is because most people have a preference for not plugging into the experience machine, and find the thought distressing.
Additional objections can be given to the experience machine (Lazari-Radek 2014). Several factors count against our intuitions about it. First, there is widespread status quo bias. As Singer explains: “Felipe De Brigard decided to test whether the status quo bias does make a difference to our willingness to enter the experience machine. He asked people to imagine that they are already connected to an experience machine, and now face the choice of remaining connected, or going back to live in reality. Participants in the experiment were randomly offered one of three different vignettes: in the neutral vignette, you are simply told that you can go back to reality, but not given any information about what reality will be like for you. In the negative vignette, you are told that in reality you are a prisoner in a maximum-security prison, and in the positive vignette you are told that in reality you are a multi-millionaire artist living in Monaco. Of participants given the neutral vignette, almost half (46 per cent) said that they would prefer to stay plugged into the experience machine. Among those given the negative vignette, that figure rose to 87 per cent. Most remarkably, of those given the positive vignette, exactly half preferred to stay connected to the machine, rather than return to reality as a multi-millionaire artist living in Monaco.” This strongly counts against the conclusion that we have an intrinsic preference for reality. Additionally, there is a strong evolutionary reason for organisms to have a preference for actually doing things in the real world, rather than wireheading.
A final point: the preference for the real can itself be explained on a utilitarian account, since preferring the real tends to maximize happiness. In cases where it does not, the intuition seems to fade. We see nothing wrong with plugging into a virtual reality game for a short time, because that is something we have familiarity with, and it tends to maximize happiness.
The objective list theorist could argue against hedonism based on bad sources of pleasure. For example, even if a crowd of jeering spectators derived immense pleasure from one person being tortured, it would still be bad to torture the person.
However, hedonism is able to accommodate the intuition against bad pleasures. Every pleasure that we think of as a bad pleasure is not conducive to happiness generally: sadistically torturing people does not generally maximize happiness.
Additionally, this principle has compelling counterexamples. Consider a man called Tim. Tim derives immense satisfaction from watching scenes that appear to depict torture. Tim would never torture anyone and abhors violence; in fact, he sometimes feels guilty about his strange desires and donates vast amounts of money to charities. Tim also makes sure that the content he watches does not actually involve people being tortured, spending hours searching for content that looks like torture but involves none. Additionally, we can suppose that this is the highlight of Tim’s life: he enjoys it so much that without it his life would be miserable, and despite suffering clinical depression, he finds the experiences so enjoyable that he regards his life as generally good. It seems that in this case Tim is truly made better off by the joy he derives from this sadistic content.
However, suppose additionally that, despite Tim’s incredibly careful selection process, Tim is deceived by an evil demon who manipulates the laws of physics so that people actually are brutally tortured, despite any reasonable observer concluding that no one was truly being tortured. It seems that in this case, while the person who is tortured is made worse off by the torture, Tim is made better off by it. All else equal, making Tim enjoy viewing the torture less (assuming he’d view the same amount of torture) seems bad.
Imagine another case of an alien civilization who views the suffering of humans. This alien civilization starts in a state of vast agony, yet becomes less miserable each time they watch a human suffer. If they view all human suffering, their overall hedonic state rises to zero, from a starting point significantly worse than being boiled alive. Again, it seems like the aliens' sadism is not a bad thing.
If we think that enjoying the suffering of others is actively bad, independent of the suffering of others, then it would be morally good to make the aliens unable to see suffering. This is deeply implausible.
One might object that deriving pleasure from sadism is morally neutral, neither good nor bad. However, in both scenarios posited, it seems obvious that the world is better because the aliens enjoy suffering enough to not be as miserable as beings being boiled alive. If the only way for the aliens to relieve their unfathomable agony was to torture one person, this seems justified.
We can imagine another case of a person, Wyatt, who takes immense satisfaction in eating meat because he knows that the animal suffered. He feels deeply guilty about this fact, but cannot enjoy eating meat unless he knows that the animal suffered. Wyatt continues to eat meat, but donates to charities that help animals because he feels guilty. In this case, it seems that Wyatt enjoying the meat, assuming it won’t cause him to eat any more meat, is not a bad thing. To the extent that Wyatt enjoys meat because he knows about the suffering, while others enjoy meat that causes enormous suffering but don’t care whether they know about the suffering, it’s hard to see how Wyatt’s meat enjoyment is any worse than any of ours. Much like there seems to be no morally relevant difference between a person who tortures others because they like when others suffer and one who likes the taste of people after they’ve been tortured, there’s no difference between one who enjoys the suffering of the animals that they eat and one who merely enjoys the taste of the animals.
If Wyatt is morally no different from other people when he eats meat, then either Wyatt’s sadistic meat eating is morally good, or the joy that most people get from eating meat is morally neutral. However, the second option is deeply implausible. If meat tasted less good, but people ate the same amount of meat, that would be a worse world. And if sadistic pleasure can be good, then enough sadistic pleasure can outweigh the badness of torture.
Additionally, there are many cases where people enjoy the suffering of others, which are not found objectionable. If the parents of a murder victim derive satisfaction from knowing that the murderer is rotting in prison, it wouldn’t be desirable to deprive them of that satisfaction.
Additionally, we can imagine a world exactly like our own, except humans would never be happy in their lives unless they, upon their sixth birthday, slap someone in the face and revel in the enjoyment. In this case, all of their enjoyment is coming from the fact that they slapped someone, but it wouldn't be good to condemn everyone to never being happy.
Additionally, we can imagine a scenario where every person will be given an extra year of happy life by torturing one person. In this case, their happy life is only existing because of the torture, but this case seems clearly justified.
Our anti-sadism intuition comes from the fact that sadism is not conducive to happiness. However, if it were conducive to happiness, it would be justified. In cases like schadenfreude, where we derive joy from the suffering of others in ways that make things go best, we don’t find it counterintuitive.
The objective list theorist could argue that what matters is deriving pleasure from appreciating the good. Thus, in the case of deriving unrelated pleasure from good things, this wouldn’t increase well-being, because the pleasure is not related to appreciation of the good. On this account, the formula for determining how much one is benefited by an event is the following:
Well-being (that which is non-instrumentally good for an individual) from event Q = Pleasure + (Pleasure x objective list based value of Q). Thus, the value of pleasure from being in love or from friendship is greater than the value of pleasure from things like eating ice cream. To illustrate, eating ice cream produces well-being of P, where P is the raw pleasure of the experience, because eating ice cream is not part of the objective list. Thus, for eating ice cream (W is well-being, P is pleasure, O is objective list based value): W = P + P x 0 = P. This view, however, runs into issues.
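The formula can be sketched in a few lines of code; this is a minimal illustration, and the specific pleasure values used below are assumptions for the sake of example, not taken from the text.

```python
# The objective list theorist's well-being formula from the text:
#   W = P + P * O
# where P is the raw pleasure of an event and O is the event's
# objective list based value (0 for things not on the list).

def well_being(pleasure, objective_value):
    """Well-being produced by an event on the hybrid objective-list account."""
    return pleasure + pleasure * objective_value

# Eating ice cream is not on the objective list, so O = 0 and W = P.
print(well_being(10, 0))  # prints 10
# An event with objective list value 1 (e.g. a deep conversation):
# the same raw pleasure counts for double.
print(well_being(10, 1))  # prints 20
```

Note how the multiplicative structure means the objective list value acts as a bonus scaled by pleasure, which is what drives several of the objections below.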
1 Suppose that having a deep conversation with someone has an objective list fulfillment value of 1 and brings a great deal of pleasure, including the pleasure of an epiphany. If this were true, then having the epiphany while mulling over the ideas with someone would be twice as valuable as having the epiphany alone. It seems strange to think that whether the epiphany was a result of talking with the other person is relevant to how good the epiphany is.
2 There seems to be trouble with disentangling which types of pleasures are generated by appreciating someone else. Suppose that every time a person interacts with another they begin laughing uncontrollably. In this case, is their happiness fulfilling the objective list?
3 It seems to result in equal pleasures and pains not offsetting. Suppose that every time I see someone I get a headache, such that the overall quality of interacting with them is neutral. The pleasure precisely cancels out the badness of the headache. In this case, it seems strange to say that it's actively good to interact with them. However, on an OLT account, it seems it would be, because the pain's badness is left unchanged by the interaction, but the pleasure's goodness increases because I'm interacting with another person.
4 This seems to run into strange conclusions relating to a modified utility monster. Suppose that there’s a being who derives a trillion times as much pleasure from all experiences as the average human. Additionally, suppose that this modified utility monster has the property of universal experiencing: after it dies, it will have all the experiences that it did not have in life. For example, if the modified utility monster eats chocolate ice cream and then immediately dies, after death it would have every possible experience except that of eating that precise type of ice cream. However, if the modified monster dies without ever being in love, then in the afterlife it will derive pleasure from the experience of being in love, but will check “being in love” off its objective list. Suppose we think that the objective list based value of being in love is 1, such that half of the benefits of being in love come from the happiness of it, and half come from the intrinsic value of love. Now, we have the option of sacrificing a billion people to allow the utility monster to find love. In this case, being in love will not change the composition of the utility monster’s experiences; every positive experience that the utility monster has from being in love will be an experience that it is deprived of when it dies. However, on objective list theories, the value of the utility monster being truly in love with an actual person is twice as great as the value of having the experience of being in love without actually being in love with a real person. Thus, the objective list theory would entail that it would be good to sacrifice a billion people to cause the utility monster to find love, even though the utility monster’s subjective experience will be fully unchanged. This is deeply implausible.
5 We can create a newly modified utility monster. This utility monster experiences far more happiness than all humans combined. This utility monster does not have all experiences after its death. However, this utility monster is a hedonist. Its view is that love is only instrumentally valuable. Thus, when it is in love, it is not appreciating the love for its own sake--it’s only appreciating the pleasure it gets from being in love. Thus, on objective list theories, the happiness it gets from being in love is no more valuable than the pleasure it gets from eating ice cream, because it is not getting pleasure from appreciating things that are truly good.
Suppose additionally that one has the option to sacrifice a billion people to convince the utility monster to be an objective list theorist. The utility monster is currently in love, and the happiness it gets from being in love is greater than the sum total of happiness for the aforementioned billion people. On objective list theories, it seems that sacrificing a billion people to change the mind of the utility monster would be good, for it would raise the value of the pleasure by causing it to be experienced as a result of appreciation of love, even though the total amount of happiness doesn’t change. In fact, we can even suppose that becoming convinced of the objective list theory causes the utility monster to become distraught, bringing about half as much suffering as the pleasure they get from being in love. Nonetheless, objective list theories seem to hold that sacrificing a billion people to make its hedonic state worse, but to convince it of objective list theory would be good, because the extra value of the appreciation of love outweighs both the deaths of a billion people and the decrease in happiness from the utility monster becoming distraught over being convinced of objective list theory. This is similarly deeply implausible.
6 Suppose that Jim and Mary are married. Jim is out of town, but is going to watch a movie. Mary is also going to watch a movie. However, the next day Jim shall return, and they shall watch a movie together. Consider the following happiness distributions of movies.
Movie A brings Jim 10 units of happiness and Mary 10
Movie B brings Jim 150 units of happiness and Mary 1
Movie C brings Jim 1 unit of happiness and Mary 150
While watching the movies, both Jim and Mary appreciate the presence of the other and bond over the movies, such that watching a movie with the other increases their enjoyment of it by 5 units each. It seems intuitive that the optimal arrangement would be for Jim to watch movie B while out of town, Mary to watch movie C while Jim is out of town, and then for them to watch movie A together. However, objective list theorists cannot accommodate this intuition.
Suppose that an objective list theorist says that, in this case, the objective list based value of watching a movie with one’s spouse is .5, such that two thirds of the value of watching the movie comes from the pleasure it brings, and one third comes from the value of watching it with one’s spouse. On this account the total value of the intuitive arrangement, where Mary watches movie C by herself, Jim watches movie B by himself, and they watch movie A together, is 150 + 150 + 1.5 x ((10+5) x 2) = 345. However, if Jim watches B by himself, Mary watches movie A by herself, and they watch movie C together, the total value would be 150 + 10 + 1.5 x (155 + 6) = 401.5. Thus, on this account the world is better if Jim and Mary watch together a movie that only one of them enjoys very much, rather than each watching independently the movie that they individually enjoy, and together the movie that they both enjoy. This is deeply implausible.
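The arithmetic here can be checked directly. Below is a minimal sketch using the values given in the case (an objective list value of .5 for watching with one’s spouse, and a 5-unit enjoyment bonus for watching together); the function names are my own.

```python
# W = P + P * O, the objective list theorist's formula used in this section.
def well_being(pleasure, objective_value):
    return pleasure + pleasure * objective_value

O_SPOUSE = 0.5  # objective list based value of watching with one's spouse
BONUS = 5       # extra enjoyment each person gets from watching together

# (Jim's pleasure, Mary's pleasure) for each movie, watched alone.
movies = {"A": (10, 10), "B": (150, 1), "C": (1, 150)}

def total_value(jim_alone, mary_alone, together):
    """Total value of an arrangement: two solo viewings plus one joint one."""
    jim_solo = well_being(movies[jim_alone][0], 0)    # solo: O = 0
    mary_solo = well_being(movies[mary_alone][1], 0)
    j, m = movies[together]
    joint = (well_being(j + BONUS, O_SPOUSE)
             + well_being(m + BONUS, O_SPOUSE))
    return jim_solo + mary_solo + joint

# Intuitive arrangement: Jim watches B, Mary watches C, A together.
print(total_value("B", "C", "A"))  # prints 345.0
# Arrangement the theory favors: Mary watches A, C together.
print(total_value("B", "A", "C"))  # prints 401.5
```

The theory prefers the second arrangement because the joint-viewing multiplier is applied to Mary’s 155 units of pleasure from movie C, swamping the fact that Jim barely enjoys it.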
One might object that the well-being they get from watching the movie is not truly a result of appreciating the other. However, it is hard to draw a distinction between this and other cases. Suppose that Jim and Mary hold hands during the movie and take great comfort in the presence of the other. The extra value they get from watching the movie seems clearly derived from the appreciation of the other. Additionally, suppose that when they watch the movie together, the only enjoyment they get relates to focusing on the other. They ignore the movie and simply focus on each other, deriving precisely the same amount of pleasure as they would have from watching the movie. In this case, it still seems like it would be better to watch A together, and each separately enjoy their preferred movie between B and C.
7 Suppose that Tim and Sue are planning to have a date. Tim is a big fan of the philosopher Derek Parfit, who advocates for objective list theories. Parfit is planning to give a talk on Thursday arguing for objective list theory. Tim is currently a hedonist but thinks it’s very likely that Parfit will convince him that hedonism is false, and objective list theory is correct. Tim and Sue can either have their date on Wednesday or Friday. Suppose additionally that being convinced of objective list theory would not increase Tim’s happiness at all. However, if he became convinced of objective list theory, the happiness he’d get from his interactions with Sue would come from appreciating his relationship with her as being part of the objective list, rather than merely as a tool for increasing mutual happiness. It seems an objective list theorist would think it rather important that the date is scheduled for Friday, because it increases the value of the date. If Tim becomes an objective list theorist, the value of the date would increase dramatically (its value would approximately double). In fact, on objective list theorists' accounts, it seems that convincing hedonists of objective list theory would be extremely important, and dramatically increase the value of their lives. This is deeply implausible. Peter Singer’s marriage is no less valuable for him being a hedonist about value.
8 Suppose we adopt a particular account of consciousness, according to which one’s experience of an object or event is wholly produced by their brain, and does not truly observe the external world. Instead, the brain receives inputs from the external world and hallucinates a conscious experience that best makes sense of the inputs. Regardless of the plausibility of this account of consciousness, it seems that objective list theory would hold that it would have significant normative implications. If one’s experience of being in love with their wife is truly a mental image generated entirely by their brain in a way that makes sense of the signals it’s receiving from the external world, then when one appreciates their loved one, they’re truly only appreciating the joy they get from the mental hallucination. On objective list theories, this would seem to dramatically undercut the value of all things, for being in love is truly just a mental hallucination that roughly tracks what’s going on in the external world. However, this is implausible. Whether or not we are directly observing things as they are, or our brains are making up the world that we see, in a way that roughly mirrors the real world but is understandable to us should not undercut the value of being in love.
To clarify the case, suppose that when one sees someone with whom they’re in love, the only inputs from the person that they’re truly seeing as they actually are in the real world are a series of photons. The rest of the person is hallucinated by their mind, in a way that roughly tracks what the other person is actually doing, but is not totally identical. If this is true, then consciousness is very much like the experience machine, with our mind creating an experience machine-esque experience in ways that roughly track what’s happening in the real world.
One might object that the well-being we’d experience if this were the case would still fulfill our objective list because it roughly tracks the real world. Thus, it is close enough to actual perception of the good things for it to count as appreciating the good. However, this reply runs into several problems.
First, the objective list theory holds that even if events roughly track what is going on in the real world, as long as they’re not in the real world, they don’t possess the value. Objective list theorists would presumably object to the following arrangement: one’s actual self is placed in an experience machine, where their experiences are identical to what they would be in the real world, while a robot version of themself, which acts exactly as they would but is not conscious, replaces them in the world. Even if the experience machine caused one to experience a life exactly like the one they were living prior to entering it, and the robot twin acted just as they would, such that the world they hallucinated perfectly tracked the life they were previously living, an objective list theorist would presumably find that objectionable.
Additionally, we can modify the case slightly. Suppose that the version of the world that they hallucinate is subtly different from the actual world. For example, in the actual world their wife is relatively physically unattractive, yet their hallucination of the world perceives her as very attractive. Additionally, the loudness of sounds in the world they hallucinate often fails to track their true loudness. Suppose their wife speaks at a very high volume, such that direct perception of the world would reveal her to be very loud; however, the version of reality that their brain hallucinates has her appear quieter than she actually is. In this case, the reality that they’re experiencing does not truly track the external world. However, this still does not seem to have any significant normative implications. The objective list theorist, if they hold that one still gains non-hedonic value from true love under this theory of consciousness but gains no non-hedonic value from true love in the experience machine, would have to suppose that there’s some threshold at which one’s hallucinations become enough like reality that one gains value from one’s objective list. This, however, seems implausible. A schizophrenic who believes their neighbor is an angel sent by God, with whom they’re in love, does not gain value in proportion to how realistic their view is.
To illustrate this point more, suppose that my true self is a million miles away. I am a brain in a vat, but I experience everything exactly as it would affect me if I were actually generated by the brain inside the body that I experience being part of, and the choices I make have exactly the same effects as they would if I were actually generated by the brain inside the body I experience being a part of. This case is exactly like the case of my actual self, except the thing generating my subjective experience is merely located out of my body and far away. This doesn’t seem to possess any significant normative implications.
Now suppose we accept epiphenomenalism--the view that consciousness is causally inert. Thus, while I’m still the brain in the far away vat, my thoughts don’t cause events. Instead, the brain waves in the body that I feel I’m part of generate both my thoughts and the actions that the body takes. Surely epiphenomenalism wouldn’t undercut the value of objective list fulfillment. However, in conjunction, these make it so that I’m a disembodied brain millions of miles away that’s merely experiencing what life would be like inside my body, but that is causally disconnected from my body. The objective list theory holds that being a causally inert consciousness is not enough to undercut the value of love, but that being a causally inert consciousness a galaxy away, whose experiences are projections of information gathered by observing brain waves, does undercut the value of true love. This is deeply implausible.
It seems like if we accept
1 Having the true source of our consciousness be far away is morally irrelevant
2 Having the true source of our consciousness be causally inert is morally irrelevant
We’d also accept that having the true source of our consciousness be both far away and causally inert is morally irrelevant. However, this undercuts the view that it is morally bad to switch out a person for a robot copy of themself in a world comprised entirely of robots, such that they don’t realize they’re interacting with non-sentient robots, their loved ones in the real world don’t realize they’ve been replaced by a robot, and the total composition of their experiences does not change. This case is fundamentally identical to the aforementioned case. In both cases their consciousness does not causally interact with the world, but they receive information as if it did. Additionally, in both cases the source of their consciousness is very far away and receives inputs identical to the information they’d receive in the real world, without truly receiving information from the real world. The only difference between the two cases is that in the robot world the information is transferred by robots, while in the other world it’s transferred by faster-than-light technology that projects information to their far-away brain.
9 It seems that objective list theories would have to hold that, if one had the ability to cause imaginary friends had by four year olds to exist, it would be extremely good to do so, even if the overall composition of their mental states is neutral, because it would make the relationships that the four year olds had with them morally good. This holds even though the stipulated imaginary friends would disappear the moment that small children grow out of them. However, this is implausible--it does not seem like a tragedy that imaginary friends that small children interact with do not exist.
This premise took a very long time to defend—future posts will argue for the other premises. Can’t have all of the defenses of the premises be in this post. We’ll also see in future posts that there are many independent plausible ways of deriving utilitarianism. This serves as very good evidence. One rarely finds several different arguments for a false conclusion—each with super plausible premises. When arguments seem to converge on a conclusion it’s likely to be true. This was a long one, but thanks everyone for sticking it out. The road is long and twisted, but it leads inescapably to utilitarianism.
Premise 6 is false. The sum total of Premises 1-5 establishes that happiness is *a* good thing, not that it is the *only* good thing.