My Issue With The Way Lots Of Utilitarians Argue For Utilitarianism
Against Huemerless utilitarianism
Philosophy rarely proceeds by way of knockdown deductive arguments. Instead, a better way to proceed is to compare theories holistically and abductively, as explanations of phenomena. Thus, as a cautionary note to other utilitarians, I'd recommend that, rather than attempting to provide a single knockdown deductive argument, they proceed abductively and compare a wide range of verdicts. This is probably the biggest evolution in my thinking over the years.
Given that a deductive argument is only as intuitive as the conjunction of all of its premises, even the deductive arguments ultimately proceed by analyzing the intuitive plausibility of certain notions. And if a premise is fairly intuitive but entails dozens of hideously unintuitive things, that premise should likely be rejected.
To illustrate with an example, the simplest application of Harsanyi's (1975) argument would entail average utilitarianism, though it can certainly be employed to argue for total utilitarianism if we include future possible people in our analysis. However, the reason I reject Harsanyi's argument as establishing average utilitarianism is not that I think the argument only trivially provides greater support for average utilitarianism than for total utilitarianism. Instead, it's that average utilitarianism produces wildly implausible results. Consider the following cases.
We have 1 billion people with utility of -100^100^100^100^100^100^100^100 each. You have the choice of bringing an extra 100^100^100^100^1000 people into existence with average utility of -100^100^100^100^100^100^100^99. Should you do it? Doing so would increase average utility, yet it still seems clearly wrong, about as clearly wrong as anything. Bringing into existence miserable people who each experience more suffering every second than the total suffering of the Holocaust is not a good thing, even if the people who already exist are more miserable still.
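To see why the average rises, it helps to check the arithmetic directly. If there are $n$ existing people at mean utility $\bar{u}$ and we add $m$ newcomers at mean utility $v$, the new average is

$$\bar{u}_{\text{new}} = \frac{n\bar{u} + mv}{n+m},$$

which exceeds $\bar{u}$ exactly when $v > \bar{u}$. Since -100^100^100^100^100^100^100^99 is less negative than -100^100^100^100^100^100^100^100, the newcomers sit above the existing mean, so the average rises no matter how hellish every life involved is.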
There is currently one person in existence with utility of 100^100^100. You can bring a new person into existence with utility of 100^100^10. Average utilitarianism implies not only that you should not do this, but that doing it would be the single worst act in history, orders of magnitude worse than the Holocaust in our world.
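The magnitudes here are worth spelling out. Exponentiation associates to the right, so 100^100^10 means 100^(100^10), which is unimaginably smaller than 100^(100^100). The new average is therefore

$$\bar{u}_{\text{new}} = \frac{100^{100^{100}} + 100^{100^{10}}}{2} \approx \frac{100^{100^{100}}}{2},$$

so the act cuts average utility roughly in half, a decline astronomically larger than the harm of any atrocity in actual history. That is what average utilitarianism must call the worst act ever performed.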
You are in the garden of Eden. There are 3 people, Adam (utility 5), Eve (utility 5), and God (utility 10000000000000000000000000000000000000000000000000000000000000000000000000000000000000^1000000000000000000000000000000000000000000000000000000000000000000000000000^100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000). Average utilitarianism would say that Adam and Eve existing was a tragedy and they should certainly avoid having children.
You’re planning on having a kid with utility of 100^100, waaaaaaaaaaaaaaaaaay higher than that of most humans. However, you discover that there are oodles of aliens with utility much higher than that. Average utilitarianism would say you shouldn’t have the child, because of the existence of faraway aliens whom you’ll never interact with.
Average utilitarianism would say that if fetuses had a millisecond of joy at the moment of conception, this would radically decrease the value of the world, because those barely positive fetal lives would bring down average utility.
Similarly, if you became convinced that there were lots of aliens with bad lives, average utilitarianism would say you should have as many kids as possible, even if they had bad lives, so long as their lives were less bad than the average, in order to bring up the average.
These cases are why I reject average utilitarianism. If total utilitarianism had implications as unintuitive as those of average utilitarianism, I would reject it as well, despite the deductive arguments. The deductive arguments count strongly in favor of the theory, but they would not be enough to overcome its hurdles if it were truly unintuitive across the board.
Utilitarians will often try to discredit intuitions as a way of gaining knowledge (e.g., Sinhababu, 2012). They will often point out the poor track record of intuitions. While this does mean that intuitions are less reliable than they would otherwise be, it does not mean we should simply ignore them. Absent relying on what seems to be the case after careful reflection, we could know nothing, as Huemer (2007) has argued persuasively. Several considerations show that intuitions are indispensable to having any knowledge and doing any productive moral reasoning.
Any argument against intuitions is one that we’d only accept if it seemed true after reflection, which once again relies on seemings. Thus, the rejection of intuitions is self-defeating: we wouldn’t accept it if its premises didn’t seem true.
Any time we consider a view that has arguments both for and against it, we can only rely on our seemings to conclude which arguments are stronger. For example, when deciding whether or not God exists, most would grant that there is some evidence on both sides. The probability that something exists is higher on theism than on atheism, for example, because theism entails that something exists, while the probability of God being hidden is higher on atheism, because the probability of God revealing himself on atheism is zero. Thus, there are arguments on both sides, so any time we evaluate whether theism is true, we must compare the strength of the evidence on both sides. This will require reliance on seemings. The same broad principle is true for any issue we evaluate, be it religious, philosophical, or political.
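One way to make this comparison precise is Bayesian: the posterior odds of theism (T) over atheism (A), given the evidence E, are the prior odds multiplied by the likelihood ratio,

$$\frac{P(T \mid E)}{P(A \mid E)} = \frac{P(T)}{P(A)} \cdot \frac{P(E \mid T)}{P(E \mid A)},$$

and every probability on the right-hand side is something we can only assign by consulting what seems true on reflection.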
Consider a series of things we take to be true that we can’t verify: that the laws of logic would hold in a parallel universe; that things can’t have a color without a shape; that the laws of physics could have been different; that implicit in any moral claim that x is bad is the counterfactual claim that things would have been better had x not occurred; and that, assuming space is not curved, the shortest distance between any two points is a straight line. We can’t verify these claims directly, but we’re justified in believing them because they seem true: we can intuitively grasp that they are justified.
The basic axioms of reasoning offer another illustrative example. We are justified in accepting induction, the existence of the external world, the universality of the laws of logic, the axioms of mathematics, and the basic reliability of our memory, even if we haven’t worked out rigorous philosophical justifications for them. This is because they seem true.
Our starting intuitions are not always perfect, and they can be outweighed by other things that seem true. But simply ignoring all intuitions will not do if we want to justify utilitarianism.
How should we decide which intuitions to rely on? This is a difficult question that would require an immense amount of time to address fully. However, a few points are worth making here as they relate to utilitarianism.
First, our intuitions are pretty unreliable (Beckstead, 2013). Thus, even a few strongly held intuitions conflicting with utilitarianism should be insufficient to make us reject it.
Second, if we can give a debunking account of an intuition, we should trust that intuition less. If the reason we have an intuition about a case is that we’re bad at reasoning about big numbers, that’s a reason to distrust it.
Third, as Ballantyne and Thurow (2013) have argued, there are specific features of moral reasoning that often make it unreliable: partiality, bias, emotion, and disagreement. When any of these is present, it undermines the reliability of our judgements. However, utilitarianism falls prey to them far less than other theories do.
The point about partiality is obvious. Utilitarianism is often objected to for being too demanding, so it clearly isn’t supported out of self-interest. In fact, utilitarianism is explicitly impartial, treating everyone’s interests equally.
The point about bias is supported by the finding that when people reflect more, they’re more utilitarian. Many non-utilitarian intuitions can be explained by reliance on heuristics and biases, like risk aversion, biases about large numbers, and many others.
The point about emotions is supported by Greene (2007), who finds the following four things. First, inducing positive emotions leads to more utilitarian conclusions. This supports the dual-process theory (DPT), because inducing positive emotions makes people less affected by the negative emotions that, according to the DPT, are largely responsible for non-utilitarian responses to moral questions. Second, patients with frontotemporal dementia are more willing to push one person in front of the trolley to save five in the footbridge version of the trolley problem.
People with frontotemporal dementia have emotional blunting, a phenomenon in which they’re less affected by emotions. Thus, people whose emotions are inhibited are more utilitarian.
Third, cognitive load makes people less utilitarian. Cognitive load refers to mental strain; when people are under mental strain, they’re less able to carefully analyze a situation and are more affected by emotions.
Fourth, people with damaged VMPCs are more utilitarian. The VMPC, the ventromedial prefrontal cortex, is a brain region involved in generating emotions.
Disagreement, finally, afflicts every moral theory. However, there is virtually no disagreement about many of utilitarianism’s assumptions; utilitarianism merely declines to add the extra components that other theories tack on. Additionally, as the points above have shown, our moral intuitions are often unreliable, so we would expect many people to disagree with the correct theory.
Thus, while all intuitions are potentially error-prone, utilitarianism’s are considerably less error-prone than average. They are the kinds of intuitions we’d expect to be especially reliable.
So, as utilitarians, we shouldn’t aim for the single master argument that will defeat all objections to utilitarianism. Instead, we should compare theories holistically. There are reasons to favor utilitarian intuitions, and we can provide independent arguments for accepting them across a variety of cases. This is the best way to defend utilitarianism.