Addressing A Bad Objection To Utilitarianism
Yes, we can compare utility--even interpersonally.
One common yet poor objection to utilitarianism claims that we cannot make interpersonal comparisons of utility, so utilitarianism fails: it relies on an unmeasurable metric. This objection is nonsense.
For one, we frequently do make interpersonal comparisons of utility. We routinely judge that people tend to be worse off when they're in poverty, subjected to bombing, or affected by pollution. Being precise about such matters poses difficult empirical questions, but interpersonal comparisons of utility are required in all political decisions.
If our set of values is to be consistent, we need a coherent utility function. Suppose one holds that one person being in poverty is less bad than one person being dead, but refuses to name any number of people in poverty that carries equal moral weight to one death. That view runs into a problem. Suppose one has the ability either to lift twenty people out of poverty or to prevent one death. One must then decide whether preventing the death is more important than lifting twenty people out of poverty. If one would rather prevent the death, then one's utility function values preventing a single death more than lifting twenty people out of poverty. If one would rather lift the twenty people out of poverty, the opposite is true. If one is neutral, then one is indifferent between the two options. But if one is indifferent between those two, one must judge preventing twenty-one people from being in poverty as more important than preventing one death. If one remains indifferent even in the case of twenty-one people, then one judges preventing twenty people from being in poverty to be morally equivalent to preventing one death, which one judges to be morally equivalent to preventing twenty-one people from being in poverty. Thus, by transitivity, one would be indifferent between twenty-one people being in poverty and twenty people being in poverty. This is an implausible view.
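The forced-choice structure above can be sketched numerically. The weights below are purely hypothetical, chosen only for illustration; the point is that once each harm receives a numeric weight, every such trade-off has a determinate answer, and indifference at twenty cannot coherently persist at twenty-one.

```python
# Hypothetical disutility weights (an assumption for illustration):
# suppose one death is judged exactly as bad as 20 people in poverty.
DISUTILITY = {"death": 20.0, "poverty": 1.0}

def total_disutility(outcome):
    """outcome: dict mapping a harm to the number of people affected."""
    return sum(DISUTILITY[harm] * count for harm, count in outcome.items())

prevent_death   = total_disutility({"death": 1})      # 20.0
lift_twenty     = total_disutility({"poverty": 20})   # 20.0
lift_twenty_one = total_disutility({"poverty": 21})   # 21.0

# Indifference at twenty forces a strict preference at twenty-one;
# one cannot be indifferent at both without judging 20 ~ 21 by transitivity.
assert prevent_death == lift_twenty
assert lift_twenty_one > prevent_death
```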
As economists have well documented, as long as one's judgments meet certain minimal standards of rationality, they can be modeled as optimizing some utility function. Thus, for a moral system to be robustly rational, it must make certain judgments about utility. If other moral systems cannot be modeled by a utility function, they are false.
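A standard way to see why violating those minimal standards matters is the money-pump argument: an agent with cyclic preferences will pay a small fee for each of a series of trades it regards as improvements, ending up where it started but poorer. This is a minimal sketch with invented preferences, not a model of any particular moral view.

```python
# Hypothetical cyclic preferences: A preferred to B, B to C, C to A.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}

def accepts_trade(current, offered):
    # The agent trades whenever it prefers the offered item.
    return (offered, current) in prefers

holding, money = "A", 0.0
for offered in ["C", "B", "A"]:   # each trade costs a small fee
    if accepts_trade(holding, offered):
        holding, money = offered, money - 0.01

# After three trades the agent judged to be improvements, it holds A
# again -- exactly where it started -- but has lost money.
```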
To see this with a simple example: setting someone on fire is clearly far morally worse than shoving someone, but it is not infinitely worse. A coherent utility function assigns some ratio N of the disutility of setting someone on fire to that of shoving someone, such that one would be indifferent between a 1/N chance of setting someone on fire and a certainty of shoving someone.
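The indifference point follows from simple expected-value arithmetic. The ratio below is an invented placeholder, not a claim about the actual moral ratio:

```python
# Hypothetical ratio: suppose being set on fire is N = 1000 times worse
# than being shoved (the number is an assumption for illustration).
N = 1000
shove_disutility = 1.0
fire_disutility = N * shove_disutility

# Expected disutility of a 1/N chance of setting someone on fire:
expected_lottery = fire_disutility / N

# It equals the certain shove's disutility, so a coherent agent is
# indifferent between the lottery and the certain shove.
assert expected_lottery == shove_disutility
```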
Additionally, given that utility here describes a type of mental state, there is nothing problematic about making interpersonal comparisons of it. Much as it would be possible in principle to judge whether a particular action increases the total worldwide experience of the color red, the same is true of utility. One way to conceptualize the metric is to imagine that you will experience every single thing that anyone will ever experience, and then act so as to maximize the expected quality of those experiences.
When conceptualized this way, the metric is evidently coherent. Judgments about the collective experience are logically no different from judgments about one's own experience. People very frequently judge whether particular actions would increase their own happiness--for example, when choosing a job, a college, or a vehicle.
Additionally, every plausible theory holds that the considerations of utilitarianism are true, except in particular cases. Every plausible moral view holds, for example, that if you could benefit one of two people of similar moral character, and you could benefit one of them more than the other, you should benefit the one you could benefit more. Thus, any difficulty in evaluating consequences would plague all plausible moral theories.
You have not justified why your conception of "coherency" is in any way required. You can say that a logical system is analytically false, or even subject to theoretical money pumping, if it lacks a coherent utility function that applies to all situations where the system is applied, but that does not appear to prevent me from acting on such a system in any way.