The Monstrous Implications of Prioritarianism
Theron Pummer's powerful argument against prioritarianism
When people tell me they have intransitive preferences, I take their money.
Utilitarianism often conflicts with our common-sense intuitions. However, as we’ve seen before, when utilitarianism reaches verdicts that differ from our own, the utilitarian verdicts often turn out to be defensible, while our common-sense intuitions end up being utterly indefensible. This is one of the things that lends the most credibility to utilitarianism: while it’s clearly unintuitive sometimes, that is just what we’d expect of the correct moral theory. By contrast, the fact that the intuitions which diverge from utilitarianism end up being rejectable upon reflection would be utterly surprising on the hypothesis that utilitarianism is incorrect.
One such case where our common-sense intuitions are not utilitarian relates to prioritarianism. The idea behind prioritarianism is that benefitting someone matters more the worse off they are. There are many quite potent objections to prioritarianism. But a brilliant one (okay, not that new, but I saw it recently), which shows how spectrum arguments can run amok, comes from Theron Pummer, as is true of many of the cleverest arguments out there. The minimal prioritarian claim is this: for a person at some welfare level, the moral goodness of bringing about N units of benefit for them is less than the moral goodness of bringing about N- units of benefit, where N- is some amount less than N, for someone who is sufficiently worse off. Thus, suppose that there’s a person who you can give 100 units of utility to. It’s more important to give 99 units of utility to someone who is substantially worse off.
This is a claim that prioritarianism cannot plausibly deny. But it ends up entailing the priority monster: for any arbitrarily large benefit, there is some arbitrarily small benefit, given to some sufficiently badly off person, that is better. To see this, note that N units of benefit for one person is less good than N- units of benefit for a sufficiently worse-off second person, which is less good than N-- units of benefit for a still worse-off third person, which is less good than N--- units of benefit for a fourth person…which is less good than some arbitrarily small benefit for some sufficiently badly off person. But this is a wildly implausible conclusion.
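The chain can be made concrete with a toy model. The weighting function here (moral value = benefit divided by welfare level, so that the weight grows without bound as welfare falls) is my own illustrative assumption, not Pummer’s formalism; any unbounded priority weight would generate the same result:

```python
# Toy model of the spectrum argument. Assumed prioritarian weighting:
# moral value = benefit / welfare, so benefits to the worse off count
# for more, without any upper bound. (Illustrative assumption only.)

def moral_value(benefit, welfare):
    """Priority-weighted value of giving `benefit` to someone at `welfare`."""
    return benefit / welfare

benefit, welfare = 100.0, 1.0
value = moral_value(benefit, welfare)

# Each step: a slightly smaller benefit (90% of the last) goes to
# someone much worse off (half the welfare). The minimal prioritarian
# claim says each step is strictly better than the last.
for step in range(100):
    benefit *= 0.9      # the benefit shrinks toward zero
    welfare *= 0.5      # the recipient is ever worse off
    new_value = moral_value(benefit, welfare)
    assert new_value > value    # each link in the chain holds
    value = new_value

# After 100 steps the benefit is roughly 0.0027 units, yet by
# transitivity it must be better than the original 100-unit benefit:
# the priority monster.
```

The point of the sketch is just that transitivity does all the work: no single link in the chain looks objectionable, but chaining them drives the benefit arbitrarily close to zero while the priority-weighted value keeps rising.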
This conclusion is not like the utility monster. The utility monster involves a being receiving arbitrarily large benefits; here, the benefit is arbitrarily small, arbitrarily insignificant. Of course, the prioritarian can deny the minimal claim and say that there is some threshold such that, as how badly off one is goes to infinity, the worthwhileness of benefitting them approaches that threshold rather than growing without bound. But this poses two additional worries.
First, it requires denying the minimal prioritarian claim. Prioritarianism should not deny that N units of benefit for a person at welfare level X are less good than .9999999999N units of benefit for a person at a welfare level 1,000,000 times worse than X. But accepting this minimal postulate entails the priority monster.
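The cost of the bounded-threshold move can also be made concrete. The capped weighting function below is again my own illustrative assumption (a weight that approaches a ceiling of 2 as badness grows), not anything from Pummer’s paper:

```python
# Toy model of the bounded-threshold escape route. Assumed weighting:
# the priority weight approaches a ceiling of 2 as badness grows, so
# even the arbitrarily badly off get at most double weight.
# (Illustrative assumption only.)

def bounded_weight(badness):
    """Priority weight, capped: approaches 2 as badness -> infinity."""
    return 2 - 1 / (1 + badness)

def moral_value(benefit, badness):
    return benefit * bounded_weight(badness)

N = 100.0
x_badness = 1e11                    # already very badly off
worse_badness = x_badness * 1e6     # a million times worse off

better_off_value = moral_value(N, x_badness)
worse_off_value = moral_value(0.9999999999 * N, worse_badness)

# The minimal prioritarian claim says the worse-off option should win.
# With a bounded weight, it loses: both recipients sit so close to the
# weight ceiling that the tiny difference in benefit dominates.
assert better_off_value > worse_off_value
```

Once both people are near the cap, the priority bonus for being a million times worse off is smaller than even a 0.0000000001% difference in benefit, which is exactly the denial of the minimal claim.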
Second, this move undercuts our intuitions about prioritarianism. If we know that at least one of our strong prioritarian intuitions is wrong, this should lead us to distrust our other prioritarian intuitions. Just as vindication of our utilitarian intuitions serves as evidence for utilitarianism, so too does contradiction of our prioritarian intuitions serve as evidence against prioritarianism.
For this and other reasons, I reject prioritarianism.
New thing I’m doing wherein I’ll write funny quotes that I’ve heard or thought of at the top of each post—might be discontinued at some point.
It seems obvious that the prioritarian argument is tricking our moral intuitions, because of the easy confusion between utils and valuable commodities such as money. Those commodities have diminishing marginal utility. Utils don't, by definition.
I wonder how unintuitive this actually is. It seems like our supposedly prioritarian intuitions are motivated largely by concerns about diminishing returns (e.g. we should give the $100 to a poor person rather than a rich person, because it will do more good if given to the poor person). But when these considerations are removed, and we ask a question like "is giving 99 utils to x better than giving 100 utils to y, assuming x is worse off than y," I'm not sure that we actually DO have prioritarian intuitions.