Scott Alexander's Needless Doubts About Utilitarianism
Don't Worry Scott--Utilitarianism is the way!!!!!
Scott starts his article with a cry for help:
God help me, I’m starting to have doubts about utilitarianism.
Don't worry, Scott! Help is on the way! Bentham's Bulldog is on the case. Much like Christian apologists, I'll make sure no one leaves the church of utilitarianism.
Whose Superstructure?
The first doubt is something like this. Utilitarianism requires a complicated superstructure – a set of meta-rules about how to determine utilitarian rules. You need to figure out which of people’s many conflicting types of desires are their true “preferences”, make some rules on how we’re going to aggregate utilities, come up with tricks to avoid the Repugnant Conclusion and Pascal’s Mugging, et cetera.
I have never been too bothered by this in a practical sense. I agree there’s probably no perfect Platonic way to derive this superstructure from first principles, but we can come up with hacks for it that produce good results. That is, given enough mathematical ingenuity, I could probably come up with a utilitarian superstructure that exactly satisfied my moral intuitions.
And if that’s what I want, great. But part of the promise of utilitarianism was that it was going to give me something more objective than just my moral intuitions. Don’t get me wrong; formalizing and consistency-ifying my moral intuitions would still be pretty cool. But that seems like a much less ambitious project. It is also a very personal project; other people’s moral intuitions may differ and this offers no means of judging the dispute.
There is a solution here: hedonic utilitarianism. I agree that preferences are confusing, hard to define, and run into lots of other problems (see here and here). But there is a fact of the matter about what makes people happy, even if it's hard to quantify precisely. Happiness refers to mental states that would be taken as desirable by one who was fully rational. For example, we can know that setting me on fire would make me less happy than giving me a cake. It's fine for a good theory to have some uncertainty at the margins.
Whose Preferences?
Suppose you go into cryosleep and wake up in the far future. The humans of this future spend all their time wireheading. And because for a while they felt sort of unsatisfied with wireheading, they took a break from their drug-induced stupors to genetically engineer all desires beyond wireheading out of themselves. They have neither the inclination nor even the ability to appreciate art, science, poetry, nature, love, etc. In fact, they have a second-order desire in favor of continuing to wirehead rather than having to deal with all of those things.
You happen to be a brilliant scientist, much smarter than all the drugged-up zombies around you. You can use your genius for one of two ends. First, you can build a better wireheading machine that increases the current run through people’s pleasure centers. Or you can come up with a form of reverse genetic engineering that makes people stop their wireheading and appreciate art, science, poetry, nature, love, etc again.
Utilitarianism says very strongly that the correct answer is the first one. My moral intuitions say very strongly that the correct answer is the second one. Once again, I notice that I don’t really care what utilitarianism says when it goes against my moral intuitions.
In fact, the entire power of utilitarianism seems to be that I like other people being happy and getting what they want. This allows me to pretend that my moral system is “do what makes other people happy and gives them what they want” even though it is actually “do what I like”. As soon as we come up with a situation where I no longer like other people getting what they want, utilitarianism no longer seems very attractive.
Hedonistic utilitarianism seems to be the solution here. If wireheading doesn’t make them happy, it’s not good.
If they really were happy, then it would seem bad to force them to enjoy art and such, for the same reason that forcing kids to listen to music from the 4th century, rather than music they truly enjoy, would make them worse off.
Whose Consequentialism?
It seems to boil down to something like this: I am only willing to accept utilitarianism when it matches my moral intuitions, or when I can hack it to conform to my moral intuitions. It usually does a good job of this, but sometimes it doesn’t, in which case I go with my moral intuitions over utilitarianism. This both means utilitarianism can’t ground my moral intuitions, and it means that if I’m honest I might as well just admit I’m following my own moral intuitions. Since I’m not claiming my moral intuitions are intuitions about anything, I am basically just following my own desires. What looked like it was a universal consequentialism is basically just my consequentialism with the agreement of the rest of the universe assumed.
Another way to put this is to say I am following a consequentialist maxim of “Maximize the world’s resemblance to W”, where W is the particular state of the world I think is best and most desirable.
This formulation makes “follow your own desires” actually not quite as bad as it sounds. Because I have a desire for reflective equilibrium, I can at least be smart about it. Instead of doing what I first-level-want, like spending money on a shiny new car for myself, I can say “What I seem to really want is other people being happy” and then go investigate efficient charity. This means I’m not quite emotivist and I can still (for example) be wrong about what I want or engage in moral argumentation.
And it manages to (very technically) escape the charge of moral relativism too. I think of a relativist as saying “Well, I like a world of freedom and prosperity for all, but Hitler likes a world of genocide and hatred, and that’s okay too, so he can do that in Germany and I’ll do my thing over here.” But in fact if I’m trying to maximize the world’s resemblance to my desired world-state, I can say “Yeah, that’s a world without Hitler” and declare myself better than him, and try to fight him.
But what it’s obviously missing is objectivity. From an outside observer’s perspective, Hitler and I are following the same maxim and there’s no way she can pronounce one of us better than the other without having some desires herself. This is obviously a really undesirable feature in a moral system.
Hedonistic utilitarianism gives a great account here: Hitler's dream world contains vastly less happiness than yours, so it is objectively much worse.
Whose Objectivity?
I’ve started reading proofs of an objective binding morality about the same way I read diagrams of perpetual motion machines: not with an attitude of “I wonder if this will work or not” but with one of “it will be a fun intellectual exercise to spot the mistake here”. So far I have yet to fail. But if there’s no objective binding morality, then the sort of intuitionism above is a good description of what moral actors are doing.
Can we cover it with any kind of veneer of objectivity more compelling than this? I think the answer is going to be “no”, but let’s at least try.
I think Huemer and Parfit make a compelling case for moral realism, but one can deny it and still be a utilitarian. If utilitarianism always matches our reflective intuitions, that gives us good reason to be utilitarians.
I'm with Scott in thinking that one can't prove objective morality, any more than one can prove one's starting axioms. But there are good reasons to accept objective morality. Here is one such argument:
P1: If we have desire-independent reasons to care about things, then objective morality exists.
P2: We do have desire-independent reasons to care about things.
Therefore, objective morality exists.
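For clarity, the argument's logical form is a simple modus ponens; here is one way to formalize it (the letters D and M are just my labels):

```latex
% My formalization of the argument above.
%   D := we have desire-independent reasons to care about things
%   M := objective morality exists
\begin{align*}
  \text{P1:} \quad & D \rightarrow M \\
  \text{P2:} \quad & D \\
  \text{Conclusion:} \quad & M \qquad \text{(by modus ponens)}
\end{align*}
```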
P1 is true because morality is about what we have desire-independent, impartial reason to do; so if there are desire-independent reasons, there must be objective morality.
P2 can't be proved, but it's hard to deny. One who denies it would have to say that a person is being rational if they desire to eat a car, to set themselves on fire, to refuse to go into a doctor's office at age four because they don't like shots (despite the immense long-term harm), to cut themselves, to not eat enough food because of an eating disorder, to suffer on a future Tuesday for no good reason, or to endure immense agony despite it giving them no joy. This is implausible.
One idea is a post hoc consequentialism. Instead of taking everyone’s desires about everything, adding them up, and turning that into a belief about the state of the world, we take everyone’s desires about states of the world, then add all of those up. If you want the pie and I want the pie, we both get half of the pie, and we don’t feel a need to create an arbitrary number of people and give them each a tiny slice of the pie for complicated mathematical reasons.
This would “solve” the Repugnant Conclusion and Pascal’s Mugging, and at least change the nature of the problems around “preference” and “aggregation”. But it wouldn’t get rid of the main problem.
I'm not sure I understand this proposal, but the Repugnant Conclusion isn't something that needs solving in the first place.
The solution to Pascal's mugging is easy: paying the mugger won't maximize expected utility, because the odds that they're telling the truth are so low as to be statistical noise. However, we can't simply discount small risks of very bad things if we accept the plausible principle that for any amount of utility, there is some probability of some greater amount of utility that would be better overall; e.g., a 100% chance of 50 units of utility can be surpassed by a less-than-100% chance of some amount of utility far greater than 50. If we accept this, that's sufficient to make us care a lot about infinite and near-infinite impacts.
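To make the arithmetic behind that principle concrete, here is a tiny sketch in Python; the probabilities and utility numbers are made up purely for illustration:

```python
# Toy expected-utility comparison for the principle above.
# All numbers are illustrative assumptions, not from Scott's post.

def expected_utility(probability: float, utility: float) -> float:
    """Expected utility of a single all-or-nothing gamble."""
    return probability * utility

# A sure thing: 100% chance of 50 units of utility.
sure_thing = expected_utility(1.0, 50)

# A gamble: only a 0.1% chance, but of a vastly larger payoff.
gamble = expected_utility(0.001, 1_000_000)

print(sure_thing)  # 50.0
print(gamble)      # 1000.0 -- the gamble beats the sure thing in expectation

# Pascal's mugging is different: the probability that the mugger is telling
# the truth is so tiny that, on any reasonable estimate, probability * payoff
# still falls below the cost of paying up.
```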
The other idea is a sort of morals as Platonic politics. Hobbes has this thing where we start in a state of nature, and then everybody signs a social contract to create a State because everyone benefits from the State’s existence. But because coordination is hard, the State is likely to be something simple like a monarchy or democracy, and the State might not necessarily do what any of the signatories to the contract want. And also no one actually signs the contract, they just sort of pretend that they did.
Suppose that Alice and Bob both have exactly the same moral intuitions/desires, except that they both want a certain pie. Every time the pie appears, they fight over it. If the fights are sufficiently bloody, and their preference for personal safety outweighs their preference for pie, it probably wouldn’t take too long for them to sign a contract agreeing to split the pie 50-50 (if one of them was a better fighter, the split might be different, but in the abstract let’s say 50-50).
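(As a toy illustration of the bargaining arithmetic in the pie example, here is a quick sketch; the pie value and injury cost are numbers I've made up, not anything from Scott's post.)

```python
# Toy payoff comparison for the Alice-and-Bob pie fight described above.
# PIE_VALUE and INJURY_COST are illustrative assumptions.

PIE_VALUE = 10     # how much each party values the whole pie
INJURY_COST = 8    # expected cost to each party of a sufficiently bloody fight

# Fighting: a 50% chance of winning the whole pie, minus the injury cost.
fight_payoff = 0.5 * PIE_VALUE - INJURY_COST   # -3.0

# The contract: split the pie 50-50 and nobody gets hurt.
split_payoff = 0.5 * PIE_VALUE                 # 5.0

# Whenever the injury cost is positive, the split beats fighting in
# expectation, so both parties would sign the 50-50 contract.
print(fight_payoff, split_payoff)
```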
Now suppose Alice is very pro-choice and slightly anti-religion, and Bob is slightly pro-life and very pro-religion. With rudimentary intuitionist morality, Alice goes around building abortion clinics and Bob burns them down, and Bob goes around building churches and Alice burns them down. If they can both trust each other, it probably won’t take long before they sign a contract where Alice agrees not to burn down any churches if Bob agrees not to burn down any abortion clinics.
Now abstract this to a civilization of a billion people, who happen to be divided into two equal (and well-mixed) groups, Alicians and Bobbites. These groups have no leadership, and no coordination, and they’re not made up of lawyers who can create ironclad contracts without any loopholes at all. If they had to actually come up with a contract (in this case maybe more of a treaty) they would fail miserably. But if they all had this internal drive that they should imagine the contract that would be signed among them if they could coordinate perfectly and come up with a perfect loophole-free contract, and then follow that, they would do pretty well.
Because most people’s intuitive morality is basically utilitarian [citation needed], most of these Platonic contracts will contain a term for people being equal even if everyone does not have an equal position in the contract. That is, even if 60% of the Alicians have guns but only 40% of the Bobbites do, if enough members of both sides believe that respecting people’s preferences is important, the contract won’t give the Alicians more concessions on that basis alone (that is, we’re imagining the contract real hypothetical people would sign, not the contract hypothetical hypothetical people from Economicsland who are utterly selfish would sign).
Contractarian ethics that merely codifies consensus intuitions isn't very good. Throughout most of history, many people have supported horrific things like slavery, the subjugation of women, and the oppression of gay people. Utilitarianism has a better track record, and it's supported by lots of good arguments that I've previously discussed.
So what about the wireheading example from before?
Jennifer RM has been studying ecclesiology lately, which seems like an odd thing for an agnostic to study. I took a brief look at it just to see how crazy she was, and one of the things that stuck with me was the concept of communion. It seems (and I know no ecclesiology, so correct me if I’m wrong) motivated by a desire to balance a desire to unite as many people as possible under a certain banner, with the conflicting desire to have everyone united under the banner believe mostly the same things and not be at one another’s throats. So you say “This range of beliefs is acceptable and still in communion with us, but if you go outside that range, you’re out of our church.”
Moral contractualism offers a similar solution. The Alicians and Bobbites would sign a contract because the advantages of coordination are greater than the disadvantages of conflict. But there are certain cases in which you would sign a much weaker contract, maybe one to just not kill each other. And there are other cases still when you would just never sign a contract. My Platonic contract with the wireheaders is “no contract”. Given the difference in our moral beliefs, whatever advantages I can gain by cooperating with them about morality are outweighed by the fact that I want to destroy their entire society and rebuild it in my own image.
I think it’s possible that all of humanity except psychopaths are in some form of weak moral communion with each other, at least of the “I won’t kill you if you don’t kill me” variety. I think certain other groups, maybe along the culture level (where culture = “the West”, “the Middle East”, “Christendom”) may be in some stronger form of moral communion with each other.
(note that “not in moral communion with” does not mean “have no obligations toward”. It may be that my moral communion with other Westerners contains an injunction not to oppress non-Westerners. It’s just that when adjusting my personal intuitive morality toward a morality I intend to actually practice, I only acausally adjust to those people whom I agree with enough already that the gain of having them acausally adjust toward me is greater than the cost of having me acausally adjust to them.)
In this system, an outside observer might be able to make a few more observations about the me-Hitler dispute. She might notice Hitler or his followers were in violation of Platonic contracts it would have been in their own interests to sign. Or she might notice that the moral communions of humanity split neatly into two groups: Nazis and everybody else.
I’m pretty sure that I am rehashing territory covered by other people; contractualism seems to be a thing, and a lot of people I’ve talked to have tried to ground morality in timeless something-or-other.
But the best form of contractualism converges with utilitarianism, as Harsanyi showed: rational contractors choosing behind a veil of ignorance, each with an equal chance of being anyone, would maximize average utility. Utilitarianism treats everyone as equally relevant, so it just is the contractarian approach done properly.
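A compressed, informal statement of Harsanyi's impartial-observer argument (my paraphrase of the idea, not the full theorem):

```latex
% Harsanyi-style impartial-observer argument (paraphrase).
% Suppose there are n people and, behind the veil, you have an equal 1/n
% chance of being each of them. If you are a rational (von Neumann-Morgenstern)
% expected-utility maximizer, you rank social outcomes x by
\[
  W(x) \;=\; \sum_{i=1}^{n} \frac{1}{n}\, u_i(x),
\]
% i.e. by average utility. The policy a rational contractor picks from behind
% the veil is therefore the utilitarian one.
```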
Most of these issues seem easily solved by hedonic utilitarianism. So why does Scott reject it? He lays out his reasons here.
It suggests that drugging people on opium against their will and having them spend the rest of their lives forcibly blissed out in a tiny room would be a great thing to do, and that in fact not doing this is immoral. After all, it maximizes pleasure very effectively.
By extension, any society that truly believed in Benthamism would end out developing a superdrug, and spending all of their time high while robots did the essential maintenance work of feeding, hydrating, and drugging the populace. This seems like an ignoble end for human society. And even if on further reflection I would find it pleasant, it seems wrong to inflict it on everyone else without their consent.
Even if we accept that this is somewhat counterintuitive, preference utilitarianism has far more unintuitive results, including that we should all be brutally tortured to death if lots of dead aliens wanted other civilizations to die from slow torture. I already linked to my objections.
But the superdrug case seems obvious. We frequently force children, for example, to do things in their best interest. If a person were addled and refusing cancer treatment because they were making wildly irrational decisions, we'd give them the treatment anyway. And the stipulated superdrug is so good that failing to take it would be far more irrational than forgoing cancer treatment. Not taking the drug would be like an eating disorder, but far more pernicious. This just seems obvious.
So Scott, embrace hedonism. We have debauchery, licentiousness, and iniquity!