Ethics Needs A Marginal Revolution
At the margin, ethics needs more marginal analysis, and not just marginally more
Introduction
The marginal revolution in economics was a transformative shift that occurred when economists began thinking more about things at the margin—economists now talk about things like marginal costs, marginal revenue, and marginal revenue product. It’s also the name of Tyler Cowen and Alex Tabarrok’s excellent blog, which produces very brief articles many times a day. Tyler Cowen was also responsible for me getting an Emergent Ventures grant—he’s the one who gives them out—which makes him especially cool.
Marginal analysis involves analyzing the addition of one more thing. For example, the marginal revenue of soybeans would be the revenue from selling one extra soybean, and the marginal cost would be the cost of producing one extra soybean. Producers keep producing until marginal revenue equals marginal cost—they stop once producing one more of the thing would cost more than the revenue it brings in.
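To make that concrete, here’s a toy version of the producer’s stopping rule (the numbers are invented purely for illustration):

```latex
% Toy numbers: each additional bushel sells for $10 (constant marginal revenue),
% and the q-th bushel costs $q to produce (rising marginal cost).
\[
MR(q) = 10, \qquad MC(q) = q
\quad \Longrightarrow \quad
MR(q^*) = MC(q^*) \text{ at } q^* = 10.
\]
```

The eleventh bushel would cost $11 to produce but bring in only $10, so production stops at ten.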
I think that ethics, like economics, needs a marginal revolution. Ethicists do not think about things at the margins enough. And this results in grave, easily avoidable errors. When one thinks about things at the margin, otherwise puzzling verdicts about cases cease to be puzzling. I think marginal analysis ends up favoring utilitarianism quite a bit. I’ll discuss two cases where thinking at the margins vindicates otherwise surprising verdicts, though for a third example of this, see here.
Capped views of well-being
Lots of ethicists defend the view that there is a cap on how much well-being one can have over the course of one’s life. No matter how much pleasure, knowledge, and desire fulfillment one has, there is a limit to how good one’s life can be. Temkin, for example, defends the capped model of ideals, according to which each of the things that make your life go well can only make your life go well to a certain degree—each of the “ideals,” like pleasure, has a limit to how much it can benefit you. This position is generally taken to be intuitive. But it’s only intuitive when framed as a view about the total value of one’s life—when framed in marginal terms, it becomes extremely implausible.
If Y is a function of X, and Y doesn’t approach positive infinity as X approaches infinity, then the marginal contribution that each additional unit of X makes to Y must approach zero or less. If, for example, Y were always 1% of X, then as X went to infinity, so would Y. This is a mathematical point that is not disputed.
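To state the point slightly more formally (a minimal formalization, assuming only that well-being never decreases as you add more of the good):

```latex
% If f is nondecreasing and capped, its marginal increments must vanish.
\[
\text{If } f \text{ is nondecreasing and } \lim_{x \to \infty} f(x) = L < \infty,
\]
\[
\text{then } \lim_{x \to \infty} \bigl[ f(x+1) - f(x) \bigr] = L - L = 0.
\]
```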
Thus, if we adopt the capped view of well-being, then as the various goods that contribute to well-being—pleasure, relationships, knowledge, achievements, desire fulfillment—approach infinity, the marginal contribution of more of them to well-being must approach zero. So on this account, for example, a year filled to the brim with tons of pleasure, relationships, knowledge, achievements, and desire fulfillment would have very little value if you’ve already had lots of pleasure, relationships, knowledge, achievements, and desire fulfillment. In fact, because the marginal value approaches zero, 1,000 years that are as good as years can be must eventually add less value than any positive amount you care to name, however small.
But this is very implausible. Imagine you’ve lived a googolplex years. You’re getting to the end of a very, very long life. You can live 1,000 more years, which would be about as good as years can be—they would contain extremely large amounts of everything of value. On this account, these thousand years would have barely any value at all—third parties should be almost entirely indifferent between you living them and you dying immediately. Indeed, on this account, these last 1,000 years of life would be less valuable than a single lick of a lollipop that this very long-lived being enjoyed during year 3. That is wildly implausible.
Note, this isn’t some distinct counterexample. It’s not a surprising, external entailment. It’s just what the capped view straightforwardly says about value as it approaches the cap: well-being approaching a cap just is the contributory goods’ marginal values approaching zero, and that is very unintuitive.
This becomes especially clear if we imagine cases of memory loss. Suppose you can’t remember whether you’ve lived for a long time. You think there’s about a 99.9% chance you’ve lived for a very long time, and a 0.1% chance you’ve only been alive for a few years. Someone offers you a deal where you can take either option one or option two. The options are as follows:
1. Get a bit of pleasure and knowledge, but only if you’ve had a short life.

2. Get mindbogglingly large amounts of knowledge, pleasure, achievements, wisdom, desire fulfillment, and relationships of great value, but only if you’ve had a supremely long life. Assume the amounts of each of these goods are roughly equal to the amounts contained in most countries.
It seems like option 2 is the better deal. But the capped view denies this, because if your life has been sufficiently long, the various welfare goods barely improve it. So the capped view is deeply implausible.
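A toy expected-value calculation makes the problem vivid (the payoff numbers are made up purely for illustration):

```latex
% Suppose option 1's payoff is worth 10 units, and option 2's payoff
% is worth 10^{12} units absent a cap but roughly 0 near the cap.
\[
\mathbb{E}[\text{option 1}] = 0.001 \times 10 = 0.01
\]
\[
\mathbb{E}[\text{option 2}] =
\begin{cases}
0.999 \times 10^{12} \approx 10^{12} & \text{uncapped view,} \\
0.999 \times \varepsilon \approx 0 & \text{capped view, with } \varepsilon \approx 0.
\end{cases}
\]
```

On the capped view, you should take option 1—betting that your life has been short—which seems absurd.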
The analysis of the capped view of welfare has almost always focused on the value of the life as a whole. But this exacerbates errors. We have no intuitive appreciation of a billion years—our emotional reaction to a billion years of bliss is certainly not 1,000 times stronger than our emotional reaction to 1,000,000 years of bliss. I hate to quote Sam Bankman-Fried’s old blog, but I think he makes the point well:
Still many people have the intuition that 1,000,000,000,000 people at happiness 2 (Option A) is better than 1,000,000,000,000,000 people with happiness 1 (Option B). But I posit that this comes not from a flaw in total utilitarianism, but instead from a flaw in human intuition. You remember those really big numbers with lots of zeros that I listed earlier in this paragraph? What were they? Many of you probably didn't even bother counting the zeros, instead just registering them as "a really big number" and "another really big number, which I guess kind of has to be bigger than the first really big number for Sam's point to make sense, so it probably is." In fact, English doesn't even really have a good word for the second number ("quadrillion" sounds like the kind of thing a ten year old would invent to impress a friend). The point--a point that has been supported by research--is that humans don't fundamentally understand numbers above about four. If I show you two dots you know there are two; you know there are exactly two, and that that's twice one. If I show you thirteen dots, you have to count them.
And so when presented with Options A and B from above, people really read them as (A): some big number of people with happiness 2, and (B): another really big number of people with happiness 1. We don't really know how to handle the big numbers--a quadrillion is just another big number, kind of like ten thousand, or eighteen. And so we mentally skip over them. But 2, and 1: those we understand, and we understand that 2 is twice as big as one, and that if you're offered the choice between 2 and 1, 2 is better. And so we're inclined to prefer Option A: because we fundamentally don't understand the fact that in option B, one thousand times as many people are given the chance to live. Those are entire families, societies, countries, that only get the chance to exist if you pick option B; and by construction of the thought experiment, they want to exist, and will have meaningful existences, even if they're not as meaningful on a per-capita basis as in option A.
Fundamentally, even though the human mind is really bad at understanding it, 1,000,000,000,000,000 is a lot bigger than 1,000,000,000,000; in fact the difference dwarfs the difference between things we do understand, like the winning percentages of the Yankees versus that of the Royals, or the numbers 2 and 1. And who are we to deny existence to those 900,000,000,000,000 people because we're too lazy to count the zeros?
By framing the judgment as one about marginal value rather than whole lives, one avoids these psychological confounders. And when one does that, not only does the intuition lose its initial force—it goes the other way. No one seems to have the intuition that a million years of intense joy, once you’ve already been really happy for a while, is barely valuable. I’ll close this section with a quote from Huemer that is, I think, basically right about our sources of error when evaluating lives, and that explains why marginal analysis is best:
Here, I want to suggest a more general theory of normative error than any of the theories that Temkin explicitly addresses. On this account, certain kinds of evaluative judgments are made with the aid of what we might call the Attraction/Aversion heuristic. The AA heuristic is used for making ethical judgments regarding particular scenarios. The subject begins by imagining the scenario and then experiences some degree of attraction or aversion to the events taking place in that scenario. There is room for a range of views regarding the nature of attraction/aversion, the details of which are not important for present purposes; for instance, attraction might be understood as a sort of positive emotion, a wish or desire for an outcome to occur, or some combination of the two. The subject then judges an outcome to be good or an action right, roughly in proportion to the strength of the attraction the subject experiences, or judges an outcome bad or an action wrong, roughly in proportion to the strength of the aversion the subject experiences. When comparing a pair of outcomes, a subject will tend to judge one outcome as better than another to the extent that the subject feels more attraction or less aversion to the former.
. . .
When we consider a scenario involving an extremely long period of mild discomfort, we experience less aversion than when contemplating a scenario involving two years of extreme torture. Our aversion to imagined long periods of suffering does not increase linearly with the duration of the imagined suffering. For instance, when we contemplate suffering one thousand years of pain and then contemplate suffering two thousand years of pain, the latter occasions an aversion far less than twice as intense as the former, even though it involves twice as much pain and so, considered abstractly, would seem to be twice as bad. This failure to track degrees of badness becomes more serious, the longer the time period that one tries to imagine. Thus, the thought of 1 billion years of pain is only slightly more upsetting, if at all, than the thought of 1 million years of pain, even though the former represents one thousand times more pain than the latter.
Repugnant conclusions and happy oysters
The repugnant conclusion is the notion that, for any population of 10 billion people living great lives, there exists a better population that consists of some vast number of people living barely worthwhile lives. Lots of people find this to be very counterintuitive.
But this intuition doesn’t seem that trustworthy. It involves adding up the value of an entire world full of enormous numbers of people living good lives. Weighing up entire worlds full of value, when we have no intuitive appreciation of the differences in the numbers of people between those worlds, is a notoriously fraught process. But if we instead frame the conclusions in marginalist terms, they cease to be unintuitive.
Suppose you think that 10 billion happy lives are more valuable than any number of barely happy people. Suppose that each of the happy lives is composed of 10 billion happy moments, each of which is very brief. In total, then, there are 10^16 happy moments.
Just kidding—there are actually 10^20 happy moments. See, you didn’t even notice a 99.99% decline in the amount of happiness in a world. And these are the intuitions you trust to reject totalism? Now, suppose someone offers you a world with 10^20 happy moments minus one, plus 100 extra people who live barely worthwhile lives. Note, these lives are worthwhile—they contain happiness, and the people who live them regard them as worth living. This seems better than the starting state!
Suppose they offer this deal again—you can trade one more happy moment for the creation of 100 more people with barely worthwhile lives. Seems good again. But if this pattern continues long enough, you eventually get the repugnant conclusion world.
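Here is a minimal sketch of that iterated trade in code—the numbers are scaled way down from 10^20 and the utility weights are invented assumptions, but the structure of the argument is the same:

```python
# Toy model of the iterated trade behind the repugnant conclusion.
# The weights below are illustrative assumptions, not measurements:
# one happy moment is worth 1.0, one barely worthwhile life is worth 0.02,
# so trading 1 moment for 100 lives gains 100 * 0.02 - 1.0 = +1.0 each time.
HAPPY_MOMENT_VALUE = 1.0
BARELY_GOOD_LIFE_VALUE = 0.02

def total_value(happy_moments: int, barely_good_lives: int) -> float:
    """Total value on a simple additive (totalist) accounting."""
    return (happy_moments * HAPPY_MOMENT_VALUE
            + barely_good_lives * BARELY_GOOD_LIFE_VALUE)

happy_moments, barely_good_lives = 10**6, 0  # scaled down from 10^20
print("start:", total_value(happy_moments, barely_good_lives))  # 1000000.0

# Each trade removes one happy moment and adds 100 barely worthwhile lives.
# Every single step is a marginal improvement.
while happy_moments > 0:
    happy_moments -= 1
    barely_good_lives += 100

print("end:  ", total_value(happy_moments, barely_good_lives))  # 2000000.0
# The end state is the 'repugnant' world: no intensely happy moments,
# a hundred million barely worthwhile lives, and a higher total.
```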
The repugnant conclusion sounds repugnant when we’re comparing “world we can’t intuitively imagine with very happy people” to “other world we can’t intuitively imagine, that apparently has more people, where people have only barely good lives.” But when you realize that denying it requires saying that an extra moment of pleasure at the margin is sometimes more valuable than 1,000,000 lives that are worth living (even if just barely so), it’s the denial that seems wildly unintuitive.
The repugnant conclusion has a secret twin that few people have heard about. It comes from McTaggart and is called McTaggart’s conclusion. I like to call it the oystrous conclusion. The idea is as follows: suppose that one is deciding whether to live 100 happy years where they’ll have tons of knowledge and such, or instead to live forever but just have constant mildly positive utility. Lots of people intuitively think that it’s better to be supremely happy for a century than slightly happy forever. I think this is false!
Suppose that the person who lives the happy century dies suddenly. One second before he dies, you offer him a deal: he can trade the last moment of his life for 100,000 years of mild happiness. Seems like a good deal. So the last moment of his life is less valuable than 100,000 years of mild happiness. Now compare this to another world that is exactly the same with one exception: rather than first being very happy and then being mildly happy for 100,000 years, the person is first mildly happy for 100,000 years and then lives his happy life. This seems just as good as the other case—rearranging when one experiences things doesn’t matter morally. But then suppose that at the end of this arrangement—once he’s used up the 100,000 years of mild happiness and all but the last second of the great happiness—you offer another deal: he can give up that last moment of immense bliss in exchange for another 100,000 years of mild happiness. This seems like an improvement too.
But this can keep going. We can keep trading the last second of great happiness for 100,000 years of mild happiness, rearranging things, and then doing it again, and at each step of the way, things get better. Run through this process enough times and we are led inexorably to the conclusion that it’s better to live a very long life at mild happiness than a century at great happiness. And all of this comes from thinking at the margins.
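The structure of each step can be captured in a single inequality (the symbols here are mine, not McTaggart’s): let h be the per-second value of great happiness and m the per-second value of mild happiness, with m much smaller than h.

```latex
% Each trade gives up 1 second of great happiness for T = 100,000 years
% of mild happiness; in seconds, T is roughly 3.15 * 10^{12}.
\[
\underbrace{m \cdot T}_{\text{gained}} \;>\; \underbrace{h \cdot 1}_{\text{given up}}
\quad \Longleftrightarrow \quad
\frac{h}{m} < T \approx 3.15 \times 10^{12}.
\]
```

So unless a second of bliss is literally trillions of times better than a second of mild contentment, each trade is an improvement, and iterating the trades delivers McTaggart’s conclusion.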
Our brains struggle to think in big numbers. As a result, it’s easy to generate supposed counterexamples to utilitarianism that exploit our brains’ inability to recognize that 100,000 years of happiness is 100 times as much happiness as 1,000 years of happiness. But when we break things down into manageable, marginal chunks, the result ends up overwhelmingly favoring the utilitarian verdict.