The Last Stop on the Crazy Train
How to react to a world full of conclusions with potentially astronomical stakes
David Friedman once described some class of people as “economists only during their day job.” His basic point was that some people think like economists professionally, but totally forget everything they learned from economics when analyzing political issues. Similarly, some people are utilitarians only during their day job. They declare their support for increasing global utility, and then haphazardly mutter some convenient excuse about why that implies they should do whatever they were otherwise planning on doing.
Vasco Grilo is not one of those people.
Vasco began by saying that people should give money to the Shrimp Welfare Project. But then, when he started counting the welfare of the roughly one billion soil nematodes per person, he concluded that normal GiveWell-style charities beat the Shrimp Welfare Project, because they lower soil nematode populations (though he thinks GiveWell charities are less effective at saving lives than the High-Impact Philanthropy Fund). And he’s not even that confident that soil nematodes have bad lives!
Now, Vasco has started to suggest that saving human lives wins even more decisively if you count microorganisms, even if you assume there’s only a tiny chance they’re conscious (and assign them a tiny welfare range conditional on their being conscious). This is because there are a lot of microorganisms.
On the one hand, I’m kind of sympathetic to this. Part of the reason I give a sizeable portion of my monthly charitable donations to humans is that I think doing so lowers wild animal populations. But I think there’s a deeper underlying problem with this approach.
Vasco estimated that per dollar, HIPF prevents about 5 billion years of soil nematode life (and way more than that many years of bacteria life). But you know what’s a lot more than 5 billion? 10^50.
That’s the number of atoms on Earth. Now, I don’t think atoms are conscious. But Philip Goff does, and he’s pretty smart. The odds that he’s right aren’t, like, one in a googol. If you guess that there’s a 1/10^10 chance atoms are conscious, and think their welfare range is 1/10^10 of ours conditional on them being conscious, then the welfare range of the atoms on Earth is, in expectation, equivalent to that of 10^30 people. Now, you might be tempted to ignore low probabilities, but I don’t think that’s very plausible, for reasons I’ve given at length here (and in light of the chaotic effects of our actions, even if you are risk averse, you should probably behave mostly as if you weren’t).
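Spelled out, the back-of-the-envelope expected-value calculation looks like this (the inputs are just the rough guesses from the paragraph above, not estimates anyone has carefully defended):

```python
# Rough expected welfare of Earth's atoms, using the guessed figures from the text above.
atoms_on_earth = 10**50          # approximate number of atoms making up the Earth
p_atoms_conscious = 1e-10        # guessed chance that atoms are conscious at all
relative_welfare_range = 1e-10   # guessed welfare range vs. a human's, if conscious

expected_person_equivalents = atoms_on_earth * p_atoms_conscious * relative_welfare_range
print(f"{expected_person_equivalents:.0e}")  # 1e+30, i.e. equivalent to ~10^30 people
```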
Now, maybe you can ignore this one because we have no information about what it’s like to be an atom. The philosophical views that say atoms are conscious also imply that their consciousness is transferred over to higher-level brains—so probably our best bet for improving atom welfare is to improve the welfare of biological organisms that we know to be conscious. But still, if atoms might be conscious, then probably we should all be thinking hard about whether we can improve their welfare.
10^50 is a lot. But you know what’s more? 2^86 billion! That’s how many conscious sub-people you might have in your brain.
Suppose it turns out that every combination of neurons that would, by itself, be enough to sustain consciousness has its own associated mind. To give an example, let’s name a random one of my neurons Fred. If Fred disappeared, all the neurons minus Fred would form a conscious mind. On this view, that means the neurons minus Fred are conscious right now, even with Fred still in place. The same goes for every other combination of neurons that would be enough on its own to sustain a mind: each has its own mind.
And there’s something kind of plausible about this. If you deny it, then whether some combination of neurons other than Fred gives rise to a unique mind depends on whether Fred happens to exist. That makes consciousness weirdly extrinsic: whether some neurons form a mind would depend on the presence of other neurons.
Now, I’m not saying this is that likely (though it does have surprisingly good arguments in its favor). But if this theory is true, it implies that a brain with about 86 billion neurons, like yours, contains around 2^86 billion conscious subsystems. It also implies that a single African elephant, with about 257 billion neurons, has orders of magnitude more moral worth than all humans on Earth, on account of its staggeringly large number of conscious subsystems.
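To get a feel for how lopsided these numbers are (they’re far too large to write out), here’s a quick sketch that treats the number of conscious subsystems in an N-neuron brain as roughly 2^N and compares orders of magnitude; the neuron and population counts are the rough figures used above:

```python
import math

# If every sufficiently large combination of neurons counts as its own mind, an
# N-neuron brain contains on the order of 2^N conscious subsystems. Those numbers
# overflow ordinary floats, so compare their orders of magnitude (log10) instead.
def log10_of_2_to_the(n: int) -> float:
    return n * math.log10(2)  # log10(2^n) = n * log10(2)

human_neurons = 86_000_000_000       # ~86 billion neurons in a human brain
elephant_neurons = 257_000_000_000   # ~257 billion neurons in an African elephant
humans_on_earth = 8_000_000_000      # ~8 billion people

one_human = log10_of_2_to_the(human_neurons)
all_humans = one_human + math.log10(humans_on_earth)  # multiplying by 8 billion adds only ~10
one_elephant = log10_of_2_to_the(elephant_neurons)

print(f"one human:    ~10^{one_human:.4g} subsystems")    # ~10^(2.6e10)
print(f"all humans:   ~10^{all_humans:.4g} subsystems")   # still ~10^(2.6e10)
print(f"one elephant: ~10^{one_elephant:.4g} subsystems") # ~10^(7.7e10), a far larger exponent
```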
And you know what’s more than 2^86 billion (or even 2^257 billion)? Infinity. That’s how many years religious people tend to think we’ll spend in hell. So maybe, if you give non-zero credence to some religion being correct, you should spend all your time evangelizing.
One could keep going. The crazy train has many destinations. There are countless ways that our actions might affect unfathomably large numbers of others. If it just had one stop, you could simply do what was astronomically important on that theory. But if it has many, often pointing in completely opposite directions, then it’s hard to get off the crazy train at any particular stop.
Fortunately, I think there is a nice solution: you should just be a Longtermist.
Longtermists are those who think we should be doing a lot more to make the far future go well. Mostly, this involves reducing existential risks, because if the species goes extinct, then we won’t be able to bring about lots of future value. It also involves trying to steer institutions and values to make the future better. A future in which people have better, more humane, and more sentientist values is one that’s a lot likelier to contain astronomical amounts of value.
The future lasts billions of years, and it could sustain staggeringly large numbers of future people. For anything you could possibly imagine being worth promoting, we’ll be in a much better position to promote it in the far future. If atoms matter, we’ll be in a better position to promote atomic welfare in the far future than we are today. If the number of sub-minds grows exponentially with the number of neurons, future people with godlike technology will be in an ideal position to make super happy superminds with staggeringly large numbers of neurons.
If God gives eternal life to some people, then people probably have on average infinitely good total existences, and so increasing the number of future people, by being a Longtermist, is infinitely valuable. In fact, because the future could contain, according to Bostrom’s estimate, 10^52 happy people, it might be that each dollar given to Longtermist organizations enables on the order of 10^30 extra lives—and those extra lives have on average infinitely good total existences.
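To see how a figure like that could come out of the arithmetic, here’s a toy sketch. The 10^52 is Bostrom’s estimate cited above; the per-dollar probability bump is a purely hypothetical placeholder chosen so the output lands on the order of magnitude quoted above, not a number from any source:

```python
# Toy "lives per dollar" arithmetic. The 10^52 is Bostrom's estimate cited in the text;
# the per-dollar probability increase is a hypothetical placeholder, not a real estimate.
potential_future_lives = 10**52
p_increase_per_dollar = 1e-22   # hypothetical: how much one dollar raises the odds of that future

expected_extra_lives_per_dollar = potential_future_lives * p_increase_per_dollar
print(f"{expected_extra_lives_per_dollar:.0e}")  # 1e+30 expected extra lives per dollar
```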
This holds most of all for all the crazy conclusions we haven’t thought of. Whatever is most important—whatever true conclusions have astronomical stakes—will be easier to affect in the far future. No matter what has value, only a tiny slice of it exists today. As Will MacAskill says in What We Owe The Future:
> But now imagine that you live all future lives, too. Your life, we hope, would be just beginning. Even if humanity lasts only as long as the typical mammalian species (one million years), and even if the world population falls to a tenth of its current size, 99.5 percent of your life would still be ahead of you. On the scale of a typical human life, you in the present would be just five months old. And if humanity survived longer than a typical mammalian species—for the hundreds of millions of years remaining until the earth is no longer habitable, or the tens of trillions remaining until the last stars burn out—your four trillion years of life would be like the first blinking seconds out of the womb.
If you think about the future of humanity as like the life of a person, then Longtermism starts to look really obvious. Of course the stuff after the first few seconds would matter more than the first few seconds. Of course the first five months matter less than what comes later.
The future could have unimaginably awesome technology of a sort that we in the present can barely grok. If we have good values and godlike technology, then we’ll be in a better state to address whatever it is that ultimately matters most. To those who care about conclusions with astronomical stakes, then, steering the world towards such a future should be the top priority.


What I don’t get about the whole “giving to GiveWell to reduce habitat” argument is that there are infinitely better ways to reduce habitat.
You could try to get right-wing Brazilian politicians elected, for example. Random acts of arson would probably also have higher returns. You can buy chemicals and dump them in creeks and lakes at night. If you want to stay away from illegal things, bulk-buy the most polluting consumer product you can find. You can lobby against environmental regulations in poor countries.
I sincerely doubt giving to GiveWell beats all of the above in reduced habitat/$spent. It seems a bit like a cop-out.