Important Insights
30 pillars of my worldview
A lot of key insights can be distilled into just a few sentences. Here are some of the most important things I think I’ve learned in my life.
In the modern era, we all agree past societies made enormous moral errors. We should expect that we are making such errors too, and that our errors lie outside the Overton window.
Human life has gotten much better over time on every objective metric.
Suffering is a bad thing because of how it feels. Sources of significant suffering are thus a big deal, even if suffering isn’t the only bad thing in the world. This isn’t just something utilitarians should be concerned about.
Humanity has built vast torture facilities called factory farms. That’s where almost all meat comes from, and they probably cause more suffering every few years than all the suffering in human history. This is extremely bad. It sounds hyperbolic to call them torture facilities, but if we treated humans the way we treat animals on factory farms—say, locking them in cages, sometimes grabbing them by their legs and beating them to death against concrete, so that their blood and brains coat the floor, gassing them, forcing them to inhale other inmates’ feces all day, and castrating them without anesthetic—no other description would seem adequate.
But even the suffering of factory farms is dwarfed by wild animal suffering. So long as we buy that suffering is a bad thing, we should support efforts to reduce wild animal suffering. Compared to all the suffering in nature, human misery is a rounding error.
Saving human lives is amazingly cheap. It costs only a few thousand dollars to save someone’s life, less than a cheap car. More people should do this.
Improving animal lives is also amazingly cheap. A dollar given to effective animal charities can keep animals out of cages for years, or can spare about 14,000 shrimp from a painful death (21,000 if you donate to the recent fundraiser).
For this reason, by giving a hundred bucks, you can spare over a million shrimp from the pain of death.
There’s a movement built around operationalizing these insights concerning how much good we can do. It’s called effective altruism. It’s great! More people should be effective altruists! Growing the movement is very important, given that it’s saved about 50,000 lives a year, despite being pretty small, and improved conditions for huge numbers of animals. Effective altruists are also trying to reduce existential risks, which is very valuable given…
The far future could have way more people than are around today, so that even by very conservative estimates, slightly improving the trajectory of the far future is better than improving the present—even by a lot. Longtermism, the idea that we should be doing a lot more to make sure the future goes well, is extremely hard to argue against.
Existential threats are alarmingly high. Some expert forecasts put the chance we’ll go extinct this century at around 1 in 10, mostly from AI or biotechnology.
It’s pretty likely that some time in the next few decades, we’ll get economic growth more rapid than has ever before been seen in human history, spurred by AI development. This will pose many significant challenges.
It’s probably possible to create digital consciousness. If so, it would be possible to create amazingly large numbers of digital minds. Thus, almost every person in the future, in expectation, will be digital. Biological organisms are, in expectation, a rounding error compared to the number of happy digital minds you could create if you harnessed the energy from space. How the future goes will depend mostly on how it goes for digital people.
Our intuitions about small, simple animals are wildly distorted. It’s pretty plausible that they can feel pain, and there’s just no reason to think with much confidence that they only feel minimal pain. Given that the average person affects millions of insects every year, this is a hugely important aspect of our actions.
Social norms are often wildly out of accordance with moral truth. We often think things are very horrible when they’re just a bit bad, and we think things are mildly bad when they’re very horrible. Your intuitive ick reaction to infractions is not a reliable gauge of anything.
Bayesianism is just correct! It’s the way you should think through decisions. If your reasoning doesn’t approximate Bayesian reasoning, then something has gone wrong. It’s also helpful, when weighing evidence, to ask how exactly your reasoning approximates a Bayesian update.
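To make the point above concrete, here is a minimal sketch (my example, not from the post) of a single Bayesian update: a hypothesis with a 1% prior, confronted with a test that catches true cases 90% of the time but false-alarms 5% of the time.

```python
def bayes_update(prior: float, likelihood: float, false_positive: float) -> float:
    """P(H | E) via Bayes' theorem: P(E|H)P(H) / P(E)."""
    # Total probability of the evidence, whether or not H is true.
    evidence = likelihood * prior + false_positive * (1 - prior)
    return likelihood * prior / evidence

# A 1%-prior hypothesis, tested with 90% sensitivity and a 5% false-positive rate:
posterior = bayes_update(prior=0.01, likelihood=0.9, false_positive=0.05)
print(round(posterior, 3))  # → 0.154
```

Even strong-seeming evidence leaves the posterior under 16% here, which is the kind of result intuition tends to miss and explicit Bayesian reasoning catches.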
Our moral intuitions are hugely influenced by our affective reactions. As a result, our moral intuitions are often pretty okay, but very bad when there’s a disharmony between our intuitive gauge of how bad something is and how bad it really is. Scope neglect is one example of this, and it leads to bad ethical judgments. Our intuitions tend to better reflect how much society looks down on various practices than how bad they really are.
You should try to have a rough gauge of how many orders of magnitude are at stake. It doesn’t have to be precise, but people often err in thinking that cutting the expected value of a course of action by a factor of two or even five makes much difference when that expected value is enormous.
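A toy calculation (my numbers, purely illustrative) shows why: when two options’ expected values differ by many orders of magnitude, dividing the larger one by two or even five never changes which option wins.

```python
# Made-up stakes, differing by six orders of magnitude:
ev_ordinary = 1e3   # expected value of an ordinary option
ev_enormous = 1e9   # expected value of a very-high-stakes option

for factor in (1, 2, 5):
    still_wins = ev_enormous / factor > ev_ordinary
    print(f"cut by {factor}x: high-stakes option still wins? {still_wins}")
```

All three lines print `True`: a factor-of-five haircut is noise next to a million-fold gap.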
The world is very morally weird, and this should make us sometimes hold to surprising-sounding ethical judgments.
Animals in nature mostly live a few days or weeks and then die painfully. It’s likely that their lives are mostly pretty bad.
Philosophy is really great, and philosophers are way less confused than non-philosophers (note: I’m obviously only talking about analytic philosophers). In contrast, most people are extremely irrational, especially when it comes to politics. Most people are also very confused on lots of topics.
Of the controversial utilitarian judgments, the easiest to defend concern aggregation (some number of dust specks is worse than a torture), the non-existence of rights, the non-existence of incomparability, and the claim that it’s very good to create well-off people. The harder ones to defend are the non-existence of desert and the wrongness of partiality (I actually think there are good arguments for that one, but it’s just so counterintuitive). I also no longer think there is a decisive case for hedonism about well-being.
God probably exists. There are many good arguments for theism from anthropics, fine-tuning, psychophysical harmony, moral knowledge, and more! The problem of evil is a good argument, but it is outweighed by all the arguments for theism, especially given some good theodicies.
Consciousness isn’t physical. Physical stuff is exhaustively characterizable in terms of its structure and function, but the what-it’s-likeness of experience isn’t about structure and function. Physicalism also has absurd consequences concerning knowledge of what experiences are like.
When smart people disagree with you, you should rarely be confident that you’re right and they’re wrong, even if you feel like they didn’t rebut your arguments adequately. They obviously feel the same way about you. Almost everyone should be less confident in their views on controversial issues and should recognize that the world is complicated. People should defer way more!
There’s very likely a giant multiverse, because if there were just a single universe, with a few billion people, it’s pretty unlikely you’d be in it! The standard explanations of fine-tuning also entail a multiverse. The self-indication assumption is true, and thirding in Sleeping Beauty is correct.
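The thirder answer can be illustrated with a quick frequency simulation (my sketch, not an argument from the post): heads means Beauty is woken once, tails means twice, and we ask what fraction of awakenings occur after heads.

```python
import random

def sleeping_beauty(trials: int, seed: int = 0) -> float:
    """Fraction of awakenings at which the coin landed heads."""
    rng = random.Random(seed)
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        if rng.random() < 0.5:   # heads: one awakening
            heads_awakenings += 1
            total_awakenings += 1
        else:                    # tails: two awakenings
            total_awakenings += 2
    return heads_awakenings / total_awakenings

print(sleeping_beauty(100_000))  # close to 1/3, the thirder answer
```

Whether this frequency is what Beauty’s credence should track is exactly what halfers dispute, but the simulation at least shows what the thirder number is counting.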
There are extremely strong arguments for thinking that the way to make decisions is by maximizing expected value (note: while the linked post gives arguments for fanaticism, most of them vindicate EV maximization more broadly).
Infinite ethics/anthropics/everything else having to do with the infinite is weird as hell and poses much harder problems than anything else in philosophy.
Social media is very bad! You should probably spend less time on it, unless it’s Substack :) and the time you spend there is reading my blog. Meditation, adequate sleep, and exercise are very good, and you should do more of those things.
I think people often put ideology ahead of what’s ultimately valuable. People should think about themselves, very explicitly, as serving the good, and see ideology only as a means towards that end. I find the attitude in To Lucasta, Going to the Wars to be the right one:
Yet this inconstancy is such
As you too shall adore;
I could not love thee (Dear) so much,
Lov’d I not Honour more.


Nice piece, but a small correction: in the XPT, the median expert put the chance of extinction by 2100 at 6%, and the median superforecaster put it at 1%, so 10% seems on the higher end of expert forecasts. Probably you're referring to Ord, who has x-risk at 1/6, but I think he is much higher than most forecasters.
I admire your consistency and willingness to hammer down to a final logical conclusion, even if some of those conclusions appear absurd to a lot of people. Forcing skeptical listeners to think hard about why they think a logically deduced conclusion is wrong, and to revisit their own logic, makes us all smarter in the end, whoever “wins” the argument.