I think utilitarianism is really obvious. I’ve thought this for basically my entire life. I remember that even in my middle-school extreme-libertarian phase, I was never moved at all by the ethical arguments for libertarianism that don’t appeal to its good consequences—arguments that claim, for example, that taxation is impermissible because it is theft, even if it has good outcomes. When I thought about why that was, my answer was something like, “call it theft if you like, but who cares if it’s theft as long as it makes lives better in the aggregate?” I concluded that utilitarianism was right before I’d ever heard the word utilitarianism.
But as I grew older and advanced in the world, it became clear that other people—even very smart people—are generally not utilitarians. Derek Parfit, at least, was not a hedonistic utilitarian, and neither is Richard Chappell. Michael Huemer isn’t a utilitarian at all—nor are most other philosophers, most of whom aren’t even consequentialists. My smartest friend by far thinks utilitarianism is downright crazy. So being confident in hedonistic utilitarianism seems monumentally arrogant—it amounts to saying that most of the most consistently brilliant and correct philosophers you’ve read are almost certainly wrong.
For these reasons, I only give about two-to-one odds to hedonistic utilitarianism being correct.
But two-to-one odds are still alarmingly high. Most of the people I know who are very smart and consistently correct disagree with me about this, and nonetheless I give two-to-one odds to them all being wrong. But I don’t think this is a huge problem—I think that in some of these cases, it’s okay to double down on one’s views, even if lots of smart people disagree. As Richard says
Rather than seeing this as some deep failure of "analytic philosophy" (as if different training would somehow break the logical symmetry between modus ponens and modus tollens), I'd encourage clear-eyed acceptance of this reality, combined with default trust or optimism about your own philosophical projects. After all, unlike all those fools who disagree with you, YOU'RE beginning from roughly the right starting points, right? ;-)
There are a few philosophical issues on which I retain very high confidence relative to the background of philosophers’ views: non-physicalism about consciousness, moral realism, utilitarianism, hedonism about well-being, and atheism. Here, I’ll explain why I think this—why I retain a very high degree of confidence in my extremely controversial views.
There are some philosophical disputes that I can’t make heads or tails of. I can understand what’s being argued about, but I don’t feel as though I have some crucial, fundamental insight that others are missing. A good example here is mereology—I think I lean towards mereological universalism, or maybe nihilism, but that leaning comes just from comparing the theoretical virtues of the views. Likewise with basically all of politics. As a result, I have no political views with more than roughly 60% confidence—it’s just too difficult to have an informed political opinion.
But when it comes to the issues that I do have strong opinions on, I feel as though there’s some crucial, ineffable[1] insight that I have and that the dissenters are just missing. It’s not that they weighed up the evidence and came to a merely incorrect conclusion—it’s that, in a very fundamental sense, they are missing the point. And this is a point I don’t think I can convey. I can stack up the evidence, comparing the theories as explanations of various facts about the world, but I can’t convey, in a simple and straightforward way, the intuitions I have about many of these cases.
Take the example of a god. It’s clear as day to me that god doesn’t exist—that a world in which nearly every being that will ever live dies horribly within its first few days of life was not created by a perfectly benevolent creator. As Ben Burgis has suggested, believing in god is like believing in a giant that paints the sky yellow every day—there are lots of evidentiary problems with the view, the biggest of which is that the sky isn’t yellow. And the excuses that theists come up with to explain why this world of such gratuitous misery, suffering, despair, and death is perfectly compatible with god strike me as utterly absurd. It’s easy to come up with a dozen arguments against any obvious truth—hence the numerous clever arguments for skepticism about everything. But despite their panoply of arguments, there’s a deep sense in which I feel that I grasp the falsity of the various theodicies. Just as it’s sometimes difficult to argue a nihilist out of their views, so too is it difficult to argue a theist out of their obviously false views. Nonetheless, the mere existence of theists—just like the existence of nihilists—shouldn’t shake one’s recognition of the obvious absurdity of the beliefs.
And I do take disagreement seriously. I’d be far more confident in the falsity of theism if it weren’t for all the smart theists. The best argument for theism is the argument from smart theists—more persuasive even than the argument from psychophysical harmony.
Nevertheless, there are a few reasons why they don’t move me more. One of them is the fundamental, unconveyable insight I feel I have: I think that if they had that insight, if they could really read my mind, many of them would be moved. It would, of course, be monumentally arrogant to think that all of one’s views are like that, but it’s not crazy to think that just some of them are.
So I start out pretty confident that I’m right based on the insights that I have. The inside view is strongly atheistic. But when I take the outside view, it seems to point the same way. It turns out that:
Most philosophers are atheists.
All of the smartest and most consistently correct philosophers I know—with the exception of Dustin Crummett—are atheists.
Studying philosophy of religion makes people less religious.
When I consider the arguments for god, with the exception of the argument from psychophysical harmony, they’re mostly unmoving.
When you compare the best of the theists (Swinburne, for example) to the best of the atheists (Mackie and Sobel), it’s not even close—the atheists’ arguments are just better.
There’s a powerful debunking account of theism: most theists had the belief drilled into them from a young age.
We know that most religious beliefs throughout history have been false, and yet have still been believed.
All of these make it so that the core insight I have about the incompatibility of evil and theism moves me to be almost certain of atheism. This is especially so when you read the holy texts and find it flagrantly obvious that they weren’t written by a perfect being. As Huemer says
A Nigerian prince sends you an email, full of spelling and grammatical errors, asking for your bank account details so he can send you $40 million. You know nothing about Nigerian politics or how international bank transfers work, but … come on. This message just looks exactly like what a scammer would say. I don’t know what an actual Nigerian prince would say, but ... not that.
Someone in 2016 said, “Donald Trump is a poor man’s idea of a rich man.” A poor man might think, “If I were rich, I would have gold all over my house. I’d fire people rudely and talk shit to people all the time, ha! And I’d put my name in giant letters on a huge building!” But this is not what actual rich people (other than Trump) are like.
Analogously, the God of the Bible and the Koran is a primitive human’s idea of a great being. It’s what the primitive human imagines he would be like if he had ultimate power. “I’d make everyone worship me! If anyone didn’t want to, I’d torture him. Forever. I’d make my tribe defeat our enemies. And I’d kill the gays, because--gross!”
But that is not what an actual supreme being would be like. He would not be like a human drunk on power.
Imagine looking down at two ant colonies fighting in the dirt. You would not pick a favored colony and then start stomping on the other colony, unless you’re a child. You would not become super-concerned about exactly how the ants are doing things in their colony, whether they’re reproducing in the right way, whether the ants believe you exist, or whether they are showing respect for you.
If there is a god, we are to God as the ants are to us.
One other point—there’s only one area besides philosophy in which I feel I have the same type of unique insight, and it’s an area where, when I’ve checked my intuitions, they’ve turned out to be correct. That area is economics. I think my intuitions about economics are quite good—economics has always made sense to me, and when learning it, I’ve usually been able to predict the lesson in advance. For example, it was immediately obvious to me why free trade is basically good, why monopolists don’t have a supply curve, why a negative income tax is generally better than other forms of welfare, and so on. My dad, who has a master’s in economics, has frequently claimed that I think like an economist (which is, of course, the type of thing that parents say, but nonetheless…). I’ve been right in a significant number of the disputes I’ve had with him about economics, as he’s later admitted. If I weren’t rubbish at math, I’d probably study economics. This will sound like boasting, but remember, I’m trying to explain why I trust my judgments in a limited subset of domains over the judgment of other very smart people, so I’ll have to appeal to the accuracy of my judgment in similar domains.
It’s easy for a person to think that they have some deep, crucial, brilliant insight. But there are only a few cases in which I’ve thought this, and the other crucial area has been one where I think the evidence has generally confirmed my hubris. So I think it’s reasonable for me to trust my judgment in domains that I’ve studied in great detail for years, where I feel as though there’s some great insight I have that others are missing.
This is an example of agent-relative evidence: evidence that an agent has but that an outside observer lacks. Suppose I were accused of committing a crime that I know I didn’t commit, and suppose the evidence against me were good, good enough to convince Huemer, Chappell, and my smartest friend that I did it. Nonetheless, this shouldn’t move me to think that I actually committed the crime. If I have special evidence that third parties don’t, then I shouldn’t be moved by the fact that third parties disagree with me—and they shouldn’t be moved by my insistence that I have special evidence.
In a similar way, even if I can’t convince my friend Amos that theism is false, I think I’m justified in believing that it is. And similarly, he’s justified in thinking that it’s true. We are forced to see the world through our own eyes, and short of abandoning all beliefs, we’ll sometimes have to hold beliefs that other very smart people disagree with. As Scott Alexander says
Philosophy is hard. It’s not just hard. It’s hard in a way such that it seems easy and obvious to each individual person involved. This is not just an Eliezer Yudkowsky problem, or a Topher Hallquist problem – I myself may have previously said something along the lines of that anybody who isn’t a consequentialist needs to have their head examined. It’s hard in the same way politics is hard, where it seems like astounding hubris to call yourself a liberal when some of the brightest minds in history have been conservative, and insane overconfidence to call yourself a conservative when some of civilization’s greatest geniuses were liberals. Nevertheless, this is something everybody does. I do it. Eliezer Yudkowsky does it. Even Topher Hallquist does it. All we can offer in our own defense is to say, with Quine, “to believe something is to believe that it is true”. If we are wise people, we don’t try to use force to push our beliefs on others. If we are very wise, we don’t even insult and dehumanize those whom we disagree with. But we are allowed to believe that our beliefs are true. When Hallquist condemns Yudkowsky for doing it, it risks crossing the line into an isolated demand for rigor.
I think that to trust your judgment over the judgment of other smart people, you have to have some type of inside evidence. There has to be some insight that you think other people are missing, one that doesn’t involve them just being dumb. This is one of the reasons why I think people should not be very confident about politics. In politics, unless you’re Noam Chomsky or Milton Friedman, it’s unlikely that you actually have some deeply important and crucial insight that others are missing. You have to actually think that your reasoning abilities are better than those of the smart people who disagree with you—otherwise, you should not be super confident.
But it’s very easy to just feel like you have special evidence. No doubt crazy wackos like Hoppe think that they have some core, crucial, special, ineffable insight about argumentation ethics. So, on top of this, your insight has to be something that you genuinely think others can’t get merely by being clever and thinking about it—it has to be a special intuition. Additionally, you should have some plausible explanation of how so many other people go wrong, combined with at least some smart and reliable people agreeing with you, combined with some reason to think that your insights are likely to be correct. Finally, you should look for outside-view evidence that you’re not following your own reasoning down a crazy rabbit hole.
I think I have all of this in the case of most of these beliefs. Let’s give the example of utilitarianism.
There’s a pretty plausible explanation of how people go wrong in their opposition to utilitarianism. The common arc people follow with respect to utilitarianism seems to be this:
Hear about utilitarianism.
Think it sounds plausible.
Hear about a putative counterexample (e.g., organ harvesting).
Think “hmm, that seems repugnant.”
Conclude utilitarianism is false.
But this is a demonstrably mistaken moral methodology, and when you really dig into the alleged counterexamples, it becomes abundantly clear that none of them withstand scrutiny. The trouble is that by the time people start carefully considering the putative counterexamples, they’ve already spent years as non-utilitarians and are motivated not to accept the view. And utilitarians all too often make bad arguments for utilitarianism or fail to appeal to moral intuitions. So it’s not surprising that utilitarianism is not ascendant in the academy.
Just as religious indoctrination can explain why people have false religious beliefs, so too can this process explain why people have false non-utilitarian beliefs. This is, while not overwhelmingly decisive, all-in-all a very plausible explanation that serves to “debunk” various other views. It makes it so that I think I can be at least somewhat confident in my utilitarianism, especially because, when one really reflects, the arguments for utilitarianism are so overwhelmingly decisive.
I started out believing in the truth of utilitarianism when I was about 12, and nearly all novel philosophical views held by 12-year-olds are wildly implausible. The mere fact that utilitarianism is, at the very least, not a crazy view and is held by many smart people shows that I have some very strong outside-view evidence for it. If you come up with a controversial philosophical view when you’re twelve, before you’ve talked with anyone about it, the correct outside view is to be almost certain that it’s false—to give it maybe .00001 odds. The fact that utilitarianism is defended by other smart people and is at least not a crazy view means that the outside view would give it maybe 20% odds.
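To put rough numbers on this (reading the figures above loosely as odds rather than probabilities), the jump from .00001 to roughly 20% is an update by a factor of about 20,000:

$$\text{prior odds} \approx 0.00001, \qquad \text{posterior odds} \approx 0.00001 \times 20{,}000 = 0.2, \qquad P = \frac{0.2}{1 + 0.2} \approx 17\%,$$

which is in the ballpark of the 20% figure.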
If you start with a hypothesis that sounds plausible to you from the inside, and the outside evidence then pushes you to update in its favor by a factor of 20,000, while an outside observer going only on that outside evidence should give it about 20% odds, then you, who also have the inside evidence, should give it more than 20% odds. This is relevantly analogous to the following case given by Huemer
Here’s a type of occurrence you sometimes hear about. Sue is scheduled to get on a plane to fly somewhere. Before the flight, she finds herself having a bad feeling about it. For no known reason, she feels strongly disinclined to get on the flight. She doesn’t go. Later, she learns that that flight crashed. If she had gotten on it as scheduled, she would have been killed.
That’s a fictional instance of a real genre of stories. For some real-life stories like this, see: https://listverse.com/2014/04/28/10-unnerving-premonitions-that-foretold-disaster/
What inferences, if any, should we draw from such an event?
I have an epistemological puzzle about this. Think about how Sue would react to this event, compared to how third parties would react. Sue is much more likely to conclude that there is precognition. If Sue ever has a bad feeling about a flight again (or about anything else), she will listen to that feeling. But when you hear about what happened to Sue, you are much more likely to say, “Oh, it’s just coincidence. Sometimes people have bad feelings, sometimes planes crash; every once in a while, the two types of events randomly coincide.” If you later have a bad feeling about a flight, you’ll probably get on it anyway. This is commonly understood to be the rational response.
This looks like an example of agent-centered evidence.
…
Maybe there are agent-centered epistemic norms, or agent-centered pieces of evidence. Evidence is agent-centered when the very same fact has different evidential significance for different subjects, even when both have the same degree of confidence in that fact and the same background information. The precognition case looks like it might be such a case: the evidence is that Sue had a premonition about the flight, and then the plane crashed. For Sue, that’s pretty strong evidence of precognition. We would completely understand Sue’s resolution to never get on a plane that she has a bad feeling about; this would not seem unreasonable at all. But for third parties, it’s not very convincing. Is it?
Is this just a straight case of agent-relativity of evidence? Why might the evidence be more rationally persuasive to Sue than it is to outside observers?
Here’s one explanation. When outside observers learn about it, they should think,
“This event is a biased sample from the class of stuff that happens. The reason I heard about this story is that something weird happened – if Sue had a premonition that was completely wrong, then the story wouldn’t get repeated and I wouldn’t have heard about it. Furthermore, since there have been billions of people in the world, I should initially expect that some things like this would have happened, even if there were no precognition or ESP.”
But when Sue herself experiences the event, she shouldn’t say that. To her, her own life is not a biased sample. If nothing happened, or the premonition was wrong, Sue would still know about it. It’s not as if she searched through lots of people’s lives looking for stories like this; she only has the one life that she had a chance to directly experience.
That seems to make sense. Two people can get “the same evidence” but by a different evidence-collection method, and of course that can affect the significance of the evidence. (I put “the same evidence” in quotes because you might say that the information about how the evidence was collected is part of your evidence, so it's not really the same.)
There is still something weird about this, though, because Sue knows how the situation looks to third parties, and they know how the situation looks to her. Both seemingly know the same facts. The third parties know that Sue’s experience is not a biased sample to her. She knows that her experience is, to other people, just the experience of one among the 7 billion people on earth, and not particularly remarkable to them.
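To illustrate Huemer’s base-rate point with purely made-up numbers (the rates below are illustrative assumptions, not estimates): suppose each of a billion fliers has a one-in-a-hundred-million lifetime chance of a vivid premonition followed by an avoided crash. The expected number of such stories worldwide is then

$$\mathbb{E}[\text{coincidences}] = N \cdot p = 10^{9} \times 10^{-8} = 10,$$

so a handful of Sue-like stories is roughly what chance alone predicts across the whole population. For Sue herself, though, N = 1: her own life isn’t a sample selected for its spookiness.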
Other people should regard my intuitions about utilitarianism as a biased sample, but I shouldn’t: for me, they’re not a biased sample, they’re just my own intuitions. This, I think, is a good example of the kind of case in which it’s okay to believe things even when smart experts disagree. If you have, as I do here, somewhat private evidence, in the sense that your sample is unbiased for you, then that gives you some reason to trust your beliefs even when other smart people disagree with you. This serves both as some outside-view evidence and as a reason to think that I’m not going horribly wrong. So I think my high confidence in utilitarianism is justified.
Also, I can talk backward, and that has to count for something when it comes to proving the accuracy of moral intuitions 😛.
[1] Ineffable in the sense that I couldn’t eff it.