Eliezer has a fun article titled “Making Beliefs Pay Rent (in Anticipated Experiences).” The basic idea is that humans often believe things that are false. One way of catching our false beliefs is to check them against the external world. When deciding whether to believe something, we should ask what we’d expect to observe if it were true, and then see whether that is what we actually observe. As Eliezer says:
When you argue a seemingly factual question, always keep in mind which difference of anticipation you are arguing about. If you can’t find the difference of anticipation, you’re probably arguing about labels in your belief network—or even worse, floating beliefs, barnacles on your network. If you don’t know what experiences are implied by Wulky Wilkinsen’s writing being retropositional, you can go on arguing forever.
Above all, don’t ask what to believe—ask what to anticipate. Every question of belief should flow from a question of anticipation, and that question of anticipation should be the center of the inquiry. Every guess of belief should begin by flowing to a specific guess of anticipation, and should continue to pay rent in future anticipations. If a belief turns deadbeat, evict it.
I think this is generally a pretty decent heuristic. Most beliefs raise the probability of some events in the world, and if your beliefs constantly conflict with the facts of the world—if they lead you to anticipate that both X and Y will happen, each with 90% confidence, and neither X nor Y happens—that is very strong evidence against the reliability of those beliefs.
But there’s a world of difference between “this is a useful heuristic” and “this is required for any justified belief.” Some beliefs do not need to pay rent in anticipated experience—and some beliefs simply do not. Consider the following examples:
In all possible worlds, the mathematical laws hold. So if you hop over to another universe, the mathematical laws will be the same.
Married bachelors are not merely nonexistent but the type of thing that couldn’t exist, even in principle.
There are nebulas that we will never observe—they do not just mysteriously stop existing when we can no longer see them.
It’s wrong to torture other people.
We are not brains in vats.
Note that each of these beliefs has the same empirical content as its negation: if any of them were false, we could not tell empirically. This is because beliefs are broad models of how some feature of reality operates, and we cannot directly observe every feature of reality. Consequently, beliefs about features of reality that we will never be able to observe do not need to pay rent in anticipated experience.
This points to a broader problem I have with rationalists. They seem to confuse “X is a reasoning error that people often make that can be corrected by Y” with “therefore, all beliefs should be rejected unless they meet criterion Y.” As a consequence, they overgeneralize from imperfect heuristics, such as the heuristic that nearly all views diverging from materialism are false, and end up rejecting the best views about consciousness. This even led, during my brief back-and-forth with Eliezer, to his declaring that interactionist dualism was no different from physicalism because it made no different empirical predictions about what we currently observe. Merely pointing out that people mistakenly hold beliefs of type X does not imply that all beliefs of type X are unjustified, and it does not excuse ignoring the powerful arguments for some specific belief of type X.
Beliefs can pay rent without directly resulting in anticipated experiences. They might instead figure only into broader networks of thoughts and concepts that form a unified and internally consistent whole—a whole that pays (or tends to pay) rent. Perhaps one might say, then, that models or worldviews should pay rent, even if they have some spandrels that don’t, themselves, pay rent.
I once saw a tweet that endorsed “verificationism, but as a theory of whether I care if a sentence is true.” I haven’t clicked through to read Eliezer’s article, but it sounds like he might be sympathetic to that view.