Against Yudkowsky's Implausible Position About Animal Consciousness And For Greater Modesty About Philosophical Positions Among Rationalists
Pigs are conscious
1 Introduction
I’ve been to a few EA Globals. They were a lot of fun, and I met mostly cool people. But when I tell various people in the rationalist community that I’m a non-physicalist—a totally standard position in the philosophy of mind literature, adopted by some of the world’s leading philosophers of mind, including David Chalmers, who is a genius—people act like I declared that I believe in Bigfoot or a flat earth. Rationalism is the only place where you get more stigma for endorsing a technical position in the philosophy of mind than for saying you’re a polyamorous furry. Rationalists seem to have total confidence in physicalism about consciousness, despite the many objections to it.
I like Rationalists a lot. They are much better at acquiring true beliefs than most normal people. But unfortunately, they are much worse at acquiring true beliefs than they think they are. This often results in absurd overconfidence in very tenuous views—views for which there are not decisive arguments on either side.
I encounter this a lot. When you mention that you’re a moral realist at a Bay Area house party, the gaggle of rationalists will express below 1% credence in moral realism. They often do so despite a very basic unfamiliarity with the literature. Very often, they regurgitate poor objections that demonstrate unfamiliarity with the subject matter—for example, they’ll note that intuitions are sometimes inaccurate, as if that refutes phenomenal conservatism.
A pretty extreme example of this occurred in this Facebook debate about animal consciousness. Eliezer Yudkowsky, head honcho of the rationalists, expressed his view that pigs and almost all animals are almost certainly not conscious. Why is this? Well, as he says:
However, my theory of mind also says that the naive theory of mind is very wrong, and suggests that a pig does not have a more-simplified form of tangible experiences. My model says that certain types of reflectivity are critical to being something it is like something to be. The model of a pig as having pain that is like yours, but simpler, is wrong. The pig does have cognitive algorithms similar to the ones that impinge upon your own self-awareness as emotions, but without the reflective self-awareness that creates someone to listen to it.
Okay, so on this view, one needs to have reflective processes in order to be conscious. One’s brain has to model itself to be conscious. This doesn’t sound plausible to me, but perhaps if there’s overwhelming neuroscientific evidence, it’s worth accepting the view. And this view implies that pigs aren’t conscious, so Yudkowsky infers that they are not conscious.
This seems to me to be the wrong approach. It’s actually incredibly difficult to adjudicate between the different theories of consciousness. It makes sense to gather evidence for and against the consciousness of particular creatures, rather than starting with a general theory and using it to settle particular cases. If your model says that pigs aren’t conscious, that seems to be a problem with your model.
2 Mammals feel pain
I won’t go too in-depth here, but let’s just briefly review the evidence that mammals, at the very least, feel pain. This evidence is sufficiently strong that, as the SEP page on animal consciousness notes, “the position that all mammals are conscious is widely agreed upon among scientists who express views on the distribution of consciousness.” The SEP page references two papers, one by Jaak Panksepp (awesome name!) and the other by Seth, Baars, and Edelman.
Let’s start with the Panksepp paper. It lays out the basic methodology, which involves looking at the parts of the brain that are necessary and sufficient for consciousness. Researchers identify particular brain regions that are active when we’re conscious—and that correlate with particular mental states—and inactive when we’re not conscious. They then look at the brains of other mammals and find that these features are ubiquitous: all mammals have, in their brains, the things that we know make us conscious. In addition, mammals act physically like we do when we’re in pain—they scream, they cry, their heart rates increase under stressful stimuli, and they make cost-benefit tradeoffs, accepting the risk of negative stimuli for greater rewards. Sure looks like they’re conscious.
Specifically, Panksepp endorses a “psycho-neuro-ethological ‘triangulation’” approach. The paper is filled with big phrases like that. What it means is this: look at what happens in the brain when we feel certain emotions, and observe that in humans those emotions cause certain behaviors—being happy, for example, makes us more playful. Then look at mammal brains and see that they have the same basic brain structures, and that those structures produce the same physical reactions—in the happiness example, making the animals more playful. If animals have the same basic neural structures that we have during certain experiences, and those structures are associated with the same physical states that accompany those conscious states in humans, we can infer that the animals are having similar conscious states. If our brain looks like a duck’s brain when we have some experience, and we act like ducks do when they’re in a comparable brain state, we should guess that ducks are having a similar experience. (I know we’re talking about mammals here, but I couldn’t resist the “looks like a duck, talks like a duck” joke.)
If a pig has a brain state that resembles ours when we are happy, tries to get things that make it happy, and produces the same neurological responses that we do when we’re happy, we should infer that pigs are not mindless automatons, but are, in fact, happy.
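To see why this triangulation adds up, here is a minimal, illustrative Bayesian sketch. The prior and likelihoods below are made-up numbers of my own, not figures from Panksepp’s paper; the point is only that several modest, independent lines of evidence—neural, behavioral, pharmacological—quickly push the posterior toward consciousness.

```python
# Toy Bayesian update on "this pig is conscious."
# All numbers are illustrative assumptions, not estimates from the literature.

prior = 0.5  # start agnostic

# Each line of evidence: (P(observation | conscious), P(observation | not conscious))
evidence = {
    "homologous brain structures": (0.9, 0.4),
    "pain behavior (screaming, avoidance, risk tradeoffs)": (0.9, 0.3),
    "human-like opioid responses": (0.8, 0.3),
}

posterior = prior
for name, (p_if_conscious, p_if_not) in evidence.items():
    # Bayes' rule, treating the lines of evidence as independent
    joint = p_if_conscious * posterior
    posterior = joint / (joint + p_if_not * (1 - posterior))
    print(f"after {name}: {posterior:.2f}")
```

With these made-up likelihoods, the credence climbs from 0.50 to roughly 0.95. Plug in your own numbers if you like—the structure of the inference is the same.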
The paper then notes that animals like drugs. Animals, like us, get addicted to opioids and have similar brain responses when they’re on opioids. As Panksepp notes, “Indeed, one can predict drugs that will be addictive in humans quite effectively from animal studies of desire.” If animals like the drugs that make us happy and react to them in similar ways, that gives us good reason to think that they are, in fact, happy.
The paper then notes that the parts of the brain responsible for various human emotions are quite ancient—predating humans—and that mammals have them too. So, if the things that cause emotions are also present in animals, we should guess they’re conscious, especially when their behavior is perfectly consistent with being conscious. In fact, by running electricity through certain brain regions that animals share with us, we can induce conscious states in people—which shows that those brain regions cause the various mental states.
The paper then runs through various other mental states and shows that they are similar between humans and animals—animals have similar brain regions that provoke similar physical responses, and we know that in humans those brain regions cause specific mental states.
Now, maybe there’s some magic in the human brain, such that in animal brains the regions that cause qualia instead cause causally identical processes but no consciousness. But there’s no good evidence for that, and plenty against. You should not posit special features of certain physical systems for no reason.
Moving on to the Seth, Baars, and Edelman paper: the authors note that there are various features of consciousness that differentiate conscious states from brain processes that aren’t conscious. They note:
Consciousness involves widespread, relatively fast, low-amplitude interactions in the thalamocortical core of the brain, driven by current tasks and conditions. Unconscious states are markedly different and much less responsive to sensory input or motor plans.
In other words, there are common patterns among conscious states. We can look at a human brain and see that the things that are associated with consciousness produce different neurological markers from the things that aren’t associated with consciousness. Features associated with consciousness include:
Irregular, low-amplitude brain activity: When we’re awake, we have irregular, low-amplitude brain activity. When we’re not conscious—e.g., in deep comas or anesthesia-induced unconsciousness—it isn’t present. Mammal brains also show irregular, low-amplitude activity.
Involvement of the thalamocortical system: When you damage the thalamocortical system, that deletes part of one’s consciousness, unlike other systems. Mammals also have a thalamocortical system—just like us.
Widespread brain activity: Consciousness involves widespread brain activity. We don’t have that when something renders us unconscious, like being in a coma. Mammals show the same widespread activity.
The authors note, from these three facts:
Together, these first three properties indicate that consciousness involves widespread, relatively fast, low-amplitude interactions in the thalamocortical core of the brain, driven by current tasks and conditions. Unconscious states are markedly different and much less responsive to sensory input or endogenous activity. These properties are directly testable and constitute necessary criteria for consciousness in humans. It is striking that these basic features are conserved among mammals, at least for sensory processes. The developed thalamocortical system that underlies human consciousness first arose with early mammals or mammal-like reptiles, more than 100 million years ago.
More evidence from neuroscience for animal consciousness:
Something else about metastability that I don’t really understand is also present in humans and animals.
Consciousness involves binding—bringing lots of different inputs together. In your consciousness, you can see the entire world at once, while thinking about things at the same time. Lots of different types of information are processed simultaneously, in the same way. Some explanations involving neural synchronicity have received some empirical support—and animals also have neural synchronicity, so they would also have the same kind of binding.
We attribute conscious experiences to ourselves. Mammals have a similar sense of self: like us, they process information relative to themselves—they see a wall and locate it relative to their own position in space.
Consciousness facilitates learning. Humans learn from conscious experiences; we do not learn from things that never impinge on our consciousness. If someone slaps me whenever I scratch my nose (someone does actually—crazy story), I learn not to scratch my nose. In contrast, if someone does something I don’t consciously perceive when I scratch my nose, I won’t learn from it. Animals learn and update in response to stimuli in just the way humans do—and humans only learn this way from stimuli that affect their consciousness. In fact, even fish learn.
So there’s a veritable wealth of evidence that at least mammals are conscious. The evidence is less strong for organisms that are less intelligent and more evolutionarily distant from us, but it remains relatively strong for at least many fish. Overturning this abundance of evidence—evidence that has convinced the substantial majority of consciousness researchers—requires a lot of counterevidence. Does Yudkowsky have it?
3 Yudkowsky’s view is crazy, and is decisively refuted over and over again
No. No he does not. In fact, as far as I can tell, throughout the entire protracted Facebook exchange, he never adduced a single piece of evidence for his conclusion. The closest he comes to an argument is the following:
I consider myself a specialist on reflectivity and on the dissolution of certain types of confusion. I have no compunction about disagreeing with other alleged specialists on authority; any reasonable disagreement on the details will be evaluated as an object-level argument. From my perspective, I’m not seeing any, “No, this is a non-mysterious theory of qualia that says pigs are sentient…” and a lot of “How do you know it doesn’t…?” to which the only answer I can give is, “I may not be certain, but I’m not going to update my remaining ignorance on your claim to be even more ignorant, because you haven’t yet named a new possibility I haven’t considered, nor pointed out what I consider to be a new problem with the best interim theory, so you’re not giving me a new reason to further spread probability density.”
What??? The suggestion seems to be that there is no other good theory of consciousness that implies that animals are conscious. To which I’d reply:
We don’t have any good theory of consciousness yet—the data underdetermine theory choice. Just as you can know that apples fall when you drop them before you have a comprehensive theory of gravity, so too can you know some things about consciousness, even absent a comprehensive theory.
There are various theories that predict that animals are conscious: integrated information theory, McFadden’s CEMI field theory, various higher-order theories, and global workspace theory all probably imply that animals are conscious. Eliezer gives no argument for preferring his view to these.
Take integrated information theory, for example. I don’t think it’s a great view, but at least it has something going for it: it has made a series of accurate predictions about the neural correlates of consciousness. Same with McFadden’s theory. Yudkowsky’s theory, by contrast, has literally nothing going for it beyond sounding to Eliezer like a good solution. There is no empirical evidence for it, and, as we’ll see, it produces crazy, implausible implications. David Pearce has a nice comment about some of those implications:
Some errors are potentially ethically catastrophic. This is one of them. Many of our most intensely conscious experiences occur when meta-cognition or reflective self-awareness fails. Thus in orgasm, for instance, much of the neocortex effectively shuts down. Or compare a mounting sense of panic. As an intense feeling of panic becomes uncontrollable, are we to theorise that the experience somehow ceases to be unpleasant as the capacity for reflective self-awareness is lost? “Blind” panic induced by e.g. a sense of suffocation, or fleeing a fire in a crowded cinema (etc), is one of the most unpleasant experiences anyone can undergo, regardless of race or species. Also, compare microelectrode neural studies of awake subjects probing different brain regions; stimulating various regions of the “primitive” limbic system elicits the most intense experiences. And compare dreams – not least, nightmares – many of which are emotionally intense and characterised precisely by the lack of reflectivity or critical meta-cognitive capacity that we enjoy in waking life.
Yudkowsky’s theory of consciousness would predict that during especially intense experiences, where we’re not reflecting, we’re either not conscious or less conscious. So when people orgasm, they’re not conscious. That’s very implausible. Or, when a person is in unbelievable panic, on this view, they become non-conscious or less conscious. Pearce further notes:
Children with autism have profound deficits of self-modelling as well as social cognition compared to neurotypical folk. So are profoundly autistic humans less intensely conscious than hyper-social people? In extreme cases, do the severely autistic lack consciousness altogether, as Eliezer’s conjecture would suggest? Perhaps compare the accumulating evidence for Henry Markram’s “Intense World” theory of autism.
Francisco Boni Neto adds:
many of our most intensely conscious experiences occur when meta-cognition or reflective self-awareness fails. Super vivid, hyper conscious experiences, phenomenic rich and deep experiences like lucid dreaming and ‘out-of-body’ experiences happens when higher structures responsible for top-bottom processing are suppressed. They lack a realistic conviction, specially when you wake up, but they do feel intense and raw along the pain-pleasure axis.
Eliezer just bites the bullet:
I’m not totally sure people in sufficiently unreflective flow-like states are conscious, and I give serious consideration to the proposition that I am reflective enough for consciousness only during the moments I happen to wonder whether I am conscious. This is not where most of my probability mass lies, but it’s on the table.
So when confronted with tons of neurological evidence that shutting down higher processing results in more intense conscious experiences, Eliezer just says that when we think that we have more intense experiences, we’re actually zombies or something? That’s totally crazy. It’s sufficiently crazy that I think I might be misunderstanding him. When you find out that your view says that people are barely conscious or non-conscious when they orgasm or that some very autistic people aren’t conscious, it makes sense to give up the damn theory!
And this isn’t the only bullet Eliezer bites. He admits, “It would not surprise me very much to learn that average children develop inner listeners at age six.” I have memories from before age 6—these memories would have to have been before I was conscious, on this view.
Rob Wiblin makes a good point:
[Eliezer], it’s possible that what you are referring to as an ‘inner listener’ is necessary for subjective experience, and that this happened to be added by evolution just before the human line. It’s also possible that consciousness is primitive and everything is conscious to some extent. But why have the prior that almost all non-human animals are not conscious and lack those parts until someone brings you evidence to the contrary (i.e. “What I need to hear to be persuaded is,”)? That just cannot be rational.
You should simply say that you are a) uncertain what causes consciousness, because really nobody knows yet, and b) you don’t know if e.g. pigs have the things that are proposed as being necessary for consciousness, because you haven’t really looked into it.
I agree with Rob. We should be pretty uncertain. My credences are maybe the following:
92% that at least almost all mammals are conscious.
80% that almost all reptiles are conscious.
60% that fish are mostly conscious.
30% that insects are conscious.
On these numbers, reptiles’ not being conscious (20%) is about as likely as insects’ being conscious (30%). Because consciousness is private—you only know your own—we shouldn’t be very confident about the distribution of consciousness.
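Spelled out, the comparison is just the complement of the reptile credence set against the insect credence:

$$P(\text{reptiles not conscious}) = 1 - 0.80 = 0.20 \;\approx\; 0.30 = P(\text{insects conscious})$$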
Based on these considerations, I conclude that Eliezer’s view is legitimately crazy. There is, quite literally, no good reason to believe it, and lots of evidence against it. Eliezer just dismisses that evidence, for no good reason, bites a million bullets, and acts like that’s the obvious solution.
4 Absurd overconfidence
The thing that was most infuriating about this exchange was Eliezer’s insistence that those who disagreed with him were stupid, combined with his demonstration that he had no idea what he was talking about. Condescension and error make an unfortunate combination. He says of the position that pigs, for instance, aren’t conscious:
It also seems to me that this is not all that inaccessible to a reasonable third party, though the sort of person who maintains some doubt about physicalism, or the sort of philosophers who think it’s still respectable academic debate rather than sheer foolishness to argue about the A-Theory vs. B-Theory of time, or the sort of person who can’t follow the argument for why all our remaining uncertainty should be within different many-worlds interpretations rather than slopping over outside, will not be able to access it.
Count me in as a person who can’t follow any arguments about quantum physics, much less the arguments for why we should be almost certain of many worlds. But seriously, physicalism? We should have no doubt about physicalism? As I’ve argued before, the case against physicalism is formidable. Eliezer thinks it’s an open-and-shut case, but that’s because he is demonstrably mistaken about the zombie argument against physicalism and the implications of non-physicalism. In the literal second paragraph of his article about zombies, Eliezer says:
It is furthermore claimed that if zombies are "possible" (a term over which battles are still being fought), then, purely from our knowledge of this "possibility", we can deduce a priori that consciousness is extra-physical, in a sense to be described below; the standard term for this position is "epiphenomenalism".
No! No! No^(10^64). He is confusing non-physicalism with epiphenomenalism. I am a non-physicalist non-epiphenomenalist. There are several other non-physicalist views. In fact, in the Facebook exchange, Eliezer says:
Suppose I claimed to be able to access an epistemic state where (rather than being pretty damn sure that physicalism is true) I was pretty damn sure that P-zombies / epiphenomenalism was false.
In a Facebook thread where Eliezer admonishes people for being too stupid to understand that physicalism is true, he demonstrates that he lacks a basic familiarity with the subject. The possibility of P-zombies is not the same thing as non-physicalism. And, as I’ve shown before, Eliezer’s reply to the zombie argument hinges entirely on that one crucial error.
I used to believe Eliezer’s position about physicalism, after reading his piece on zombies. Then I made a friend (I had some before, just to be clear). He explained to me how the zombie argument really worked, rather than the distorted Yudkowsky version. After I learned that, I realized Eliezer’s view fails completely.
And that’s not the only thing Eliezer expresses insane overconfidence about. In response to his position that most animals other than humans aren’t conscious, David Pearce points out that you shouldn’t be very confident in positions that almost all experts disagree with you about, especially when you have a strong personal interest in their view being false. Eliezer replies:
What do they think they know and how do they think they know it? If they’re saying “Here is how we think an inner listener functions, here is how we identified the associated brain functions, and here is how we found it in animals and that showed that it carries out the same functions” I would be quite impressed. What I expect to see is, “We found this area lights up when humans are sad. Look, pigs have it too.” Emotions are just plain simpler than inner listeners. I’d expect to see analogous brain areas in birds.
When I read this, I almost fell out of my chair. Eliezer admits that he has not so much as read the arguments people give for widespread animal consciousness. He is basing his view on a guess of what they say, combined with an implausible physical theory for which he has no evidence. This would be like coming to the conclusion that the earth is 6,000 years old, despite near-ubiquitous expert disagreement, providing no evidence for the view, and then admitting that you haven’t even read the arguments that experts give in the field against your position. This is the gravest of epistemic sins.
5 Against rationalist overconfidence
It’s actually really difficult to reliably generate true beliefs.
Some topics are easy to generate true beliefs about. It’s not hard to know that the Earth is round or that there are rocks. But a lot of topics are genuinely quite difficult to believe true things about. Philosophy of mind is one of those subjects—so is quantum physics. But I fear that rationalists have a bad habit of overestimating how easy it is to form true beliefs about these things. Then they attribute disagreement with the rationalist consensus to intellectual deficiency.
The zombie argument offers an important test case. If Yudkowsky’s demonstrably mistaken views about zombies can catch on as the go-to response among rationalists to the zombie argument, something has gone deeply wrong. You cannot claim that your community roots out false beliefs, when a widely shared, high-profile belief is something that you wouldn’t believe if you’d taken an intro philosophy of mind class. There are many reasonable replies to the zombie argument that do not rest on errors—Yudkowsky’s is not one of them.
I’m generally pretty wary when people claim to have across-the-board better ways of gaining knowledge than experts in various fields. If you claim that the philosophers of mind are making dumb mistakes, as are the quantum physicists, as are the philosophers of time, it may just be you who is making the dumb mistakes. Rob Wiblin makes a good point:
… I am instinctively a physicalist, not least because I’ve spent my adult life being surrounded by people who make fun of anyone who says otherwise. Believing this is a handy identity marker that the skeptics/rationalists use to draw the boundaries on their community.
Beliefs catch on in the rationalist space not just because they’re true. They often catch on because they confirm the prior beliefs of people who share the dispositions of rationalists. If rationalism really were great at tracking the truth, it’s incredibly unlikely that there would be extreme social pressure to conform to what is only a slight plurality view in philosophy of mind.
The world is a complicated place. Philosophy is hard; so is quantum physics. In addition, rationalism has weird social pressures that make beliefs ubiquitous apart from their truth. As a consequence, if there’s a disagreement between philosophers and rationalists, or between quantum physicists and rationalists, I’ll happily side with the philosophers and the physicists.
In addition, Eliezer should cut his credences in his own views in half. Something has gone deeply wrong when you conclude, with almost total certainty, that a position is true when almost all experts disagree with it, there is no evidence for it, it produces genuinely crazy results, and it conflicts with lots of empirical evidence.
Great article. I also hang out with rationalists sometimes, and I too am frustrated by their absurd overconfidence in the "LessWrong canon". One could write similar articles on various topics (like decision theory), but, honestly, this just makes me sad about the whole premise of trying to think better.
And it's not like the other group trying to think better, analytic philosophers, is faring well. Maybe the sad truth is that humans are really bad at philosophy, and there is just no reliable way to fix that.
> Rationalists seem to have total confidence in physicalism about consciousness, despite the many objections to it. I like Rationalists a lot. They are much better at acquiring true beliefs than most normal people. But unfortunately, they are much worse at acquiring true beliefs than they think they are. This often results in absurd overconfidence in very tenuous views—views for which there are not decisive arguments on either side.
As a rationalist, this is a bit distressing to hear. Not because there's anything incorrect about what you've said, but because it *shouldn't* be correct. After this you describe a Yudkowsky who seems to have come up with a hypothesis via his internal intuition, and then failed to update his views according to available evidence. Yudkowsky's intuition is usually pretty good, but he himself wrote:
> The third virtue [of rationality] is lightness. Let the winds of evidence blow you about as though you are a leaf, with no direction of your own. Beware lest you fight a rearguard retreat against the evidence, grudgingly conceding each foot of ground only when forced, feeling cheated. Surrender to the truth as quickly as you can. Do this the instant you realize what you are resisting, the instant you can see from which quarter the winds of evidence are blowing against you. Be faithless to your cause and betray it to a stronger enemy. If you regard evidence as a constraint and seek to free yourself, you sell yourself into the chains of your whims. For you cannot make a true map of a city by sitting in your bedroom with your eyes shut and drawing lines upon paper according to impulse.
Interestingly, I feel the same as you regarding qualia/consciousness with respect to Yudkowsky: I'm inclined toward moral realism, and Yudkowsky's view seems incorrect even just based on how humans work. If I stub my toe really hard, I believe I am experiencing qualia in that moment, but not that I'm experiencing "reflective self-awareness" in that moment. I don't understand how "reflective self-awareness" is supposed to create this qualia, such that without it there is no qualia. I similarly disagree with EY about the danger of AGI, where his reasoning that the very first AGI will probably kill us all seems to have skipped some logically necessary steps.
I don't see a problem with *rationalism* per se, because it seems to me that Yudkowsky laid out some very good principles. The main problem in rationalism seems to be that some rationalists, including Yudkowsky himself, don't always follow those principles. It may also be that there are additional principles that are an important part of being rational but aren't included in Yudkowsky's Sequences. We should expect this to be the case, because reality is extremely complex relative to human mental capability, and it would be surprising if any single person were the font of all truth. And indeed, my understanding of the twelfth and final virtue of rationality is that "there are some additional principles of rationality I haven't laid out here, and I myself don't know what they are, but they're important too".