The Objectivity Of Consciousness Favors Its Ubiquity
But even if it's not robustly objective, it's still ubiquitous
Consciousness is either present or absent. Either the switch is on or off. It cannot be vague or a matter of mere interpretation whether you have experiences. Either there’s something it’s like to be you or there isn’t.
Admittedly, this is controversial. For those skeptical, I recommend this excellent post on the topic. One of the facts about consciousness that strikes me as most clear is that it isn’t vague. It may be vague exactly what one is experiencing, but it can never be vague whether someone has an experience at all.

Suppose that we grant this—that consciousness isn’t vague but is robustly objective, and that there aren’t cases where it’s genuinely indeterminate whether someone is conscious. I think this has pretty big implications for animal consciousness. In particular, it makes it probable that consciousness is spread relatively widely across the animal kingdom, rather than being limited to vertebrates.
I’ll explain later that there’s an interesting dilemma: whether or not one thinks that consciousness is vague, there’s a strong argument for widespread animal consciousness. The argument has two prongs, and whichever prong turns out correct, widespread animal consciousness is likely.
Higher-level physical features are vague. It’s often genuinely indeterminate whether something is a heart. Some objects are sort of like hearts, so whether they technically count as hearts is a matter of interpretation. Human concepts are generally not precise enough to apply cleanly in all cases.
If consciousness depends on some complicated, higher-order property, then consciousness would sometimes be vague. So if we think—as I’ve suggested—that consciousness isn’t vague, it must depend on some precise property. This property will generally be something quite widespread, for properties that are limited to vertebrates typically are subject to borderline cases.
Take, as an example, Brian Key’s cortex theory, according to which a cortex is strictly needed for consciousness. Because they don’t have cortices, Key thinks octopi, fish, shrimp, and insects aren’t conscious (I think this is pretty implausible). But whether something counts as a cortex is vague. There was no first organism with a clear and unambiguous neocortex (and any criterion for precisely distinguishing cortices that generate consciousness from not-quite-cortices that don’t will be hopelessly arbitrary).
Thus, one who affirms the cortex-centric theory will have to think either:

1. Consciousness is vague, so that it’s sometimes indeterminate whether a being has qualia—whether there’s anything it’s like to be that organism.

2. The features that give rise to consciousness are highly arbitrary, relying on technical delineations of what counts as a real cortex, often triggered by thresholds without a firm basis.
If one accepts 2., then they’ll have to think that there was a first conscious organism—conscious because of its cortex—despite its having no substantial differences from the unconscious organisms just before it. This is problematic. It’s rather like saying that only heaps are conscious: because it’s vague what counts as a heap, this requires drawing extremely arbitrary lines between two similar organisms. Similar arguments will apply to most other theories of consciousness.
The argument, in a nutshell, is as follows:

1. Consciousness isn’t vague.

2. If consciousness isn’t vague, it depends on non-vague properties.

3. If consciousness isn’t widespread, then consciousness depends on vague properties.

4. So consciousness is widespread.
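The argument's logical form can be checked mechanically. Here is a minimal sketch in Lean, where I encode premise 3 contrapositively (depending on vague properties is treated as not depending on non-vague ones); the propositional variables and their names are my own toy encoding, not the post's wording.

```lean
-- Toy encoding of the nutshell argument:
--   Vague        : consciousness is vague
--   NonVagueDep  : consciousness depends on non-vague properties
--   Widespread   : consciousness is widespread
theorem widespread_of_premises
    (Vague NonVagueDep Widespread : Prop)
    (p1 : ¬Vague)                          -- 1. Consciousness isn't vague.
    (p2 : ¬Vague → NonVagueDep)            -- 2. If not vague, it depends on non-vague properties.
    (p3 : ¬Widespread → ¬NonVagueDep) :    -- 3. If not widespread, it depends on vague
                                           --    (i.e., not non-vague) properties.
    Widespread :=
  -- Suppose consciousness isn't widespread; then by p3 it doesn't depend
  -- on non-vague properties, contradicting p2 applied to p1.
  Classical.byContradiction (fun h => p3 h (p2 p1))
```

Note that the inference is classical (it runs through a proof by contradiction), but given the three premises as encoded, the conclusion does follow.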
The reason to think 3. is that the properties that distinguish humans from, say, shrimp are higher-level biological properties—having to do with functionally new brain regions like the neocortex. But these properties are vague! So if humans are conscious and shrimp aren’t, then consciousness would be either vague or arbitrary.
This also has some interesting implications for which of the main theories of consciousness is correct. One of the leading theories is the global workspace theory, which holds that consciousness serves as a sort of spotlight shining attention on different areas of the brain. But whether some global workspace exists is likely vague, so probably this poses challenges for global workspace theory.
Integrated information theory is another leading theory, which says that consciousness is what you get when many different types of information are integrated in a single system. This one survives the vagueness challenge—IIT is probably robustly objective—but I don’t find it very plausible. Among other things, it implies that random matrices are more conscious than people. It also implies thermometers are conscious.
There’s also a view according to which consciousness is generated by the brain’s electromagnetic field. This one probably survives the vagueness challenge decently well. Whether something has an EM field is robustly objective (and then presumably the contents of consciousness are determined by the information being beamed to the EM field).
Higher order thought theories say that, essentially, a brain becomes conscious if it has thoughts that represent other thoughts it has. However, representation is likely a vague notion, so this implies some degree of arbitrariness or indeterminacy. HOTGTOGO—higher-order thought theories have got to go.
So far I’ve explained why a person who shares my conviction that consciousness is objective should think it’s widespread. But what if a person thinks consciousness is sometimes vague and indeterminate? Here I’ll argue that they too should believe that consciousness is ubiquitous.
Those who accept that consciousness is vague and blurry—genuinely indeterminate in some cases—are usually physicalists.1 They think consciousness just is a certain kind of physical arrangement. When you build brains of the right sort, they become a conscious system, with no further ingredients needed.
But physicalism makes an empirical prediction: the physical systems that are conscious should have some specific explanation of how they give rise to consciousness. Just as there’s a coherent explanation of how cells give rise to the processes involved in life, there should be a coherent explanation of why the physical processes that produce consciousness do so.
This is different from what non-physicalists think about consciousness. Non-physicalists think consciousness isn’t something physical, but instead there are extra laws tacked on that make it so that consciousness appears when there are certain physical arrangements. The physical is, by itself, insufficient to produce consciousness—there thus doesn’t need to be a physical explanation of how consciousness appears. There can’t be.
But if physical systems are supposed to directly entail consciousness, then this challenges a lot of physicalist theories, which seem to have no coherent explanation of why consciousness appears when it does. Key, for instance, says that you need a cortex for consciousness—but why in the world would a cortex give rise to consciousness? A cortex does some impressive things, but these mostly differ from other brain regions in degree, not kind, and it’s not super clear what about the cortex’s function might explain the emergence of consciousness. So if the cortex gives rise to consciousness, probably non-cortical areas do too.
Now, I’m not a physicalist. I think trying to get consciousness from non-conscious stuff is like trying to get the fact that there are infinitely many prime numbers from a stone. Despite this, if you close your eyes2 you can sort of see what a physicalist account would have to look like. It would have to involve consciousness being nothing more than a kind of information processing, where a cognitive system takes in various negative signals. However, this is what shrimp and insect brains do—so on this picture, consciousness would probably be widespread.
I don’t think this argument is as decisive as the last one. Higher-order thought theories, for instance, can potentially explain consciousness without implying that it’s super widespread. And maybe on theories on which consciousness is more restricted, like certain versions of global workspace, one can tell a story about why consciousness is unique to these structures.
But still, I think this has considerable force. I think Peter Godfrey-Smith gives the most plausible sort of physicalist account, on which consciousness arises from a certain limited kind of agency. This agency is possessed by shrimp and insects. Thus, the most plausible evolutionary account of consciousness seems to imply that it’s widespread.
There’s another kind of consideration favoring consciousness being widespread: it would be rather odd if it emerged only locally and recently. Most broadly useful evolved capacities—vision, for instance—are widespread across the animal kingdom. Octopi, shrimp, insects, and more can see. There aren’t many analogous cognitive features limited to just vertebrates. This is a stronger argument for fish consciousness, since the last common ancestor of vertebrates could well have been conscious, while the last common ancestor of insects and vertebrates was a very simple kind of worm. Still, it has some force for insects.
It’s also quite plausible that cephalopods—octopi, squid, cuttlefish, and so on—are conscious. So to think consciousness is not widespread, you’d have to think it developed twice, almost completely independently, but depended on various higher-order features that aren’t present in lower animals. That’s not impossible, but it doesn’t seem that likely. I don’t know of any other trait that’s restricted to tetrapods3 (even very stupid ones) and cephalopods. One could deny cephalopod consciousness, as Brian Key, for instance, does, but that seems pretty implausible.
These considerations might even favor thinking consciousness is more widespread than typically thought, inclining us toward the view that primitive worms and insects are conscious—albeit with very primitive experiences.
Overall, though, I think the case for widespread animal consciousness is fairly robust. I’ve presented elsewhere a bunch of mostly behavioral considerations favoring this conclusion—here I’ve argued that reflecting on the sorts of features we should expect to generate consciousness also favors its ubiquity. Together, I think these considerations justify at least 60% credence in insect and shrimp consciousness.
There are also neutral monists, but let’s leave them aside. They’ll generally think that at least proto-consciousness is widespread, though it’s unclear how we should count proto-consciousness ethically.
Does it almost feel like nothing changed at all?
And if you close your eyes
Does it almost feel like you've been here before?
How am I gonna be an optimist about this?
How am I gonna be an optimist about this?
Tetrapoda is a clade that includes amphibians, reptiles, birds, mammals, and so on.
Your initial premise is absurd. Of course consciousness comes in degrees. You experience that every morning when you wake up. You go from fully unconscious to fully conscious by going through intermediate states with intermediate degrees of consciousness. If we had to place a precise time, down to the millisecond, on when you became conscious this morning, it would not be well defined. It would be, as you put it, vague.
I agree with most of the reasoning in this post. However, I think you don't push it far enough, and if you did push it far enough, it would look more like a reductio of its starting point than a compelling inference.
In particular, I agree that if consciousness is robustly objective--if it can never be a vague or indeterminate matter whether something is conscious--then it's ubiquitous. I also agree that consciousness being robustly objective is inconsistent with lots of popular theories of consciousness (like HOT, global workspace, or theories that posit the necessity of specific high-level anatomical features like cortices, etc.)
But you don't say what you mean by "ubiquitous", and you seem focused on implications for shrimp/insect pain. I think the only sort of ubiquity that could get you the conclusion that it's never vague whether something is conscious looks more like panpsychism. Consciousness would have to be a fundamental feature of matter, sorta like mass or spin.
In that Peter Godfrey-Smith story, you see proto-agency--e.g., chemotaxis--already in single-celled organisms. Do you want to say they're conscious? If not, then you need to draw a line somewhere in the tree of life between them and insects. That's going to look kinda vague/arbitrary.
If you're happy to say amoebas are conscious, then we just push the question back. Despite being only one cell, they're still massively complex, containing something like 10^14 molecules. How many molecules, in what kinds of structures, do you need to become conscious for the first time? The only hope I see for escaping this--if you won't tolerate any vagueness--is saying that consciousness is there right from the start, with the simplest particles.
But now the problem is that you buy the robust objectivity of consciousness at the fundamental level at the price of sacrificing the robust objectivity of consciousness at the macroscopic level we're already familiar with.
Take a physical example like mass. We're happy with the idea that it's pretty robustly objective how much mass an electron has, how much mass a proton has, etc. What about a macroscopic object, like a car? Well, for lots of purposes we can think of car as a single object with a single mass--if you want to know how much energy (e.g., how many gallons of gas, burned in an engine with such-and-such efficiency) it will take to move the whole car a given distance, thinking of the car as a single object with a single mass is convenient. On the other hand, if you want to know what will happen if another car sideswipes the mirror, it's better to think of the mirror as a distinct object with its own (much smaller) mass, to make sense of why the mirror will be snapped off but the rest of the car won't be moved. There's not really a robustly objective fact about how to group particles together into macroscopic objects--that's a matter of convenience. We get the robust objectivity of mass facts at the fundamental level, but not the robust objectivity of mass facts at the macroscopic level. And in the case of mass, it's pretty straightforward how you get from particle-mass-facts to macro-object-mass-facts--it's just addition; the squishiness comes in how you group particles together into macro-objects. In the case of consciousness, nobody has any idea how you'd get from particle-consciousness to organism-consciousness, because nobody has any idea what particle-consciousness-facts could be.
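The commenter's mass analogy can be sketched in a few lines of illustrative Python (all numbers and names here are made up): particle-level masses are fixed and objective, macro-object masses are obtained by plain addition, and the only squishy step is the conventional choice of how to group particles into objects.

```python
# Toy illustration: objective particle masses, conventional groupings.
# Particle masses (arbitrary made-up values) are the fundamental facts.
particle_masses = {"p1": 2.0, "p2": 3.0, "p3": 0.5}

def object_mass(grouping, name):
    """A macro-object's mass is just the sum of its particles' masses."""
    return sum(particle_masses[p] for p in grouping[name])

# Grouping A: treat everything as one object ("the car").
grouping_a = {"car": ["p1", "p2", "p3"]}

# Grouping B: treat the mirror as a separate object.
grouping_b = {"body": ["p1", "p2"], "mirror": ["p3"]}

print(object_mass(grouping_a, "car"))     # 5.5
print(object_mass(grouping_b, "mirror"))  # 0.5
```

Both groupings are consistent with the same particle-level facts; which one you use is a matter of convenience, exactly as in the sideswiped-mirror example. The disanalogy the comment points to is that for consciousness there is no known analogue of the `sum` step.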
It seems to me pretty plausible that if you want the robust objectivity of consciousness facts, the best you could get would be the robust objectivity of facts about the consciousness of something like quarks, which would probably *not* get you the robust objectivity of facts about the consciousness of insects, shrimp, or humans.