59 Comments
River:

Your initial premise is absurd. Of course consciousness comes in degrees. You experience that every morning when you wake up. You go from fully unconscious to fully conscious by going through intermediate states with intermediate degrees of consciousness. If we had to place a precise time, down to the millisecond, on when you became conscious this morning, it would not be well defined. It would be, as you put it, vague.

Bentham's Bulldog:

I agree that it comes in degrees, but that’s compatible with it being binary whether something is conscious. It comes in degrees how far above 6 feet something is, but either it’s above 6 feet or it isn’t.

River:

Then you run into the problem that the difference between the person who is 5'11.98" and the person who is 6'0.01" seems rather arbitrary. I'm sure a neuroscientist could define an equally arbitrary line between conscious and not conscious and get your wake up time down to the millisecond, but what would be the point?

Bentham's Bulldog:

My argument invoked the idea that consciousness is binary. You then accused me of making a different point, one I didn't make. I clarified that. So I'm not really sure where we are now.

River:

Let me try to make more explicit what I think your error is. If you are willing to do this " > 6 ft" thing then you can take any vague thing and make it binary. Vagueness becomes a matter of how you choose to classify something, not a property of a set of things in the world. And at that point your argument falls apart. If anything is vague in an objective sense, then I believe consciousness has to be vague, as we all experience that continuous spectrum of consciousness every morning.

Bentham's Bulldog:

Not getting it. It's not that anything is vague! Whether something is vague depends on what it is. Whether something has more than 100,000 hydrogen atoms located within 100 feet isn't vague--whether something is a heap is.

The Ancient Geek:

It's about intrinsicness. Introducing thresholds that aren't intrinsic to the thing itself doesn't show the thing itself is non-vague.

The Ancient Geek:

You mean you can turn it into a binary by saying it is either above or below an arbitrary level. Since the level is arbitrary and decided externally, you are not demonstrating an intrinsic binariness.

Ponti Min:

Similarly, when I was a single-celled zygote I wasn't conscious. But I am now. This happened in stages; it wasn't like someone switching a light on and it happening suddenly.

Luke Cuddy:

But if pantheism (à la Philip Goff) is true, then you WERE conscious as a zygote. Check and mate!

Philip:

What is it like to be in an intermediate stage of consciousness?

Daniel Greco:

I agree with most of the reasoning in this post. However, I think you don't push it far enough, and if you did push it far enough, it would look more like a reductio of its starting point than a compelling inference.

In particular, I agree that if consciousness is robustly objective--if it can never be a vague or indeterminate matter whether something is conscious--then it's ubiquitous. I also agree that consciousness being robustly objective is inconsistent with lots of popular theories of consciousness (like HOT, global workspace, or theories that posit the necessity of specific high-level anatomical features like cortices, etc.)

But you don't say what you mean by "ubiquitous", and you seem focused on implications for shrimp/insect pain. I think the only sort of ubiquity that could get you the conclusion that it's never vague whether something is conscious looks more like panpsychism. Consciousness would have to be a fundamental feature of matter, sorta like mass or spin.

In that Peter Godfrey-Smith story, you see proto-agency--e.g., chemotaxis--already in single-celled organisms. Do you want to say they're conscious? If not, then you need to draw a line somewhere in the tree of life between them and insects. That's going to look kinda vague/arbitrary.

If you're happy to say amoebas are conscious, then we just push the question back. Despite being only one cell, they're still massively complex, containing something like 10^14 molecules. How many molecules, in what kinds of structures, do you need to become conscious for the first time? The only hope I see for escaping this--if you won't tolerate any vagueness--is saying that consciousness is there right from the start, with the simplest particles.

But now the problem is that you buy the robust objectivity of consciousness at the fundamental level at the price of sacrificing the robust objectivity of consciousness at the macroscopic level we're already familiar with.

Take a physical example like mass. We're happy with the idea that it's pretty robustly objective how much mass an electron has, how much mass a proton has, etc. What about a macroscopic object, like a car? Well, for lots of purposes we can think of car as a single object with a single mass--if you want to know how much energy (e.g., how many gallons of gas, burned in an engine with such-and-such efficiency) it will take to move the whole car a given distance, thinking of the car as a single object with a single mass is convenient. On the other hand, if you want to know what will happen if another car sideswipes the mirror, it's better to think of the mirror as a distinct object with its own (much smaller) mass, to make sense of why the mirror will be snapped off but the rest of the car won't be moved. There's not really a robustly objective fact about how to group particles together into macroscopic objects--that's a matter of convenience. We get the robust objectivity of mass facts at the fundamental level, but not the robust objectivity of mass facts at the macroscopic level. And in the case of mass, it's pretty straightforward how you get from particle-mass-facts to macro-object-mass-facts--it's just addition; the squishiness comes in how you group particles together into macro-objects. In the case of consciousness, nobody has any idea how you'd get from particle-consciousness to organism-consciousness, because nobody has any idea what particle-consciousness-facts could be.

It seems to me pretty plausible that if you want the robust objectivity of consciousness facts, the best you could get would be the robust objectivity of facts about the consciousness of something like quarks, which would probably *not* get you the robust objectivity of facts about the consciousness of insects, shrimp, or humans.

Bentham's Bulldog:

I agree this is one possible implication, but it also wouldn’t seem that surprising if it depended on some robustly objective property that wasn’t ubiquitous. I also think panpsychism will have similar problems specifying when the micro-consciousnesses combine in a non-vague way.

Daniel Greco:

Do you think there are any robustly objective properties that aren't ubiquitous? I tend to think the only robustly objective properties will be the fundamental ones, and those are ubiquitous. But whenever you're faced with questions about how non-fundamental stuff/properties emerge from/reduce to fundamental ones, there's gonna be some room for vagueness. (Even facts about what state of matter something is in can be vague right at the phase transition.)

Bentham's Bulldog:

Yeah I mean the property of having at least 10,000 carbon molecules

Daniel Greco:

I think that ends up being like the case of the car and its mirror. At least, if the question is whether some meso-object has at least 10,000 carbon molecules, then it can be vague because it can be vague just how to group molecules together into objects.

Daniel Greco:

I guess we could say something like: "a dense plutonium sphere > 100 feet in diameter" is not in practice gonna have any borderline cases, because you could never get that much densely packed plutonium before you set off a nuclear explosion. So maybe that's robustly objective (in that there won't be any physically possible borderline cases) and not at all ubiquitous, because totally absent. So maybe I'm not carving up the terrain just right. I do think the sorts of non-ubiquitous, plausible candidates for physical substrates of consciousness aren't remotely like this--they're not close to definable in fundamental physical terms in a way that would let you avoid potential vagueness/borderline cases.

To make it more explicit, I don't think you're going to find any robustly objective properties that could let you decide where from amoebas to insects to people consciousness first comes on the scene. Even if we could make something like "containing 10k carbon molecules" work it's gonna look absurdly arbitrary to think that could mark the difference. When you start to look for stuff that looks less arbitrary, you start saying broadly functional stuff, which in practice means either vagueness, or low but non-zero levels of consciousness right from the start (as in IIT).

Bentham's Bulldog:

What about the CEMI field theory of consciousness?

James “JJ” Cantrell:

You had me until 60%

James “JJ” Cantrell:

Silliness aside, good post.

JerL:

Ugh, I had a comment written that I lost, but let me summarize it:

I think a good comparison for consciousness (indeed, I think consciousness is likeliest to turn out to be an example of this) is phase transitions:

Sure, it's true there's something unambiguous/binary about phase transitions, but it's subtle: although one can, one doesn't usually say a pot of water is boiling when the first steam bubble forms; or that the lake is frozen when the first ice forms in it. Instead, we wait until the *system as a whole* has gone through the phase transition--until then, the lake merely has some frozen parts.

I think your argument is something like the equivalent of insisting that once any part of an organism is conscious, we should regard the whole thing as conscious, and then this would indeed be unambiguous. But I think it's more likely that it will be useful to first of all ask about the presence of conscious *subsystems* in an organism (the presence of which, while unambiguous, wouldn't be enough to describe the whole organism as conscious, as in the frozen lake example), and then ask about the correlations and interactions between these subsystems, and whether they are so-organized as to have gone through a phase transition. At *some* point, there is a phase transition where you'd consider the whole organism conscious, just as there's eventually a point at which the pot of water as a whole is boiling.

I think this model is better than yours because "any conscious subsystem means the super system is conscious" conflicts with how I would use the term conscious to talk about the consciousness of say, the United States of America, or the Freemasons, or the universe as a whole--all of which have conscious subsystems, but also all of which plausibly are not conscious in a "meaningful" way--the only extent to which they are conscious is the trivial sense they inherit from their subcomponents.

I think this preserves most of your intuition that consciousness is binary (both the presence of conscious subsystems and the transition through the critical points are binary), but also the intuition that there may be qualitatively different gradations of consciousness, and that a supersystem might inherit the consciousness of its component subsystems in more or less trivial ways.

I have no idea what this model suggests about the prevalence of consciousness; my intuition is that any organism that is agentic enough to act as an individual agent is likely to both possess conscious subsystems, and to be on the far side of at least one phase transition in terms of how integrated/global the nature of that consciousness is, but that's just an intuition.

Mark Slight:

As a physician, I think it is extremely often vague whether a person has any experience at all. Just look at various degrees of sedation.

For myself, I don't feel like consciousness suddenly goes online when I wake up in the morning or offline when I drift off.

I don't think this makes any sense at all!

Bentham's Bulldog:

But at any degree of sedation, either a person has experiences or they don't.

comex:

IMO, that response begs the question. But then I’m a physicalist. I believe that “having experiences” is just a property that emerges once you interpret some object as an information processing system. It’s not some mysterious unique property either, but just a jumbled combination of a bunch of component properties like “can remember past sensory inputs and thoughts”, “can reason logically”, “has sense of self”, “can direct its thinking”, etc. The reason we interpret this particular combination as special is partly because it allows the system to be agentic, but partly just due to human bias, i.e. because we have an easier time imagining “what it’s like” to be something that thinks the way we do.

Under this view, the more you’re sedated (or just asleep), the more your brain fails to process information coherently, making it less like a waking human brain and more like a random matrix. At some point we declare it unconscious. But there’s no bright line.

JerL:

I'm a physicalist too, but this: "I believe that “having experiences” is just a property that emerges once you interpret some object as an information processing system"

seems like it can't be right: who is the "you" who is doing the interpreting? Are there any limits on what counts as an interpretation? This seems to suggest that consciousness is a state that is underwritten by the (subjective?) interpretations of *others*--if the right set of "you"s stopped interpreting me as an information processing system, would I become a p-zombie??!

I think this only makes sense if you actually mean something like, "having experiences is something that emerges from *being the right kind of information processing system*"--but it's entirely plausible that whether or not something is such a system is a bright-line thing; as I mentioned elsewhere I think it'll most likely be useful to think of it as some kind of phase transition where we do indeed have critical thresholds with qualitatively different behaviour on either side.

comex:

> Are there any limits on what counts as an interpretation?

Well, that’s the question. Apologies for the incoming wall of text.

I believe that in most cases, any given physical system has either one correct way to interpret it as performing information processing, or zero correct ways. That is, if an observer had full knowledge of the physical state of the system, and infinite intelligence to use to comprehend it, then the observer could determine with confidence whether the system was processing information and, if so, what information.

But is that always true? I don’t know.

Let’s look at some examples.

To start with, I believe that an accurate digital emulation of a human brain is as conscious as a real brain. Denying that seems like it would have unfortunate consequences. This should be true regardless of what physical mechanisms exactly are used to perform that emulation. And it should be true even if the human is isolated from the rest of the world, in which case the emulator doesn’t need any input/output and is ‘just’ evaluating some very specific mathematical function.

This already has some very weird consequences! If, for instance, the emulated experience happens to be torture, then we must conclude it’s unethical to evaluate certain mathematical functions. But presumably not unethical just to determine what the function is? Where is the line? But I digress.

More relevant here: one thing we can do with computations is pass them through homomorphic encryption. With homomorphic encryption, a computer takes encrypted inputs and performs a computation resulting in encrypted outputs. Whoever originally encrypted the inputs can decrypt the outputs, but the computer performing the computation can’t decrypt anything – it has no idea what it’s computing. Thus, it can’t distinguish between, say, emulating a human brain, and performing useless computations on random numbers. In other words, you can have a system where it’s impossible for a *physical* observer to tell whether or not it’s performing information processing, and thus whether or not it’s conscious.
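[Ed.: the homomorphic idea can be seen in miniature with textbook RSA, which happens to be multiplicatively homomorphic. This is only a sketch; the parameters are classic textbook values and completely insecure, and real homomorphic-encryption schemes are far more elaborate.]

```python
# Toy multiplicative homomorphism via textbook RSA.
# Classic textbook parameters -- NOT secure, illustration only.
p, q = 61, 53
n = p * q                          # public modulus, 3233
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (2753)

def enc(m): return pow(m, e, n)
def dec(c): return pow(c, d, n)

# The computing party multiplies ciphertexts without ever decrypting,
# so it has no idea what it is actually computing:
c = (enc(7) * enc(3)) % n

# Only the key holder learns the result of the computation:
assert dec(c) == 21
```

The point of the sketch: the multiplication step uses only public data, so the machine performing it cannot tell a meaningful computation from a meaningless one.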

On the other hand, an infinitely intelligent observer has no such trouble with encryption. With infinite compute, you can brute-force the encryption key. And you’ll be able to tell when you got the right answer, because it’s a near certainty that of all possible decryption keys, only one will give you an output that doesn’t look like random noise. So encryption actually poses no obstacle to my definition of information processing based on what an infinitely intelligent observer could determine.
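[Ed.: a miniature version of that brute-force-and-recognize step, with single-byte XOR standing in for real encryption. The "looks like text, not noise" test here is a deliberately crude stand-in for the statistical tests a real attacker would use.]

```python
# Brute-force every key; recognize the right one because only its
# output "doesn't look like random noise". Single-byte XOR is the toy
# stand-in for encryption here.
KEY = 0x5A
plaintext = b"the brain is processing information"
ciphertext = bytes(b ^ KEY for b in plaintext)

# What non-noise output "should" look like for this toy plaintext:
allowed = set(b"abcdefghijklmnopqrstuvwxyz ")

# Try all 256 keys; keep those whose output is entirely text-like.
candidates = [k for k in range(256)
              if all(b ^ k in allowed for b in ciphertext)]

# Exactly one key survives the noise test.
assert candidates == [KEY]
```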

Still, this suggests to me that for the observer deciding whether something is conscious (whether that observer is the hypothetical infinite one or a practical physical one), it’s really a job of _finding the right interpretation_. There will always be multiple interpretations; even a random matrix will perform some small amount of useful computation by pure chance (see: lottery ticket hypothesis). I suspect that an interpretation should be defined as simply a mapping from physical states to informational states. If so, you can interpret _any_ (large-enough) system as performing _any_ computation. But we can say a “correct” interpretation is one that has a sufficiently low Kolmogorov complexity (or similar complexity measure), while performing a sufficiently high amount of computation.

If the Kolmogorov complexity is too high, then the ‘information processing’ is just something you made up as part of the interpretation, rather than being fairly attributable to the physical system itself. If the amount of computation is too low, then it’s probably some lottery-ticket subset of the system that performs useful computation only by chance.
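[Ed.: Kolmogorov complexity is uncomputable in general, but a general-purpose compressor gives a rough, computable stand-in for the intuition being used here: structured signals admit short descriptions, while noise does not. A minimal sketch:]

```python
# Compressed size as a crude, computable proxy for Kolmogorov
# complexity: regular data has a short description; noise does not.
import random
import zlib

structured = b"state update; " * 100   # 1400 bytes, highly regular
random.seed(0)                         # seeded so the run is deterministic
noise = bytes(random.randrange(256) for _ in range(1400))

k_structured = len(zlib.compress(structured))
k_noise = len(zlib.compress(noise))

# A short description exists for the structured stream only.
assert k_structured < 100 < k_noise
```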

However, you do have to pick arbitrary thresholds for maximum Kolmogorov complexity and minimum amount of computation.

In most cases, the threshold shouldn't matter. Even we puny humans can understand (to some extent) how brains process information, how computers process information. The correct interpretations for those objects have _very low_ Kolmogorov complexity. That’s not to say that brains and computers aren’t complex. But the complexity is encoded in the objects themselves; there are few, if any, arbitrary interpretation decisions that have to be encoded in the interpretation scheme. (Even for a homomorphically encrypted brain, the decryption key doesn’t have to be encoded there! You can instead just say “try all keys and find the lowest-complexity output.”)

And as for computation, the amount of computation performed by the brain is many orders of magnitude higher than any lottery-ticket interpretation for an object of its size.

This is not surprising. Designing an object whose internal workings are fundamentally ambiguous sounds like a tough task. It’s not going to happen by accident. And evolution had no such objective when it created brains, nor did humans when they created computers. To the contrary, both humans and computers are designed to communicate the information they process to the outside world. It would be hard to unambiguously communicate your thoughts if you didn’t unambiguously have thoughts!

But what if some malicious actor _were_ designing an object to be fundamentally ambiguous, to step right up to whatever your threshold of Kolmogorov complexity is? Is it theoretically possible for such an object to exist?

I have no idea. I can’t think of a concrete example of how such an object would actually work, so maybe not. The closest I can think of is something like a brain with pieces removed, where the interpretation consists of how to fill in the missing pieces. But such an object wouldn’t be able to _actively compute_ with those missing pieces, so I don’t think it counts… although it might count if we relax the standard from “actively processing information” to “capable of processing information in the future”, like an unconscious human.

And even with the original standard, maybe the reason I can’t think of a concrete example is just a lack of imagination.

UncleIstvan:

I feel like I missed something in this - did you ever suggest a non-vague thing that could be the basis for consciousness? Is there any such thing available? If you think it comes from something non-physical (like it comes when God bestows you with it), that seems to undermine your argument - God could give it to an extremely small and random set of things (like the only conscious entities are three people in Amsterdam, one dog in Jakarta, and this rock on Neptune) and it would still be binary, so then objectiveness doesn't point to being widespread. So I assume you are looking for a physical basis, or at least an observable one.

In other posts, you seemed to argue that suffering and consciousness are connected, but obviously capacity-to-suffer is not objective. So that can't be how we distinguish the conscious from the unconscious.

I am really uncertain what could conceivably be an objective basis for consciousness under this approach - let alone what would actually be a good candidate to in fact be such a basis.

Nathan Nobis:

I just want to add that the *metaphor* of "degrees" of consciousness does not make sense: consciousness is NOT like a thermometer.

Different conscious beings are conscious of different things, and more and less things, but no conscious being is "more conscious" than another or conscious "to a higher degree." That's not a helpful way to talk.

Daniel Greco:

Yes this is something that's always bugged me about IIT. It purports to be so objective, but the thing it objectively measures is not obviously something that makes any sense to talk about.

Nathan Nobis:

What is IIT? (I don't know if I am spelling what you wrote correctly; can't cut and paste either.)

Daniel Greco:

Integrated information theory, referenced in Bentham's post. It purports to give an objective measure of how much consciousness a system contains, in terms of how informationally integrated its parts are.
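[Ed.: for a flavor of what an "objective measure of integration" could look like, here is a toy calculation using mutual information between two parts of a system. IIT's actual Φ is far more involved (it minimizes over all partitions of the system), so this is only an illustration of the kind of quantity involved.]

```python
# Toy "integration" measure: mutual information between two binary
# parts of a system, computed from their joint distribution.
# (Much simpler than IIT's actual phi.)
from math import log2

def mutual_information(joint):
    """joint[x][y] = P(X=x, Y=y) for two binary parts X and Y."""
    px = [sum(row) for row in joint]
    py = [sum(col) for col in zip(*joint)]
    return sum(p * log2(p / (px[x] * py[y]))
               for x, row in enumerate(joint)
               for y, p in enumerate(row) if p > 0)

independent = [[0.25, 0.25], [0.25, 0.25]]  # parts carry no info about each other
locked      = [[0.5, 0.0], [0.0, 0.5]]      # parts perfectly correlated

assert mutual_information(independent) == 0.0
assert mutual_information(locked) == 1.0    # 1 bit of "integration"
```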

Ponti Min:

I'm sure the thoughts in my mind are more complex and qualitatively different than the thoughts in my cat's mind.

Nathan Nobis:

And some of the cat’s thoughts are more complex and qualitatively different from your thoughts.

This does not support the “degrees of consciousness” metaphor.

Ponti Min:

> some of the cat’s thoughts are more complex [than] your thoughts.

Evidence for this?

Nathan Nobis:

No, you can do it by providing evidence for your initial claim: develop or offer some way to measure thoughts. And then reflect on how cats perceive some things that you don't.

Ponti Min:

Well I'm fairly certain my cat isn't posting on the internet debating the nature of consciousness.

> And then reflect on how cats perceive some things that you don't.

He does, however, take a keen interest in the box I keep the cat treats in.

Jonathan Ray:

On non-physicalism, there’s no reason not to suppose protons may be conscious, and shut down CERN out of an abundance of caution (or fund CERN with 100% of GDP because proton welfare is the most important thing in the universe???)

Stuart Armstrong:

>Consciousness is either present or absent. Either the switch is on or off. It cannot be vague or a matter of mere interpretation whether you have experiences. Either there’s something it’s like to be you or there isn’t.

This seems entirely wrong to me - I've had times where I've felt a lot more conscious than others. Tired, drunk, in flow, in conversation, self-reflecting, meditating, during some sex, half-asleep... All feel different in quantity and quality of consciousness.

Joe:

The notions of "vague" and "higher-level" properties seem (ironically) vague, allowing them to take on several meanings, which makes the 4-point argument unconvincing to me. For example:

Someone has attempted to make a nuclear bomb. Is it functional, or will it fail to detonate?

1. It's not "vague" whether it denotates – just test it.

2. If that's not vague, then it only depends on non-vague properties.

3. But functional and non-functional nuclear bombs look almost identical – in fact, you need to understand higher-level physics concepts to try to answer this, and even the best physicists would have to use a lot of heuristics and estimation. These are vague properties.

4. Contradiction.

I'm probably misunderstanding you in a few ways, so feel free to correct me. My feeling is that "vague/higher-level properties" tend to be approximations of some more specific property. If I claim that "consciousness depends on intelligence", then the fact that I am unable to give an objective definition of intelligence doesn't invalidate this view, because the claim is more like "there exists some precise property which consciousness depends on; I don't know the exact property, but I think it can be approximated by the word 'intelligence'".

But anyway, in the space of all possible objective properties, there are loads that pretty cleanly divide humans from everything else; I don't support these (and I believe many animals are probably conscious), but they definitely exist. Sure, when someone says "there's just something about humanness that I think is necessary-and-sufficient for consciousness", it sounds like a vague property; but if in fact they're claiming "there does exist a precise property for this, but I'm not sure what exactly it is", then I don't see why this position is inconsistent.

Mathias Mas:

-very much agree with most of your explaining of concepts, very clear.

-very much agree that this is where the discussion of animal suffering should be (consciousness)

-but I will probably never understand your reasoning. I think my main issue is that the premises in your reasoning are so technical, depending on other technical premises, which in turn depend on still other technical premises, etc. The reasoning looks robust but the building blocks look fragile.

-respect for using premises that you don't even support yourself.

HD:

This is probably deathly pedantic, but the ungrammatical "octopi" is not the plural of "octopus" in English; "octopuses" is fine.

Dom:

The central argument is logically invalid. "Consciousness depends on non-vague properties" isn't the negation of "consciousness depends on vague properties"; consciousness could depend on both.

Justin D'Ambrosio:

This is a bunch of sophistry and would really benefit from serious study of what vagueness is, what it is for something vague to supervene on something precise, and what the actual consequences of a precise notion of consciousness are for physicalism and dualism. You just say a bunch of shit when people have actually worked out these ideas in detail. Give me a break.

Justin D'Ambrosio:

Someone needs to put a moratorium on undergrads trying to do philosophy concerning subjects that are already much better understood than they could possibly know.

Collard:

What would it mean for something not to be experienced?

Mark Slight:

Thank you.

Well that's certainly not a fact (as you acknowledge). While I used to think so too, I am now confident that it's just wrong. I think the best shot you got in that case is to claim that either you remember an experience or you don't (but I think that's wrong too). Cartesian Gravity seems strong with you, my friend!
