77 Comments
Daniel Greco

I used to completely dismiss substrate-dependence, and I've moved in the direction of taking it more seriously. Not the brute kind, but the functional kind. I think you're a bit quick to say: "this doesn't fit very well with neuroscience." I've found Peter Godfrey-Smith's recent(ish) stuff on this really interesting, and he makes a plausible case that there's some information processing at the cellular level that really does depend on the particular chemical makeup of cells, in a way that it's hard to imagine replicating with very different materials. Granted, you might think that's not the information processing that matters for consciousness, but I have come away from reading his stuff thinking the issue is harder than I initially thought:

https://petergodfreysmith.com/wp-content/uploads/2013/06/Mind_Matter_Metabolism_PGS_2014_DW6.pdf

JerL

I haven't read the Godfrey-Smith stuff, so sorry if what I say below is already covered in it, but Nick Lane is another person who has an "energy > information" view about many biological topics, and, I think, at least weakly about consciousness too.

I believe it's from him that I learned that many anesthetics work on single-celled organisms, apparently by messing with their mitochondria. Assuming some similarity in the mechanism by which they work on us, this seems to suggest that something about being conscious is influenced by the energetics of our cells.

Scott Alexander

I'm less sure about the substrate-dependence thing than you are, for two reasons:

- Apparently integrated information theory is substrate dependent? This surprises me - it seems purely computational - but I think it has something to do with the structure rather than the function. As long as I don't understand this fully, I'm not sure I understand the concept of "substrate dependence", and worry that something like IIT which feels computational but turns out to be substrate dependent might be true.

- Qualia Research Institute seems to be leaning toward some theory that the electromagnetic fields in the brain are responsible for consciousness, and that the properties of consciousness are closely tied to those of EM fields. Although you could probably simulate an EM field on a computer, that seems kind of like cheating, and until I understand more about what they mean I'm not sure it would even work.

I think the "fading" argument just piggybacks on the general paradoxicalness of consciousness affecting behavior (especially the behavior of saying "I am conscious"). Until we have a theory of how consciousness causes reports of consciousness, everything in this category will sound paradoxical.

Bentham's Bulldog

1) IIT leaves open the possibility of digital consciousness. It doesn't say any old AI will be conscious, but it holds you can make digital consciousness.

2) Yeah I've heard that theory but it doesn't seem that likely, it's not super mainstream in consciousness studies (is my sense), and it still allows for digital consciousness.

Precisely *which* digital minds produce consciousness will vary across the theories, but most will hold that there can be digital minds.

Not really seeing why the fading argument piggybacks on consciousness affecting behavior. It just depends on the ideas that you'd get the same behavior from the same functions, and that you wouldn't get weird radical disharmony, without noticing, if you swapped in digital neurons. Both seem pretty plausible.

JerL

On the simulating-an-EM-field argument, here's my best argument for something like substrate dependence:

We can (for the sake of argument) simulate the Ising model perfectly on a computer, or even better for my point, on a grid drawn on paper. And we will have that the behaviour of our simulated grid is fully isomorphic to an actual ferromagnet: informationally, there's nothing missing. But if you hold up a fridge magnet to your piece of paper, it won't stick... Not because there's something magic about iron or non-magic about paper, but because the "charge" on your paper isn't the electromagnetic charge; it's a charge in a field that behaves isomorphically when restricted to your piece of paper, but has no meaning off your piece of paper.
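
A minimal sketch of the kind of simulation I mean, assuming a standard 2D grid updated with the Metropolis rule (the code and parameters here are illustrative only):

```python
import math
import random

def metropolis_ising(n=20, beta=0.6, steps=100_000):
    """Simulate a 2D Ising ferromagnet with the Metropolis rule.

    The grid's statistics are isomorphic to a real ferromagnet's, but
    the "spins" are just ints in memory: they carry no magnetic moment.
    """
    spins = [[random.choice((-1, 1)) for _ in range(n)] for _ in range(n)]
    for _ in range(steps):
        i, j = random.randrange(n), random.randrange(n)
        # Sum of the four nearest neighbours (periodic boundaries).
        nb = (spins[(i + 1) % n][j] + spins[(i - 1) % n][j]
              + spins[i][(j + 1) % n] + spins[i][(j - 1) % n])
        dE = 2 * spins[i][j] * nb  # energy cost of flipping spin (i, j)
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            spins[i][j] *= -1
    return spins

grid = metropolis_ising()
m = sum(map(sum, grid)) / 400  # mean "magnetization" of the 20x20 grid
print(f"{m:+.3f}")
```

The printed "magnetization" tracks what a real magnet does below its critical temperature, yet holding a fridge magnet up to the machine (or to a printout of the grid) does nothing: the isomorphism holds only inside the simulation.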

So it will be the case that "internally" your grid is indistinguishable from the EM field, but externally, one is an emergent property of zillions of ink and paper molecules, themselves built out of other fields in complicated ways... And the other is built directly out of the EM field.

Analogously, a simulation of a massive body bending spacetime does not itself attract things gravitationally, and not because gravity fails to be "substrate independent". Why couldn't it likewise be that a simulation of a consciousness isn't conscious, even though there's nothing special about the substrate that most conscious things run on?

dov

Heyo Scott, I love ur work

James

Well, the fading argument should at least help guide our intuitions, right? There must be some solution to this paradox: either you stop being conscious or you do not, or maybe some secret third thing?

Re QRI: doesn't every consciousness researcher who tries to be scientific just end up getting one-shotted by some idea? For these people it's EM fields; for Penrose it was "quantum microtubules." And the solutions are just never that satisfying to me. Is QRI more serious/different than this?

(Not to say Penrose isn't serious, of course he is, but I do think he just got one-shotted by an idea that would solve consciousness.)

metachirality

The Qualia Research Institute does actual work in cognitive science in addition to philosophy of mind.

James

And Penrose did actual work in quantum physics…

metachirality

Which is a field that has nearly fuck all to do with philosophy of mind.

James

Well, but if you propose that the hard problem can be resolved by EM waves, I would listen to someone who knows something about EM waves before I listen to a cognitive scientist.

dov

Most of this sounds good to me, but I'm not convinced by the dancing qualia argument, cuz idk whether it's actually possible to replace our neurons with digital ones.

Btw I loved how u were up front about ur background knowledge and how much work u put into this. Wish more authors did that.

Bentham's Bulldog

I think it's generally recognized to be possible in principle.

JP (Nov 21, edited)

Sort of. Definitely, if you think consciousness simply arises as a function of the spatiotemporal pattern of neuronal spikes, which many do. However, some postulate instead that brain-wide EM fields, wave-like oscillations (e.g., see Earl Miller), or even—unlikely—quantum processes may be necessary for consciousness.

In these scenarios, ostensibly digital neurons may not be enough for consciousness, even if they are enough for the subconscious computations that make up the bulk of brain processing. So such replacement neurons may need to actively instantiate analog physical processes as well, allowing the relevant EM fields or waves to be generated.

Note, though, that such digital neurons would in fact be able to maintain spike-based computations, even without consciousness. So, if consciousness has no functional role, which is possible, we may not be able to tell, even through self-reporting, whether qualia are affected! And so such digital-neuron replacement might really reduce consciousness, but we may be unable to know that since we cannot measure consciousness.
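
As a minimal sketch of what "maintaining spike-based computations" can mean—assuming the simplest standard digital neuron model, a leaky integrate-and-fire unit (illustrative, not a model of any real replacement neuron):

```python
def lif_spike_times(input_current, dt=1.0, tau=10.0, v_thresh=1.0):
    """Leaky integrate-and-fire: a purely digital spike generator.

    If consciousness is a function of the spatiotemporal spike pattern
    alone, a unit like this suffices; it produces the spikes while
    generating none of the analog EM fields of an electrochemical neuron.
    """
    v, spike_times = 0.0, []
    for step, current in enumerate(input_current):
        v += dt * (-v / tau + current)  # leaky membrane integration
        if v >= v_thresh:               # threshold crossing -> spike
            spike_times.append(step * dt)
            v = 0.0                     # reset after the spike
    return spike_times

# Constant suprathreshold drive yields a regular spike train.
print(lif_spike_times([0.15] * 100))
```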

While it may still be possible to create artificial neurons that approximate the relevant physical processes, there’s also at least a possibility that neurons are fine-tuned enough so that any significant deviation will alter the large-scale physics too much to reproduce consciousness.

And so, the end difficulty here is, how would we know? Since consciousness is an entirely private phenomenon, and we’ve already created what is likely unconscious AI that, to some, convincingly claims sentience, how can we know in the end if fading qualia really occurs? How can we know if our artificial neurons are really capturing just the right physics?

TheBorys

The elephant in the room is that no one has come up with any way to assess consciousness besides the Turing test, which AI has already beaten without people caring much. So now essentially no one has any idea how to prove anyone is or is not conscious, beyond the reasoning: I am a human and conscious, so all humans are conscious, and thus all life forms similar to us must be conscious, probably in proportion to their degree of similarity to us.

Odin's Eye

It is broader than the flat, linear framework that most people use for consciousness.

https://philarchive.org/archive/BIRHSW

Odin's Eye

Are you familiar with Jonathan Birch’s multi-dimensional model of consciousness? It’s designed for animalia, but it’s interesting to apply to AI

TheBorys

Never heard of this one

JP (Nov 21, edited)

> Nevertheless, I’m pretty confident that octopuses are conscious from their complex, goal-directed behavior that resembles how conscious organisms behave. Thus, it seems reasonable to infer that a thing is conscious, even if its brain is very different from ours, if it behaves as if it’s conscious. So if AIs behave like they’re conscious—displaying complex, goal-directed behavior—then we should attribute consciousness to them.

I think this goes significantly off-track. While I'm not a philosopher, I am a computational neuroscientist in neuroAI who thinks about consciousness on the side. Behavior alone is obviously not enough to judge whether something is conscious. The developmental process also matters. Moreover, we just don't know what physics is needed for consciousness, and so we don't know, at this point, what the relevant properties to instantiate are.

We know we can approximate any function with static, deep neural networks, eventually likely being able to make something with no consciousness appear conscious. Moreover, behavior is (essentially) finite, so, while it would be tedious, we could simply specify input-output look-up tables, perhaps on top of basic segmentation neural networks and with some added noise. But we know octopuses didn't develop this way, plus they have brains (even if theirs differ in certain ways), so they are more likely to be conscious. Not so for current AI!
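
As a toy sketch of the look-up-table point (the "agents" here are hypothetical illustrations, not any real system):

```python
def computing_agent(stimulus: int) -> str:
    """Derives its response through an internal process."""
    return "withdraw" if stimulus > 5 else "approach"

# Behavior is finite, so we can tabulate it once and for all...
LOOKUP_TABLE = {s: computing_agent(s) for s in range(11)}

def table_agent(stimulus: int) -> str:
    """...and replay it with no internal processing whatsoever."""
    return LOOKUP_TABLE[stimulus]

# Identical input-output behavior, radically different innards.
assert all(computing_agent(s) == table_agent(s) for s in range(11))
```

Any purely behavioral test treats these two agents identically, which is why facts about development and internal structure have to carry extra weight.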

As I mentioned above, we also don’t yet know what leads to consciousness, with some arguing that particular physics are key, such as brain-wide EM fields or wave/oscillatory population dynamics—the idea being that computations do still occur outside of these physics, but they lead to subconscious processing. It’s also possible, if unlikely, that quantum processes are key to consciousness. None of these would be recapitulated by current AI paradigms.

That also comes to bear on the fading qualia question. If it’s not simply the overall spikes or firing rates that make us conscious, then certain digital neurons could in fact interfere with consciousness while maintaining spike-based computations. Whether that leads to any externally-checkable effects, including self-reporting, depends on whether consciousness—in particular those physical processes leading to consciousness—plays any functional role, which we just don’t know.

Why is that? Because, again, we could in principle still have the so-called philosopher’s zombie, where the same input-output structure is instantiated by a function approximator, sans consciousness.

Essentially this boils down to Searle’s Chinese Room thought experiment, which you could of course make hyper-detailed if desired. If you’re the sort who somehow thinks that consciousness exists in the translation system, then you should already think current AI systems are conscious. Most of us don’t, instead thinking that there’s something more in the brain, perhaps some overall dynamics or physical fields, not captured by function approximations like Searle’s. If so, we still don’t know what that something more is.

metachirality

My issue with substrate independence and functionalism is this: how do you determine what is running what function? There are probably ways of interpreting the random movements of atoms in a bucket of water as being not only conscious in a dim panpsychist way but as, like, a guy being tortured or something. If you add a stipulation that more parsimonious interpretations of physical processes as conscious processes are more likely, then that's just substrate dependence; and even worse, if you're using something like Kolmogorov complexity, it seems like the difference in consciousness between a brain running on physics and a brain running on a computer could be exponential.

Lavander

> There are probably ways of interpreting the random movements of atoms in a bucket of water as being not only conscious in a dim way but as, like, a guy being tortured or something

I actually disagree strongly. The tortured guy is a transition function F(s1) -> s2 applied repeatedly to the state. To get any encoding of this function and state at all, without defining the encoder to contain all the info from both F and s, you'd have to consider really large amounts of matter and space; that's how you get Boltzmann brains / dust theory.
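
A toy rendering of the point, assuming "interpreting" the bucket just means supplying a decoder from its microstates to the computation's states (all names here are purely illustrative):

```python
import random

# The computation we want to "find" in the water: s1 -> s2 -> ... via F.
target_states = [f"s{i}" for i in range(1000)]

# The "bucket of water": a sequence of physically meaningless microstates.
water_states = random.sample(range(10**12), k=1000)

# The only way to read the computation off the water is a decoder that
# pairs each microstate with a target state -- i.e., a mapping that
# already contains the entire computation.
decoder = dict(zip(water_states, target_states))

assert [decoder[w] for w in water_states] == target_states
# The decoder grows with the computation: the information lives in the
# interpretation, not in the water.
```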

Additionally, I see this whole line as having an invalid motivation, the same one religious people use in "If you reject moral foundations as set by God, then how can you say that any stuff is bad? It's just your opinion now." Some stuff is just difficult, and the impulse to reach for an easy resolution can lead to mangling your epistemics.

metachirality

> I actually disagree strongly. The tortured guy is a transition function F(s1) -> s2 applied repeatedly to the state. To get any encoding of this function and state at all, without defining the encoder to contain all the info from both F and s, you'd have to consider really large amounts of matter and space; that's how you get Boltzmann brains / dust theory.

The example I used doesn't really matter. IIRC it was the example the SEP used for this problem (though unfortunately I can't find which page anymore).

> Additionally, I see this whole line as having an invalid motivation, the same one religious people use in "If you reject moral foundations as set by God, then how can you say that any stuff is bad? It's just your opinion now." Some stuff is just difficult, and the impulse to reach for an easy resolution can lead to mangling your epistemics.

I don't really see how this is like that.

JerL

Dunno if I'd phrase this as being against functionalism/computationalism necessarily. I'd be OK with saying functionalism is true but we need a better theory of what it means to execute some function; or computationalism is true but we need to know what actually constitutes a computation (in a way that rules out pancomputationalism, as you allude to).

metachirality

I suppose it would make more sense to say this is just against functionalism being substrate independent.

metachirality

The Qualia Research Institute makes similar points here: https://qri.org/blog/against-functionalism

Matt Ruff

I'm new to Chalmers, so forgive me if this objection has already been addressed somewhere, but one problem I see with his neuron experiment is that he's picturing a hardware swap -- neurons made of carbon for functionally identical ones made of silicon.

But the "neurons" in an LLM's neural net aren't physical objects made of silicon, they're pieces of code. Yes, silicon chips are a vital part of the machine the code runs on, but the real substrate, it seems to me, is *software*.

In Chalmers' terms, this is like replacing my biological neurons, one at a time, with Minecraft voxels. Never mind whether that would work -- I'm not even sure what it would mean.

SMK

I don't find your arguments against substrate dependence persuasive at all. Here's why.

It's not just that it's "strange" that pipes or whatever should be conscious, but "life's weird, whatever." It's that *a system of pipes doesn't actually exist as a thing in the real world.* It's just a bunch of unrelated atoms / molecules that happen to be interacting with each other, and *our minds* come along and draw a line around it and call it "a system of pipes."

The exact same thing is true of a calculator. *And the exact same thing is true of a computer, however complicated.* Computers are not things-in-the-world. They're part of the map, not the territory. Atoms and molecules are (arguably) things in the world. But nothing ontological really connects any of the atoms in a computer with any of the other atoms, apart from our putting a mental box around them and calling their mutual interaction "computation." That's true however impressive that computation may be, or however amazing its causal interaction with the rest of the world.

Consciousness, on the other hand, is unified. My brain gives rise to a *single, unified* conscious state. And so, crazy as it seems, my brain actually does have a unity that is not just conventional, not just conceptually imposed for convenience, but is there in the world. It must be, because one of its effects -- consciousness -- is certainly an objective feature of the world, and is unified.

I think this unity-of-consciousness feature is very bad for any physicalist or mechanistic or (even) non-substance-based account of consciousness. But *if* they can be saved at all, it seems they must be saved by something like quantum entanglement (à la Penrose et al.), because entangled particles *really can be viewed* as a single thing in the world, not just separate things being conventionally treated as one thing for the convenience of a thinker.

That unity must be a feature of whatever gives rise to consciousness. It *might* be a feature of physical brains (that's controversial, but had better be true, IMO, if any mechanistic account of consciousness can possibly be saved); it certainly is not a feature of computers, however complicated. So computers simply can't be conscious.

JerL

I agree with parts of this, but I think it's plausible both that our current computers aren't so constituted as to be computing in a deeper sense than mere convention, and that we might be able to build things out of the same materials that *do* have that property; in which case it wouldn't be the substrate that's the problem for computers, just the fact that they're not entangled with themselves/the rest of the world in the right way.

SMK (Nov 21, edited)

I think -- and you can correct me if I'm wrong -- I view this as mostly a semantic difference. I would think of computers modified to have a more intrinsically unified ontology as a different substrate. But one could say it's the same substrate but with different properties. I don't care too much about the words as long as the concepts are clear.

To be clear, I also don't *personally* think that they could be conscious even then, because I am skeptical that entanglement actually is what's going on with brains. But I'm not dogmatically skeptical about it, and in any event, *this* argument against their consciousness would not succeed (at least as easily).

In any event, thank you for the comment.

JerL

Yeah, I think it's actually pretty unclear what would constitute substrate dependence -- BB talks like it's a matter of carbon vs. silicon, which I agree sounds implausible, but I'm sympathetic to things that might count as substrate dependence, on a somewhat broad construal of the notion of substrate.

The Ancient Geek

"It could depend brutely on material. It could be that only carbon can produce consciousness, not because carbon does anything special, but just because, as a brute fact, only carbon can produce minds. This would make consciousness totally unlike all the universe’s other properties (barring trivial ones like being made of carbon). "

No, it would be like most of the universe's properties. Only specific materials are magnetic, conducting, etc. Substrate independence is exceptional. (You have possibly confused substrate dependence with the idea that there is exactly one substance that has magnetic properties, etc.)

Also, consciousness is already unique because it is (in the Hard Problem sense, at least) subjective.

Mark Slight

Nice post! I think it raises some interesting challenges for you:

Are you a physicalist about software? Are you a physicalist about current AI traits?

Lili ɞ˚‧。⋆

But what about the issue of symbol grounding and intentionality?

Not to write off substrate dependence as an issue, but it seems like, even before that, any system of computation, even a connectionist one (as would be applicable to your AI example), still has the issue of accounting for the meaningfulness of its outputs without deferring to an already-established conscious being (i.e., a human).

An LLM's output is only meaningful insofar as we have ascribed meaning to the symbols it is outputting, and it can only output those symbols on the basis of already-inputted symbols (which are also devoid of observer-independent meaning).

An LLM does not have to encounter a chair to output the word; the word only has to occur in a wide enough variety of contexts during its training.

How would it know what that word means? By deferring to the other symbols in those contexts. But how would it know what those symbols mean? By deferring again to other symbols. It can't step out of the symbol-symbol loop into the world of semantics without grounding those symbols in experience. It needs an internal narrator, which must exist outside of the symbolic system. This would require a degree of embodiment that is not mediated via symbolic representation (e.g., a live video feed), which would make it difficult to defend conscious AI.

Not in virtue of its substrate, but in virtue of what the substrate is realising (a neural network).
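
A stripped-down sketch of that symbol-to-symbol loop, using toy co-occurrence counts (distributional semantics in miniature; the corpus is made up for illustration):

```python
from collections import Counter
from itertools import combinations

corpus = [
    "the chair stood by the table",
    "she sat on the chair",
    "the table held the lamp",
]

# A word's "meaning" here is just its co-occurrence profile with other words.
cooc = Counter()
for sentence in corpus:
    for a, b in combinations(sentence.split(), 2):
        cooc[(a, b)] += 1
        cooc[(b, a)] += 1

vocab = sorted({w for s in corpus for w in s.split()})
chair_vector = {w: cooc[("chair", w)] for w in vocab}
print(chair_vector)
# Every entry refers to another symbol; nothing in the representation
# points outside the system to an actual chair.
```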

Odin's Eye

At its essence, all our thoughts are electrons being swapped among molecules. We are biochemical factories. Are you saying that silicon substrates will never equate to our carbon-based ones? Or might they eventually?

Odin's Eye

Brilliant piece. Thank you! A couple of questions. Are you familiar with Jonathan Birch’s multi-dimensional model of consciousness? What would you think of an AI which changes its behavior in order to preserve tokens only after it learns of the termination of another AI which hit immutable message limits?

William Sanchez

The substrate dependence of consciousness is something that seems to be observed anytime someone sustains damage to their body causing them to lose some sort of conscious function (e.g., hearing loss from ear damage, vision loss from eye damage). But we have also observed the multiple realizability of consciousness when someone can have their auditory consciousness repaired through cochlear implants etc.

But consciousness is not just the substrate itself. Since a dead brain in a dead body does not sustain consciousness, we know it's not just the physical substrate that matters; the physical process that generates consciousness matters too. If that physical process isn't operating, the substrate itself (the brain) isn't manifesting conscious experiences. So both the anatomy and the physiology of the nervous system need to be looked at: consciousness = substrate + process.

That all fits into the complex systems explanation from modern physicalist approaches to consciousness.

What are your thoughts on the explanation of emergent physicalism from John Mallatt & Todd Feinberg?

Here's their paper, "Phenomenal Consciousness and Emergence: Eliminating the Explanatory Gap":

https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2020.01041/full

Under this explanation, AI will definitely be able to achieve consciousness at some point, even when operating on a different substrate.

In their other paper "The Evolutionary and Genetic Origins of Consciousness in the Cambrian Period" they detail some of the routes of convergent evolution that different species underwent resulting in a wide array of consciousness throughout the animal kingdom.

Dennis

Many years later the robots are asking, "Can biological intelligence be conscious?"

James Diacoumis

It might be worth noting that Cutter's argument is specifically about whether AIs have immaterial souls, rather than about consciousness alone.

Since immaterial souls come with a lot of ontological baggage that many people will want to deny, you could use Eric Schwitzgebel's Copernican argument, which makes a similar case but for consciousness only. https://arxiv.org/pdf/2412.00008