133 Comments

I think the weakest premise is the first one.

This thought experiment, like many other consciousness-related ones, is trying to pump an intuition by expecting us to be on board with a premise that no person can possibly have any intuition about. I frankly think it's a little disingenuous that the setup includes a humble little scientist working from her humble little room reading her textbooks. *But* she also knows every single physical fact -- which, when taken seriously, makes her a godlike Laplacian demon. I don't think anyone can relate to that, and intuitions pumped through that relatability cannot be trusted.

I really enjoyed the post and I do agree with some of your criticisms of physicalists, but I don't think this thought experiment is helping much.

I don’t think this works. How does the relatability of the thought experiment matter? There are weird physics thought experiments. How is that relevant to whether they work? Thought experiments are just like applied conditional reasoning.

I can think of a lot of weird mathematical word problems. Does that mean math doesn’t apply because it’s too weird? Can Mary not learn new things because it’s too weird?

Not everything we call a "thought experiment" is conceptually the same, or employed for similar purposes. This particular one (and I'd add p-zombies and the Chinese room to this group as well) is trying to give you the intuition that consciousness can't be purely physical by telling a seemingly simple fact-learning story and prompting you to imagine what that learning would be like. But the learning in question can't be described in any way by how we regularly deploy the word or think about the concept of learning. The story wants us to go "ah yes, there's still something left even after one has learned the physical facts," when there's no way you can reach that conclusion, because what "learning all of physics" entails is utterly inaccessible to anyone anywhere. We can have no intuition about what state a being like that would find itself in. But the argument hinges on that relatability.

I am not a physicist, but the physics thought experiments I can think of serve an inherently different purpose. Schrodinger's cat shows how quantum weirdness can lead to daily-life weirdness. It doesn't require you to imagine the conscious state of a godlike entity (you can't). The twin paradox, or Hilbert's hotel in math, etc. are also very different. None that I can think of asks you to take an experiential leap like this.

The Hilbert’s hotel thought experiment is probably *worse* given the conditions you’ve laid out. An infinity of hotel rooms is mind-boggling and difficult for many people even to understand. And denying that these thought experiments are using “learn” in a normal sense is absurd to me. Perhaps you use a different dictionary? But regardless, one could use the word "shmearn" in place of "learn" to refer to the concept happening here and the thought experiment works just fine.

"A hotel with infinite hotel rooms" is "inaccessible" in the way that you've described here, and so is knowing all the atoms in the known universe. Yet such a thing is logically possible. This strikes me as similar to denying that arguments against theism/Christianity work because "The Lord works in mysterious ways,” except that actually makes sense sometimes! But you can’t respond to the paradox of the stone with such an argument.

It's different in this way:

In Mary's room, I am asked to make claims about the conscious state of a Laplacian demon.

In Hilbert's hotel, I don't need to put myself in the minds of hotel guests, or understand how they would experience reality.

Their conscious states are irrelevant to the argument. It's essentially just a mathematical proof with a cute story to make it more digestible (whether it's actually more digestible this way is irrelevant).

I'm curious how you would use a random term to replace "learn" and still make the experiment work. Learn is doing all the work in the story.

Imagine my thought experiment and find the flaw in it: You learn all the physical facts there are to be known, so you know the past and future perfectly. But you can still make choices in the daily-life sense, because, you know, you're still you. Therefore you can prove determinism is wrong and there is libertarian free will.

"Mary in her black and white room can, merely by reading textbooks, learn everything physical. "

Can Mary also learn how to ride a bike merely by reading textbooks? If she can't learn how to bike, does that mean biking is not physical? (I mean regular biking - nothing like ET-style in the sky, magical biking.)

What if Mary could devise an apparatus like those used in Matrix (see the way Neo learns Kung-fu) and download the color red into her brain? Would that work?

I keep coming back to this premise: "Mary in her black and white room can, merely by reading textbooks, learn everything physical." It seems so demanding that it makes physicalism impossible. Because of course there are always things you can't just learn by reading textbooks.

I mean, you could replace Mary the Scientist with Mary the AI Robot, which is obviously physical. But even then there may be purely "physical" things Mary the AI Robot wouldn't be able to learn just by reading textbooks and/or going through data.

Learning to bike involves creating new pathways/reflexes in your brain. There are limitations in the human body that make it so you can only do it with training. Biking is physical of course. But the experience of biking (feel of muscles or equilibrium) is not, or so goes the argument.

I don't think the premise is demanding as stated. More precisely, it says that Mary can predict the behavior of any system (including human beings) because she knows all the laws of physics and can deduce any evolution of the system. If Physicalism is true, then you should have a full description of the universe. If you cannot use that to deduce "what it's like" for a person to feel pain/redness/sounds then something is missing.

I guess the fundamental question is: does experiencing (rather than knowing) something = new knowledge necessarily?

If you answer no, you're in a weird spot where "feeling something" simply lies outside the information content of the universe. It exists separately and cannot be expressed by the information in the universe.

> If Physicalism is true, then you should have a full description of the universe. If you cannot use that to deduce "what it's like" for a person to feel pain/redness/sounds then something is missing.

I am not going to repeat myself re your other points. But notice that this implies physicalism is impossible because I just don’t see how it’s possible (in any possible world) to deduce what red looks like merely from descriptive facts about the universe.

And I find that to be a fantastic conclusion - just like the requirement you ought to be able to deduce what red looks like from descriptive facts.

> Learning to bike involves creating new pathways/reflexes in your brain. There are limitations in the human body that make it so you can only do it with training.

Likewise perceiving red requires certain activity in the brain. Likewise there are limitations that make it so that you can only do it when actually observing a red object, instead of reading about it.

It doesn't make observation of red a non-physical thing, no more than it does biking.

> Biking is physical of course. But the experience of biking (feel of muscles or equilibrium) is not, or so goes the argument.

If biking is physical and yet one can't learn it from reading about biking, then the knowledge argument fails; namely, premise 1 is not generally correct.

Or are you saying that the feeling of equilibrium is non-physical and required for the successful act of riding a bike? That would mean that no robot lacking a phenomenal consciousness will ever be able to achieve this feat. Is it your position?

> I guess the fundamental question is: does experiencing (rather than knowing) something = new knowledge necessarily?

Not the crux at all. Physicalists can happily grant that experience is a form of knowledge. The disagreement is whether it's physical or non-physical knowledge.

> That would mean that no robot lacking a phenomenal consciousness will ever be able to achieve this feat. Is it your position?

Of course not. I am just saying that physically training your body for something is not (necessarily) knowledge-acquisition.

Correct me if I'm wrong but it seems that both of you are saying that direct experience is a form of (physical) knowledge but one that cannot be acquired through information transfer by text or voice or general bits of information?

Would you then agree that a computer with full knowledge of all physical laws and able to simulate any system can still not know some direct sensory facts?

FYI, just for clarity, I am actually a bit undecided on this so I'm interested in other perspectives.

> Correct me if I'm wrong but it seems that both of you are saying that direct experience is a form of (physical) knowledge but one that cannot be acquired through information transfer by text or voice or general bits of information?

Looking at things can give you bits of information about things. And reading about things can also give you bits of information about things. You may treat these as two different information channels. Some information can be received interchangeably through either of the channels. Some only through one of them. None of this has anything to do with phenomenal consciousness.

> Would you then agree that a computer with full knowledge of all physical laws and able to simulate any system can still not know some direct sensory facts?

I think I answered this question in this comment:

"Now, consider a Mary who is not a human prodigy but an arbitrarily powerful Solomonoff inductor with a visual feed - a superintelligent AI that nevertheless doesn't have any phenomenal conscious experience and is not allowed to modify its visual feed. While being in the room it accumulates a lot of facts about the color red and forms a logical network of inferences. It can even send all the information to a remote server, where everyone will be able to see what it knows about the color red. However, due to not observing any red images, our Mary lacks an extensional definition. So in the database, there will be an empty place for a picture of something red. Then, after Mary leaves the room and receives a red image, this empty space in the database will be filled. Definitely Mary learned something new, and it wasn't knowledge of conscious experience of the color red, because the AI doesn't have such experience. Therefore we have to conclude that the fact of learning anything new upon observation of a red object doesn't mean that this knowledge is non-physical."
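The database picture in this comment can be sketched in a few lines of code. This is a toy illustration under my own assumptions (the field names and the `observe` helper are invented for the sketch, not taken from the comment): propositional knowledge about red sits alongside one empty slot for the extensional example, which only an observation can fill.

```python
# Illustrative toy: AI-Mary's knowledge base about red. It holds rich
# descriptive facts, but the extensional entry is empty until observation.
knowledge_about_red = {
    "wavelength_nm": (620, 750),            # descriptive fact
    "typical_objects": ["ripe tomato", "stop sign"],
    "example_image": None,                  # the gap: no red image ever seen
}

def observe(db, image_bytes):
    """Leaving the room: the visual feed supplies the missing entry."""
    if db["example_image"] is None:
        db["example_image"] = image_bytes
    return db

# A red pixel arrives on the visual feed and fills the empty slot.
observe(knowledge_about_red, b"\xff\x00\x00")
```

On this picture, "learning something new" is just a state change in the database, which is why the comment concludes it needn't be non-physical knowledge.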

> Some information can be received interchangeably through either of the channels. Some only through one of them.

Problem is, there's not much information about red. It's a wavelength. That's it. If you can reliably detect that, then you've detected red. Same applies to other qualia. Or are you saying the information is in the way the brain interacts with red?

This also applies to your Solomonoff-Mary example. It's very easy for a normal AI (no phenomenal conscious experience) to re-create an image of something red without using a visual feed at all, simply based on the wavelength of emitted light.
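To make that claim concrete, here is a minimal sketch of how a program could go from a wavelength to a red image with no camera involved. The piecewise mapping is a rough standard approximation of the visible spectrum (the breakpoints are a simplifying assumption, not physiology), and the image is just a grid of pixel triples:

```python
def wavelength_to_rgb(nm):
    """Rough piecewise approximation of visible-spectrum wavelength -> RGB."""
    if 380 <= nm < 440:   r, g, b = (440 - nm) / 60, 0.0, 1.0
    elif 440 <= nm < 490: r, g, b = 0.0, (nm - 440) / 50, 1.0
    elif 490 <= nm < 510: r, g, b = 0.0, 1.0, (510 - nm) / 20
    elif 510 <= nm < 580: r, g, b = (nm - 510) / 70, 1.0, 0.0
    elif 580 <= nm < 645: r, g, b = 1.0, (645 - nm) / 65, 0.0
    elif 645 <= nm <= 780: r, g, b = 1.0, 0.0, 0.0
    else:                 r, g, b = 0.0, 0.0, 0.0
    return tuple(round(255 * c) for c in (r, g, b))

def solid_image(rgb, width=4, height=4):
    """A 'red image' is nothing more than a grid of identical pixel triples."""
    return [[rgb] * width for _ in range(height)]

red = wavelength_to_rgb(650)   # ~650 nm light maps to pure red, (255, 0, 0)
img = solid_image(red)         # a small all-red image, generated from a number
```

Nothing here observes anything: the red image falls out of the descriptive fact "red is roughly 620-750 nm" plus an encoding convention.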

> Or are you saying the information is in the way the brain interacts with red?

Yes, information about how systems of different kinds process red things is obviously among the information about the color red.

> It's very easy for a normal AI (no phenomenal conscious experience) to re-create an image of something red without using a visual feed at all. Simply based on the wavelength of emitted light.

Only if it has access to software that can create images from the RGB encoding/wavelength, which would ruin the symmetry between it and human Mary, who doesn't have any access to such software or write access to her own visual stream.

On the other hand, if Mary were allowed to use her knowledge of neuroscience to modify her brain in such a manner that she perceived black objects as red, she would also not learn anything new after leaving the room.

The point is that consciousness is a red herring here (pun intended). We put non-conscious AI and conscious human in the same conditions - we get same results.

This is a great response! I think your bike example gets at the heart of the problem. Understanding how the body is capable of doing something is not the same as learning how to do it.

The argument relies on conflating knowledge with reality. I think physicalists can and should deny premise 1, which assumes that everything about the physical world can be represented propositionally. That's false. Language, syntactic structures, formal systems: they are not identical with reality and can never fully represent it, not in theory, not actually, never.

When Mary sees red for the first time, she learns something new, but what she learns cannot be expressed in a proposition, only referenced by a proposition (e.g. 'the experience of red'). Philosophers are always confusing the limits of language with the limits of reality: this argument is another egregious example.

I think an acquaintance response actually does succeed, but interesting blog article nevertheless

It seems easy to strengthen the argument to preclude such a response:

https://www.philosophyetc.net/2008/11/new-knowledge-argument.html

1. It is a factual question whether you and I experience the same color sensations when looking at an object.

2. This question cannot be settled by any physical information (or scientific inquiry).

C. So there are non-physical facts.

I don't agree with 2. I think that requires us to believe that conscious experiences don't map to brain states -- i.e., that instantiations of the same brain state, subject to identical stimuli, can yield distinct experiences (due to fairy dust?). I don't see any reason why we would believe that.

You can *think* that conscious experiences map to brain states - the premise doesn't take a stand on this either way. It just notes that you can't *establish* this with any amount of physical info. So the question we're wondering about, when we wonder whether we see the same color, is not the same as the question whether we are in the same brain state when we look at the object.

If it maps to a brain state that's physical, why can't physical info establish whether the experience is the same? Even today we can look at two brain states and tell which one is experiencing which color. I think that's even a lower bar than the original thought experiment. Maybe I'm missing something in your argument, but to me it looks like your premise assumes the truth of your conclusion: if it's not physical, then physical info can't establish it, so it's not physical.

C doesn't follow from 1 and 2. The inference assumes without argument that the question can be settled.

Also, facts are not physical or non-physical: they are just propositions. What does it mean to say that the content of a proposition is physical or non-physical?

The argument doesn't assume that *we* can settle the question. Just that it *is* settled, and if not by any possible physical info, that would make the determinative info (again, whether or not it's actually within our reach) "non-physical" in nature.

A physical fact is a fact that's entailed by a complete description of the physical "base facts". The base facts in a domain are those that ground all the facts in that domain. In the case of physical facts, the base facts could plausibly be those that correspond to a complete microphysical description P of the universe. Any facts not entailed by a complete microphysics (e.g. Q: facts about the existence and qualitative natures of our conscious experiences) would then qualify as non-physical facts.

(Chalmers offers a more sophisticated story along these lines, invoking 2-D semantic neutrality to sidestep worries about a posteriori necessities, and suggesting that a full explanation of the world requires domains "PQTI": physical, qualia, indexical, and a "that's all" clause. The TI parts aren't really "non-physical" in any ordinary sense. But if fundamental qualia facts need to be added on top of microphysics, that's essentially to say that dualism is right.)

And that it _is_ settled is again, assumed without argument.

I would deny that there are any 'qualia' facts that are represented by propositions that Mary doesn't know. The knowledge Mary gains is non-propositional. Anything we know about qualia that can be expressed in propositions, Mary knows (or could know).

The argument doesn't establish there's non-physical knowledge. It establishes that there is non-propositional 'understanding'. If we consider facts to be propositions of some kind, what Mary comes to understand when she sees red is not a fact, so it is also not a 'non-physical' fact.

One can see that the argument is about the information that can be expressed via propositions, not about whether there are non-physical facts, because if there were a relevant proposition expressing the non-physical fact Mary learns when she sees red, she could tell it to her sister Cary, who was also raised in the room, and Cary would learn what red is like without seeing it. But of course there is no such propositional fact (neither physical nor non-physical), so Mary can't tell Cary anything.

It sounds like you're just denying premise 1? You don't think there's any fact of the matter whether or not people are color inverts, etc. (No sense to the familiar question, "What if the color you call green is the color I call red, and vice versa?") The idea is rendered meaningless or incoherent, on your view.

You're always free to reject premises, but fwiw I think this one is about as self-evident as premises in philosophy ever get. (The phrase "assumed without argument" has an accusatory ring to it. But it's hardly a fault of an argument that it contains premises!)

Your last paragraph seems confused. Consider, e.g., the constitutional theory of phenomenal concepts (defended by Chalmers, Balog, and others, and generally regarded as the leading theory of phenomenal concepts on offer). On this account, what Mary learns can be characterized as "The color people call 'red' looks like *RED FLASH*." If someone - like Cary - doesn't already possess the phenomenal concept of RED (as partly constituted by the phenomenal experience *RED FLASH*), there's no way to communicate this thought to them. But that's no reason to deny that it's propositional - truth apt, apt to be the target of propositional attitudes like belief or desire, apt to be embedded in larger propositions using logical connectives, etc. etc.

When there’s no way in principle to establish something, I’m not willing to assume there’s a fact of the matter. We often mistakenly intuit that comparisons make sense which really don’t. So yes, I don’t grant 1, although it’s less that I have good reason to think it’s false and more that I can’t see any reason to give it credence. ‘I think this is as self-evident as premises get’ ain’t worth much to those of us who don’t share the intuition. Maybe my grasping faculty is defective.

I think that we need to distinguish between propositionally structured facts, which are a representation of reality, and reality, which is not propositionally structured. I think it is perfectly reasonable to hold that reality cannot be fully represented via propositions, that there is no such thing, in practice and in principle, as the complete set of facts that describes reality. So I am comfortable with there being aspects of physical reality that can’t be expressed in propositions and are not ‘facts’ in that sense. There’s only a problem for a physicalist if some propositions are non-physical.

“The colour people call red looks like ‘red flash’” is a proposition that Mary could know. If ‘red flash’ is meant to stand in for a phenomenal experience, I deny that the alleged ‘sentence’ _is_ a sentence, is truth apt, is a proposition, etc. etc. The English sentence can be communicated to Mary, Chalmers ‘mentalese’ imagining is not even coherent, in my opinion. It may be a popular analysis, but so what? My confidence in an analytic philosophy reconstruction of ‘phenomenal concepts’ is nil.

Even if we grant Chalmers weird ‘mentalese’, the content of his expression cannot be communicated in language. Suppose (again granting things I doubt are coherent) I experience ‘green flash’ looking at a red object. What exactly am I agreeing to when I hear someone say “The colour people call red looks like ‘red flash’” and nod my head vigorously? Mary’s room establishes only that our phenomenal experience can’t be communicated in language. This does not falsify physicalism! Physicalism is not committed to a thesis about what can be linguistically expressed.

Smart people interrogating their intuitions is a risky basis for a research program. It’s hard to say without sounding very mean, which I don’t intend, but I think a lot of philosophy of mind is a degenerate inquiry. It’s an entertaining game but it can’t command assent because it’s based on ungrounded intuitions and divorced from empirical reality. Philosophers would be better off sticking to philosophy of psychology, which is at least connected to facts about minds.

I’ll read the article you linked today and get back to you on this once I have, thanks for linking it

There's not much more to it than the quick argument I copied over. :-) But if you'd like a more substantial read, I expand upon the core idea further here:

https://www.philosophyetc.net/2016/03/the-basic-reason-to-reject-naturalism.html

I like your version of the knowledge argument as well!

And if a good case, as divorced from reality as the Kalaam argument were made that we do see things alike, would that be a non physical fact?

If this were a Facebook comment, it would have to be censored for misinformation!

Having thought about this argument for some time, my conclusion (which may well be wrong) is that it fails - and whether or not physicalism is correct, you’re not considering the strongest physicalist response to it.

Ultimately, what does it mean to have the experience of seeing red, in terms of the actual physical process taking place? When you see red, your visual system generates some kind of stimulus, and this stimulus gets stored in memory. Subsequently, when you reflect on what it means to see red, this stimulus gets replayed from memory. This is what it means for one to have had an experience of a certain kind.

Note that this is a distinct physical process from learning scientific facts about the process itself. The latter involves a different kind of content stored into memory, and replaying it corresponds to a different kind of experience. It’s like a computer file on your phone that stores a photo, versus another computer file that stores a technical explanation of how exactly a digital camera works in the process of taking and storing this photo.

Now we get to the key point: when Mary learns about all this in her colourless room, this creates stored memories of only the second kind in her brain. At this point she could in principle manipulate her brain to implant memories of the first kind too, at which point she would truly have the experience of having seen red, even if her eyes never really saw the color.

I think this resolves the confusion: in a physicalist world, Mary could in principle manipulate the physical state of her brain to add the experience of seeing red. However, this can’t be achieved just by learning all the facts of what these manipulations would involve. You have to actually do the manipulations, and the thought experiment doesn’t assume this.

I don’t see how any objection to physicalism remains from this thought experiment after this confusion is clarified.

I address that in the article!

I’ve read the article, but I honestly don’t see where you address this argument. (It could be a failure of my reading comprehension -- I would honestly appreciate a clarification.)

Usually in arguments of this kind, I find the most useful way to reduce confusion is to find a computer analogy. In this case, I think we have a very good analogy that clarifies the problem:

(1) Mary seeing red -> a computer with a camera taking a picture of a red object, after which there is an image file with red colour pixels on the computer.

(2) Mary knowing neurological facts about seeing red -> a computer that has a document file with a detailed description of all the electronic circuits involved in capturing and storing that specific red image and how exactly they behave during that process.

There is no information in case (1) that can’t be derived from (2). A computer running a human-level AI could certainly figure out how to generate the red image from the electronics description file, even if it was never connected to a colour camera.
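That "derivable from (2)" claim can be made concrete with a toy sketch. The description format below is entirely invented for illustration (it is not any real file format): the point is only that a program holding a purely descriptive file about how the device stored the photo can regenerate the image bytes of case (1) without ever touching a camera.

```python
# Hypothetical "description file" (case 2): facts about how the device
# encoded and stored the photo, containing no image data itself.
description = {
    "pixel_format": "RGB",          # one byte per channel
    "width": 2,
    "height": 2,
    "channel_values": (255, 0, 0),  # what the sensor wrote for each red pixel
}

def reconstruct_image(desc):
    """Derive the raw image bytes (case 1) purely from the description (case 2)."""
    row = bytes(desc["channel_values"]) * desc["width"]
    return row * desc["height"]

pixels = reconstruct_image(description)  # the bytes a camera would have stored
```

For digital systems the derivation really is this mechanical, which is what makes the brain case feel different: there, performing the analogous "write" is itself a hard engineering problem.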

The sleight of hand (as it seems to me) in the Mary argument is that this is obvious in the case of digital computers — where knowledge of how bits work automatically makes it easy to manipulate them — whereas in the case of human brains, the latter is a difficult problem in its own right.

But certainly, I don’t see any problem for a physicalist here: if Mary induced a physical change in her brain that corresponds to seeing and remembering red, rather than a technical description of that change, she’d have the same conscious experience as if she actually saw the colour. If anything, to me that seems like a case where anything beyond physicalism fails the Occam’s razor.

(This is not to say that there aren’t other, more powerful arguments against physicalism — I’m just focusing on this particular one now.)

The section following “The most common thing they say is roughly the following.”

I’ve read that section again, and while I certainly may be misunderstanding something, to me it seems like it fails to address the argument. You say:

“It [physicalism] implies something much stranger: that Mary, while in the room, would be able to learn what it’s like to see red.”

Which to me sounds incorrect. According to physicalism, there is a particular physical process in Mary’s brain that is equivalent to the conscious experience of seeing red, and another one that is equivalent to the conscious experience of acquiring a scientific understanding of the former.

In the situation posited by the thought experiment, the latter process will occur, and the former won’t, until Mary leaves the room. But I can’t see how this creates a problem for physicalism — if anything, physicalism offers a perfectly clear picture of what it means for either (or both) of these to happen.

(A lot of the physicalist arguments you summarize do sound confused, but I’m not trying to defend those. In particular, of course when Mary actually sees red, she “learns something new”, in the sense that a physical process will occur in her brain that is different from any that could have occurred in the room. But again, I fail to see how this is a problem for physicalism.)

This smells like a philosophical argument that evaporates when empiricism catches up. Once we know how brains work, it'll be obvious. Or put another way: the confusion exists because brains are a (mostly) black box. We expect consciousness to be in that black box but until we turn it white we're speculating.

Really? You think Mary could know what red looks like in the black and white room just by reading enough textbooks?

No, because reading textbooks does not change the brain in the same way as seeing colors. Or at least I'm pretty sure that's true but like you I am speculating because no one understands brains well enough to create one from scratch.

A better thought experiment would be to alter her brain such that it's identical to how it would be if she saw color. Then, yes, she'd know what red looks like. If she could somehow alter her own brain to achieve the same effect then I'd say that dry reading would give her knowledge of the color red.

This is random, but you might enjoy Starship Mage, a long series of books in a universe where aliens bred humans to wield magic. In that series, mages need to directly experience phenomena to influence them. They use fiber optics from outside the ship because cameras don't work.

I addressed that in the article!

I disagree but I'm not really articulate enough to do a good job of explaining why. I'll try though - but feel free to ignore this comment since it won't be very good.

Brains don't hold "knowledge" as some kind of substance. Brains take in information, contextualize it, and emit commands. Some of this is purely internal aka "system 2" or rumination or whatever.

The context of "red" when seen through the eyes is different from the context of "red" as learned from books. It's also different from "red" as thought about purely internally: even if you could perfectly imagine an entire human mind encoded with "red", the context of "red" would be different for you than for that imagined mind. The imagined mind would be conscious and would experience red, but you would not.

If Mary could somehow swap her real mind with her imagined mind, as I'd expect an AGI to be able to do, then she could know what it's like to experience red. For her, the contextualization of red would be the same as for someone who'd actually seen it themselves.

I suppose my view is that all knowledge is necessarily contextualized and in humans that means how we learned something changes its character. For a hypothetical non-human, this could be different.

Well, Mary here is literally a godlike ultramind capable of knowing an infinite number of physical facts and keeping them all together in her head at once, right? How could anyone possibly know anything at all about the abilities of someone like that?

(By don’t deny I mean don’t forget one of the conditions of the thought experiment)

By knowing that she's never seen red as stipulated by the thought experiment. Don't fight or deny the thought experiment! One of the first rules of philosophy.

You have not given an argument to deny this besides your incredulity, which you can at best develop into "seemings" that physicalists will just negate, because it "seems" to them that consciousness is physical.

The question is, do you really think that "enough textbooks" is a reasonable proxy for " all of the physical facts relevant to color perception"?

You continue to play fast and loose re: what “learning”, “knowing”, and “knowledge” mean. Propositional knowledge and phenomenal acquaintance with something such as a color are different things.

I address the acquaintance response!

But, if I understand you correctly, the claim these people are making is that “Mary is gaining an acquaintance with red, but it’s not new knowledge”, right? And you are saying it is knowledge. I agree with you on that. I think they are distinct types of knowledge, and having one doesn’t give you the other. The difference is that knowing all the physical facts doesn’t give you total physical knowledge of red.

Let’s separate the two types of physical knowledge: saber and conocer. Once you have both of these, we can say that you have Total Knowledge. But she didn’t have Total Knowledge until she saw red, even if she had all these facts.

I think you basically agree with that but you think that it’s a problem for physicalism because of the ambiguities in how the thought experiment is phrased.


Can you give a rigorous definition(s) of “learn(ing)” as you’re using it here?


Something like becoming aware of some new fact.


Is “what having the experience of seeing red is like“ the “new fact” one “becomes aware of”? If so, then it sounds like you are equivocating between learning and phenomenal experience. What am I missing?


I'm not equivocating! You learn a new fact: what red looks like. You learn this fact by experiencing red.


Ok then by your logic premise 1 (“Mary in her black and white room can, merely by reading textbooks, learn everything physical.”) is false. By your definition, the kind of learning she does when she experiences red for the first time requires light of a certain wavelength to impact her retina, causing subsequent physiologic changes leading to a perception of color. That kind of learning cannot happen absent her actually seeing red.


Is this 'fact' expressible as a proposition?


I think what's in dispute in the acquaintance objection is whether "what red looks like" is a fact to begin with. The argument is that what Mary has learned isn't a proposition, but something else. For example, she can now simulate the experience of red, something her brain can't do just by reading about the experience because the physical mechanisms that allow it to perform such simulations require her to have actually experienced it before.


(Also not to be pedantic, but rigorous definitions don’t usually start with “something like…”. Much of your arguments hinge on precise use of language and I think you are neglecting that.)


It's hard to precisely define most things, but this definition should be good enough.


There are almost no successful definitions. It's not worth trying to do that. Philosophy has yet to produce an agreed-upon definition of any concept, because it's impossible. But we can still understand what things mean: a rough definition or a thought experiment is sufficient to clear up the confusions in most debates.


I think the definitions are important because the entire argument here rests on a misuse of language.


The problem with this argument, imo, is that it frames Mary's experiences as "what it's like for Mary to see a red rose" and not "what a red rose is like to Mary" - it requires that facts about color perception are made true by objective features of the qualia involved that Mary learns about empirically. But if you don't think that, then what a red rose is like to Mary is just a question about how Mary's brain represents things internally, and I just don't see any reason to take it as obvious that we are guaranteed the ability to apprehend that merely by reading a description of all the physical states involved. What do we know about human psychology or neuroscience that ensures us all facts are knowable in this way? If your ultimate model of the mind is grounded in reified conceptual categories, then objections like this feel powerful, but no physicalist is going to accept that our cognition is so simple as to be framed that way.


Physicalism refutes Mary's room, not vice versa. _Both_ premises of the argument are wrong, and Sean Carroll is of course correct.

> Physicalism doesn’t entail that Mary would ever see red while in the room. It implies something much stranger: that Mary, while in the room, would be able to learn what it’s like to see red.

She will know how the brain state works, but this is _not_ the same as _being_ in that brain state. Compare beliefs and beliefs-in-beliefs. I can say "X simultaneously believes that 2+2=4 and that 2+2=5", but that won't make me believe that no matter how much I try and no matter how well I memorise brain scans of the guy, because my awareness of the contradiction will block me from getting there. Likewise, short of taking hallucinogens, Mary's brain will not get into the state of knowing red by knowing what people who know red know. This doesn't make the brain state non-physical.

As for Chalmers, he plainly makes a claim about the intension of consciousness (namely, that it is equivalent to its extension) that is false both to physicalists and to common sense.


> It seems like no matter how much neuroscience Mary learns, she’d never learn what it’s like to see red.

I think there's a common form underlying many (most?) of the published arguments for non-physicalism, which goes like this:

1. I observe that I don't understand how consciousness works

2. I imagine having a physical explanation for consciousness

3. I observe that, inside my imagination, I still don't understand consciousness

4. Therefore consciousness cannot have a physical explanation

Why does it seem, naively, that Mary doesn't know what seeing red is like? Well, it's because when I imagine knowing everything about neuroscience I still don't understand how the experience of redness works.

But this is flawed. When you imagine having an explanation for something you haven't actually explained, you can't actually include the explanation, because you don't know it! So your imagination just has a box labelled "The Explanation", and you imagine yourself opening it and saying "Wow! This explains everything!" But that imaginary you is wrong, because nothing has actually been explained, and you can tell that, so it seems like there can't be an explanation. This is just what it's like to be genuinely confused by something; it doesn't impart any specialness to the thing itself.

So, I say that Mary does know what it's like to perceive redness. I don't know what that means, exactly, or how it works, because I don't know the things Mary does, but that shouldn't stop me from concluding that she does know those things.


I don't think this holds up. I can imagine someone who knows all physical laws, and the location of every macro-level object in the universe, and thus can predict where any given macro-object will be at any given time. And that's despite obviously not *really* being able to imagine that experience, which would after all be the experience of a god. But the reason it is still imaginable is that I can understand *in principle* how it could happen, because we already use knowledge of physical laws to predict the behavior of objects in the universe. The problem that Mary's Room illustrates is that it is not even possible to imagine *in principle* how knowledge of physical laws could lead to knowledge of what it is like to experience a particular brain state.


You are using a faulty methodology: making obscure claims about pseudopsychological states like "imagining" an entire universe, presupposing that whether or not an agent can "imagine" something entails that no other agent can and that there's some sort of metaphysical truth gained from that, and then appealing to that same faulty methodology to nonempirically justify conclusions that can never be tested and don't integrate well with anything psychologists or neuroscientists know about the brain.


When you imagine someone who knows all physical laws, you're leaving out all the details you don't know! The only difference is that you're willing to accept that the space labelled "explanation of all physics" has the consequences an actual explanation would have, whereas you reject this when the space is labelled "explanation of all neuroscience".


No, the difference is that I can understand in principle how innumerable physical laws could exist and what it would mean for them to explain the behavior of physical systems. I do not understand even theoretically what it could mean to “explain” *the experience itself* of perceiving something.


How can you argue both that Mary wouldn’t understand the perspective of seeing red just by thinking about it, but also that you understand the perspective of Mary just by thinking about it?


Because I also think and have experiences?


And that differentiates you from Mary how?


This example only has traction because of a linguistic ambiguity in that English doesn’t make a distinction between “know” and “know about”. In Spanish, they are two separate things, conocer and saber, respectively. They would never think that knowing everything about a person, saber, means you know them, conocer, because those are two separate things.


I address the response that Mary becomes acquainted with an old fact but gains no propositional knowledge.


P1. If Mary knows all the what-it's-like facts, consciousness is physical.

P2. Mary knows all the what-it's-like facts.

C. Consciousness is physical.

What independent justification does the Mary's room argument give to negate P2? None. It's a bad, question-begging argument that just asserts the truth of nonphysicalism. If you are willing to accept that assertion with no justification, why not accept P2 above? I think you'll find Mary's room gives zero reason not to.

(Compare:

P1. If Mary doesn't know the what-it's-like facts, consciousness is nonphysical.

P2. Mary doesn't know the what-it's-like facts.

C. Consciousness is nonphysical.

Obviously, the physicalist will just give the first argument in response to this, since no independent justification is given to accept the second argument.)


The implicit assumption is that language is a 1:1 map to reality, when it is only an approximation. If instead of reading a textbook, Mary was given a futuristic device that projected the knowledge directly into her brain by rearranging the atoms into the end-state that would "know" the bit of knowledge, seeing red (or remembering what it is like to see red and therefore having knowledge of it) seems completely plausible.

The fact that not all knowledge can be gained through words alone isn't anything new. We have thought experiments, convenient analogies, and deep math for describing the behavior of subatomic particles, but we don't actually have knowledge of the particles directly. Language is the abstraction that serves to approximate reality, and without a brain state being excited that consistently approximates reality (i.e., seeing the color red), "knowledge" loses its traditional meaning. If the "knowledge" of what it's like to see the color red (not the actual color itself, but "the experience of seeing it") is a coherent brain state, then all it takes is arranging the brain in a way that it's having that experience. This can't be done with textbooks, but you can't make your head spontaneously combust or grow an additional foot with textbooks either; they are imprecise tools for imparting all possible brain states.

What's being discussed isn't the color red, it's more precisely the brain state of seeing the color red, and those are different things. We just use language to say "I see the color red" because it would be incredibly inconvenient and pointless to say "certain wavelengths of photons are exciting the cones in my eye to send signals that cause my brain to release certain chemicals and electric impulses which resemble previous times this has happened."


There are two versions of the Mary's Room experiment: one in which premise 1 is wrong, and another in which premise 2 is wrong. Non-physicalist intuitions confuse them into a single entity and therefore smuggle their conclusion into the premises. This usually happens when you try to hand-wavingly use such a category as "all physical knowledge" without actually understanding its implications.

The first is the one where Mary is simply reading papers and studying other brains but is not allowed to modify her own brain. In that experiment she would indeed learn something new when leaving the room, but simply because she didn't truly learn *everything* physical about color in the room.

The second version is the one where Mary *truly* knows everything physical about color, which includes the ability to freely modify her own brain. Then she simply modifies herself so that she perceives objects as red while still in the black-and-white room, and after leaving the room she will not learn anything new about the color.

Now, consider a Mary who is not a human prodigy but an arbitrarily powerful Solomonoff inductor with a visual feed: a superintelligent AI that nevertheless doesn't have any phenomenal conscious experience and is not allowed to modify its visual feed. While in the room it accumulates a lot of facts about the color red and forms a logical network of inferences. It can even send all the information to a remote server, where everyone will be able to see what it knows about the color red. However, having never observed any red images, our Mary lacks an extensional definition. So in the database, there will be an empty place for a picture of something red. Then, after Mary leaves the room and receives a red image, this empty space in the database will be filled. Mary definitely learned something new, and it wasn't knowledge of the conscious experience of the color red, because the AI doesn't have such experience. Therefore we have to conclude that the fact of learning something new upon observing a red object doesn't mean that this knowledge is non-physical.
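To make the database picture concrete, here is a toy sketch of that "empty slot filled on first observation" setup. All the names (`KnowledgeBase`, `observe`, the fact keys) are hypothetical illustrations of the commenter's scenario, not any real inductor:

```python
# Toy model of the "AI Mary" database: propositional facts about red
# are present from the start; the extensional example slot is empty
# until the first actual observation outside the room.

class KnowledgeBase:
    def __init__(self):
        # Facts learnable from textbooks alone (illustrative values).
        self.facts = {
            "wavelength_nm": (620, 750),
            "rgb_encoding": (255, 0, 0),
        }
        # Extensional slot: an actual observed sample. Empty in the room.
        self.example_red_pixel = None

    def observe(self, pixel):
        """Record the first observed sample of red."""
        if self.example_red_pixel is None:
            self.example_red_pixel = pixel

mary = KnowledgeBase()
assert mary.example_red_pixel is None   # inside the room: slot is empty
mary.observe((255, 0, 0))               # first red observation on exit
assert mary.example_red_pixel == (255, 0, 0)  # the slot is now filled
```

On this sketch, what the AI "learns" on leaving the room is exactly the new database entry, with no appeal to phenomenal experience.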


In your first argument, you have simply defined "physical knowledge" to include the experience of what it is like to perceive red. But that is precisely what is up for debate. Is the experience of perceiving something a "physical" fact? I agree that "all physical knowledge" isn't the most precise definition, but the point of the Mary's Room argument is that even with complete knowledge of all laws known to physics and a complete understanding of biology, chemistry, and neuroscience (even far-future versions of these subjects), you will never be able to obtain knowledge of *what it is like* to perceive red. And the reason this is interesting is that *every other* fact about perceiving a physical object could in principle be known through these means. For example, with enough information about the starting state of its brain, Mary could exactly predict the behavior of a rat placed in a maze. But she could not know *what it is like* to be a rat placed in a maze. This seems to imply that there is something different about the kind of knowledge that can only be obtained by experience.

As for your AI example, I don't think it holds up. Wouldn't knowing everything about red allow the AI to produce a red image on the monitors it is hooked up to? Knowing that red occupies certain wavelengths, it could produce the code necessary to display an image emitting those wavelengths (just as Mary could induce a brainstate that correlates with experiencing the perception of red). So I'm not sure what you mean by "an empty place for a picture of something red" in the Mary-AI's database. If the Mary-AI leaves the room and receives perceptual data of the color red through its visual feed for the first time, it won't learn anything new, because all that its visual sensors will receive is the data that red is a color transmitted on certain wavelengths, and per the premises of the thought experiment it will already know those wavelengths. Assuming the AI doesn't have conscious experience, however, it has *not* acquired knowledge of *what it is like* for a conscious being to perceive the color red.


> As for your AI example, I don't think it holds up. Wouldn't knowing everything about red allow the AI to produce a red image on the monitors it is hooked up to? Knowing that red occupies certain wavelengths, it could produce the code necessary to display an image emitting those wavelengths (just as Mary could induce a brainstate that correlates with experiencing the perception of red).

You say it yourself. Producing the red image by the AI is isomorphic to Mary inducing a mind state in herself where she perceives red. If Mary is allowed to do it, then she doesn't learn anything new after leaving the room. Nor will the AI, which has access to image generation based on RGB encoding/wavelength.
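The "image generation based on rgb encoding" step really is mechanical. As a toy illustration (the file name and dimensions are arbitrary choices, not from the discussion), an agent that merely knows red's RGB encoding can already emit a valid red image file in the plain-text PPM format:

```python
# Toy illustration: knowing only that red encodes as (255, 0, 0) is
# enough to produce a valid image file, here a 2x2 plain-text PPM (P3).
RED = (255, 0, 0)            # the propositional fact about red's encoding
WIDTH, HEIGHT = 2, 2         # arbitrary tiny image size

header = f"P3\n{WIDTH} {HEIGHT}\n255\n"          # magic, size, max value
pixels = " ".join(" ".join(map(str, RED)) for _ in range(WIDTH * HEIGHT))
ppm = header + pixels + "\n"

with open("red.ppm", "w") as f:  # "red.ppm" is an arbitrary name
    f.write(ppm)
```

If the agent is permitted this operation, the "empty slot" can be filled from inside the room; forbidding it is what makes the two versions of the scenario come apart.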

However, if neither of them is allowed such manipulations, if all they can do is read facts about the color red and make logical inferences from them, but never generate red images using their knowledge, then both will learn something new upon seeing red for the first time. Namely, they will now have an example of a red object in their memory.

> For example, with enough information about the starting state of its brain, Mary could exactly predict the behavior of a rat placed in a maze. But she could not know *what it is like* to be a rat placed in a maze.

If Mary perfectly knew how data is encoded in both her brain and the rat's brain, it seems possible in principle to make an interpreter from rat to Mary, so that Mary received the same visual/audio feed as the rat. It's not that we are dealing with some fundamentally different kind of knowledge; we just lack the necessary tools to parse it yet.

Imagine if humans lacked the ability to communicate about a particular topic, for instance, trees: any time a human tried to descriptively talk or write anything about trees, they would lose consciousness for a short time. So if one had to learn facts about trees, they would have to do it experimentally, themselves. In such a world people would also treat knowledge about trees as a metaphysically different kind of knowledge, even though for us it is obvious that this is not the case.


Right, and the whole point is that knowing how to induce a brain state is different than the *experience* correlated with that brain state. The computer can tell you how to generate a red image and Mary can tell you how to induce a red-correlated brain state. But neither can tell you what it is to *experience* such a brain state or to see such an image. And I disagree that the computer here has any new information upon seeing a red object, or that its “seeing” red is isomorphic to Mary’s generating a red mind state in herself. As I said, the computer learns nothing new when it “sees” red because by assumption it has no conscious experience. So there is no new data it receives upon “seeing” a red object through its sensors, because it already knows everything about the wavelengths/other physical properties generated by a red object, and those physical properties are all it could “learn” from a non-conscious sensor readout of a red object. But Mary does learn something when she sees red or induces a red state in herself—what it is to *experience* red. Crucially, that experience cannot be reduced to bits, as all the other physical data about red can be.


> Right, and the whole point is that knowing how to induce a brain state is different than the *experience* correlated with that brain state.

And knowing how to write Python code that produces a red image file on a server is different from actually having a red image, in a format the operating system can interpret, uploaded to a server.

> And I disagree that the computer here has any new information upon seeing a red object, or that its “seeing” red is isomorphic to Mary’s generating a red mind state in herself. As I said, the computer learns nothing new when it “sees” red because by assumption it has no conscious experience.

You assume that the reason Mary learns something new is her conscious experience, so naturally you would believe that an unconscious AI will not learn anything. I claim that this is false and use my experiment with AI Mary to demonstrate it. More importantly, in this experiment I precisely describe the new thing the AI will learn: a file with a red object, uploaded to a server. Disagreeing with it means that you believe that an AI, with all the limitations on image generation and editing that a human Mary who isn't allowed to modify her own mind has, could have already uploaded a file with a red object to the server before leaving the room. Either explain how you believe this could be the case or accept that you were mistaken in your premise.

>Crucially, that experience cannot be reduced to bits, as all the other physical data about red can be.

You are free to believe it. There may still be *some incredible difference* between what the human and AI Marys learn after exiting the room. But the point is that even if this difference exists, the Mary's Room thought experiment fails to capture it, as both the AI and Mary will either learn *something new* or not, if subjected to the same restrictions.


What you originally said is that the AI would not be able to “modify its visual feed,” not that it could not create files. You even said it would be able to send data outside of itself. Of course the AI is able to modify and create files in its own memory banks.

But I will accept your altered scenario, where the AI is not allowed to create a file of a red object. In this situation, the AI still does not “learn” anything new when a red file enters its memory. It already knows exactly how to create a red file, and knows everything about the red wavelengths the image will express. The file itself adds no new information about red. Let’s say I forbid you from writing “1+1=2” on a piece of paper. You still know the equation and how to perform the arithmetic, and the mere fact that you aren’t allowed to write it down doesn’t alter your knowledge (of course, if you’ve never had *the experience* of writing 1+1=2 then, just like Mary, you will learn what it is to experience that).

Mary too knows everything about red-correlated brain states. But when she induces in herself a red brain state, she still does learn something new, something which was not already part of her prior knowledge—what it is to *experience* perceiving red.


Would Zombie-Mary also learn something new upon exiting the room and encountering, say, an apple for the first time?


If the argument is "prima facie devastating", and "refutes" physicalism (your words, not mine), then why is it that the *clear majority* of people with the most expertise on the topic think otherwise?


Because philosophers are frequently confused in a more sophisticated way, and have a bias towards naturalist/empiricist positions, like a lot of modern intellectuals.
