Thanks for this post. I've very much enjoyed some of Yudkowsky's work, especially the brilliant _Harry Potter and the Methods of Rationality_. I was unaware of his very implausible views that you mentioned above.
Regarding the Zombie argument, it sounds as if he might be unaware of the meaning of "metaphysical possibility". Most people who haven't studied philosophy use "possible" in a sense closer to physical or nomological possibility. So he probably thinks that the Zombie argument uses the premise that Zombies are nomologically possible. He'd almost be correct to think that, if consciousness is in fact efficacious, then Zombies are *nomologically* impossible. I guess that's what's going on.
I think in the broader context your interpretation is not especially plausible, but that's just a general sense from reading the post.
I also loved HPMOR!
Eliezer replied here https://forum.effectivealtruism.org/posts/ZS9GDsBtWJMDEyFXh/eliezer-yudkowsky-is-frequently-confidently-egregiously?commentId=7fs5nHEkK6AGgPAJ9.
Can you confirm that my interpretation of zombies is correct and his is wrong? (I'd like to be able to cite some qualified philosopher who says that explicitly because for those who don't know about philosophy of mind, it might be hard to tell which of us is right).
I will add that I think the view that no non-human animals are conscious is extremely implausible on its face and much more of a red flag for someone's reliability than the bit about zombies. (Lots of philosophers reject the Zombie argument, and it's not hard to misunderstand a controversial philosophical argument like this.)
I say this even though I haven't researched the arguments on animal consciousness. I just find the view completely crazy on its face. Thus, I think it reveals a bigger problem, something like the lack of a general "reasonableness detector".
(When you come to a conclusion through some piece of reasoning, you should have something inside you that asks, "Okay, but is this a reasonable answer on its face?" Like if you're trying to calculate the circumference of the Earth, and you get the answer "45 miles", you should say, "Wait, that doesn't sound like a reasonable answer.")
I would note that Hoel carves out space for animals not being conscious. In his most recent post he said "A dog would (likely) have primary consciousness (there is something it is like to be your dog) but it would lack secondary consciousness (their consciousness is not as richly self-aware as ours, they never have an internal narration, and so on)."
EY says primary and secondary consciousness are a package deal, and that this is obvious. I find the confidence crazy, but the territory isn't loony in and of itself. Link to Hoel: https://www.theintrinsicperspective.com/p/consciousness-is-a-great-mystery
People differ in what they think is extremely implausible on its face and what constitutes red flags. For instance, I think non-naturalist moral realism is extremely implausible on its face and a red flag about a person's reliability (incidentally, I say this as someone who has done a bit of research on this).
I think Peter Carruthers used to have that view (and Dennett sort of hinted at it and walked it back.) See my forum comment: https://forum.effectivealtruism.org/posts/ZS9GDsBtWJMDEyFXh/eliezer-yudkowsky-is-frequently-confidently-egregiously?commentId=mscCTLnBoTNjRwpKM
The idea is that what distinguishes conscious from unconscious mental states in the most obvious case of adult humans is that they are introspectively accessible. For example (borrowing from Dennett), if neuroscientists find that people *deny* they are seeing red when X type of processing of redness information goes on, even if we previously thought that X type of processing was what being conscious consisted in, we'd say they weren't having a conscious experience of red. (At least in normal conditions, with full attentiveness etc.) This is the idea behind the "higher-order thought" tradition in philosophy of mind. On this sort of view, it's far from obvious that animals are conscious, since it's far from obvious they can introspect their own experiences and realize they are having them. I think most people take this to be an objection to HOT theories, but all the alternatives face other damaging objections, so it's not clearly decisive in my view.
I notice that EY conflates the idea of zombie worlds having the same physical laws with the idea of the causal closure of the physical. In fact zombieists don't have to accept causal closure or determinism. If they don't, they don't have to be epiphenomenalists --- he also conflates dualism and epiphenomenalism, as before.
He's also got a hell of a cheek complaining about snark and personal attacks!
I am also not sure that "personal attacks" is a fair description of your post. You're not arguing that Eliezer Yudkowsky is evil. You're just arguing that he's often badly wrong.
Woah, you can't say that until you've laid out an object level disagreement, apparently according to basic morality. God wrote this moral law on the hearts of those with, in Eliezer's words, sufficient g-factor (high IQ).
In fact, including all of those qualifiers like "egregiously" is probably unnecessary.
I think the main problem is that I can see the article coming across as overconfident, if not arrogant, to some people. If you say "Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong" it's going to come off as insulting to many people, because you're just some rando on the internet criticizing this well-known figure and they have no prior about who you are. If you had said something like "I Think That Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong. Here's why", it would come across as less overconfident by including the qualifier "I think" at the beginning, and it tells your reader that you're going to explain yourself.
I see Yudkowsky's conception of zombies quoted in your link. Can you briefly state your understanding of what a zombie is? (Sorry if this was answered earlier.)
This is how I understand the argument. Physicalists think that there is a complete physical explanation of all human behavior. (That's not all that they think, but that's one thing they believe.) The argument, I take it, says that *if* that's true, then you can imagine a physical duplicate of you who would behave the same way in every circumstance, but you can also imagine that entity not having any phenomenal consciousness. Intuitively, that scenario would be metaphysically possible. But if so, then consciousness is something over and above the physical features.
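For readers who want the structure laid out explicitly, this is roughly the standard (Chalmers-style) formulation, where P abbreviates the conjunction of all physical truths and Q some phenomenal truth such as "someone is conscious"; the schematic rendering is mine:

```latex
% The conceivability ("zombie") argument, schematically.
% P = the conjunction of all physical truths; Q = some phenomenal truth.
\begin{enumerate}
  \item $P \wedge \neg Q$ is conceivable.
  \item If $P \wedge \neg Q$ is conceivable, then $\Diamond(P \wedge \neg Q)$
        (conceivability is taken as a guide to metaphysical possibility).
  \item If $\Diamond(P \wedge \neg Q)$, then physicalism is false
        (physicalism requires $\Box(P \rightarrow Q)$, i.e.\ $\neg\Diamond(P \wedge \neg Q)$).
  \item Therefore, physicalism is false.
\end{enumerate}
```

Most of the disagreement in this thread is over premise 1 (whether zombies really are conceivable) and premise 2 (whether conceivability tracks possibility).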
My sense of where Yudkowsky has misunderstood the dialectic is that, I guess, he thinks the person giving this argument must *himself* believe that there is a complete physical explanation of all human behavior. But one need not think that (as, e.g., Chalmers does not and I do not). It's enough for the argument that the *physicalists* think that.
Right. And I think the reason Eliezer thinks this is that when he imagines a physical copy of you lacking consciousness, he imagines a version of you that just had the consciousness disappear. Now, if you think that consciousness is causally efficacious, then if you *just* got rid of consciousness, that wouldn't be a physical copy. But the idea is that the zombie wouldn't be the byproduct of pure consciousness subtraction--you'd need to make other changes.
I have to disagree with you on the zombie argument. You say you can imagine a world where everything- down to atoms- is the same, but there is no consciousness. But this is impossible; it is the same as the case of married bachelors and square circles. I think if everything, down to atoms, is the same, then consciousness automatically arises. It is a resultant property. I cannot see how this thought experiment disproves physicalism.
When I think about a square circle, it seems impossible and there's an explanation of why it's impossible. But in the case of zombies, neither of those is true.
This case is similar. You are already assuming that consciousness cannot arise from the interactions of molecules when you conduct the thought experiment. But if you assume that physicalism is true before trying to conduct the thought experiment, you cannot continue. It is because when everything is the same but there is no consciousness, it is like saying that you are pushing me but I am not being pushed. It is a contradiction.
"square" and "circle" have well-defined properties. It's not at all clear that "consciousness" is like this. The former are notions about which one can assess the compatibility of something having both qualities from the armchair. The latter? I don't see why I should think one can simply "imagine" physical facts same + no consciousness.
What if consciousness just is a set of physical facts? If it is, how could you imagine the physical facts being the same but there not being consciousness?
Suppose, for instance, I actually knew physicalism were true, and that consciousness could be described in fully physical terms. People who don't understand this, and who are unaware of the correct account of consciousness, might still report being able to imagine, and regard themselves as capable of imagining, an absence of consciousness while all physical facts remain constant, simply because they don't know which physical facts constitute consciousness.
I would just outright question whether a person can, in fact, imagine a world that is physically identical, down to the atoms, but there is no consciousness. Why should I think human brains even have the ability to imagine things like that? I am not even sure people can adequately "imagine" a single atom, and then determine what would follow from that fact. But somehow people who claim to be able to imagine p-zombies are imagining all sorts of ludicrously sophisticated things. What does this imagining consist in?
It'd be one thing if I built a model that simulated the properties of atoms and then I watched how it worked. But I don't think people can readily do these sorts of things in their mind, or, if they can, it's at best at an incredibly rudimentary level.
For all we know, proponents of the conceivability of p-zombies are "imagining" something incredibly trivial. For instance, they could be doing little more than imagining sentences and assigning truth values to them. Perhaps they imagine this:
Proposition 1: All the physical facts are the same as in this world. TRUE
Proposition 2: There is no phenomenal consciousness. TRUE
Yet the ability to imagine assigning truth values to propositions is not diagnostic of whether the propositions could, in fact, both be true at the same time.
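To make the gap concrete between writing two sentences down as "TRUE" and their actually being jointly satisfiable, here is a toy sketch. Nothing in it models consciousness; the background constraint is a placeholder assumption standing in for whatever (unknown) connection a physicalist takes to hold between the physical facts and consciousness:

```python
from itertools import product

# Two atomic claims, in the spirit of the propositions above:
#   p: "all the physical facts are the same as in this world"
#   c: "there is phenomenal consciousness"
# Writing "p: TRUE" and "not-c: TRUE" on paper never consults any background
# constraint linking p and c, so that exercise always "succeeds".

def constraint(p: bool, c: bool) -> bool:
    # Placeholder assumption for illustration only: same physical facts -> consciousness.
    return (not p) or c

# Joint satisfiability: is there any assignment with p true, c false,
# and the background constraint respected?
zombie_assignment_exists = any(
    p and not c and constraint(p, c)
    for p, c in product([True, False], repeat=2)
)

print(zombie_assignment_exists)  # False under this placeholder constraint
# Assigning truth values to the two sentences separately tells you nothing
# about whether they can hold together once the relevant constraints
# (known or unknown) are in force.
```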
I'm not "dictating" what people can or can't imagine. I'm expressing skepticism that they can do what they claim to be able to do.
Would you believe people if they said they could picture the entire solar system in perfect detail, including all of the geographic features of all of the planets, asteroids, and so on?
People can't do that, so they are not claiming it. The problem is that there is nothing in physics that inputs a physical description, coarse or fine grained, and outputs a predicted subjective state. So there not being enough detail in the input is not the problem.
I agree people can't do that. That's the point. People can claim to imagine or conceive of things that they're not really able to imagine or conceive of. It's not even clear, to me, whether there's a sufficiently developed account behind "imagining" a p-zombie for it to even be clear what they're reporting that they're capable of doing.
//The problem is that there is nothing in physics that inputs a physical description, coarse or fine grained, and outputs a predicted subjective state. //
I don't grant that we have a sufficiently well-developed account of physics or subjective states for anyone to be in a position to know this. And if anyone claims to know this already, then I'd probably not agree that they do, and if they made their claims on the basis of such presumptions, they'd probably just be begging the question against skeptics like me.
If being able to imagine something just means failing to notice some contradiction or impossibility, people can do it easily.
We have a sufficiently well-developed account of current physical science, physics in the "map" sense, to know that we cannot do that... currently.
Maybe physics in the territory sense can do that, maybe it can't. Betting one side or the other isn't much of an argument either way.
That sounded at first reading like a circular argument to me. But now what I think you mean is that presupposing physicalism necessarily means the imagined world that is identical but has no consciousness is impossible; therefore the assertion that it's possible to imagine it is false, and thus it cannot actually be a counterargument to physicalism. That makes sense to me.
As I wrote in my other comment, I'm unsure why I should be convinced by the assertion that a zombie can be imagined. I can't really imagine it myself, so now what? Can I just assert the opposite due to my lack of imagination? That doesn't sound right to me.
It seems like a chicken and egg problem: I'd be pretty convinced by this piece to believe the zombie argument if I already agreed that I could imagine it, but since I do not, the article utterly fails to convince me. Is this just because I think like a physicalist?
The problem is this: one is doing the thought experiment to prove that physicalism is false only after assuming that physicalism is false; it is impossible to conduct it otherwise.
Looks like circular reasoning to me.
If you can't imagine a being that is physically identical to you and doesn't have phenomenal consciousness, then you should probably see a doctor. That's either a lie or is almost certainly rooted in some kind of cognitive deficiency.
> Is this just because I think like a physicalist?
No, it's because you are philosophically illiterate and don't understand the argument.
Lol thanks for the laugh. If you can't have a philosophical argument without resorting to personal attacks when the stakes are this low, I fear for the country you vote in.
It's very obvious that you don't get the argument, while acting very confident that you do - that's not a personal attack but a straightforward fact. Sorry if facts trigger you.
Are you for real? You can read my other comment to see what I really think. I have made it very obvious indeed that I am not in a position of authority and am not well versed in the literature, but that I have some questions. You reacted to one of those questions with abuse and insult. You're a small and ridiculous person and I'm assuming you'll get banned eventually.
I didn't read your other comments and obviously I shouldn't have to. Obviously I could have been kinder in my response, but it's just ridiculous to say that you cannot conceive of zombies - what is that even supposed to mean? It's like saying I cannot conceive of unicorns - if someone told me they cannot conceive of a horse with a horn on its forehead, then I'd tell them that they are either obviously lying or that they should see a doctor.
When we get to the zombie argument premise, I got lost a bit. When you claim that you find the zombie argument convincing, you haven't, for me, done anything to set that up. As you say:
> They sure seem possible. I can quite vividly imagine a version of me that continues through its daily goings-on but that lacks consciousness.
I cannot. I do struggle to argue against this point as it is a matter of imagination, but I feel (perhaps incorrectly?) that I can push it aside for that reason.
> It’s very plausible that if something is impossible, there should be some reason that it is impossible
Perhaps, but I don't think this compels or even moves me in the direction of accepting the premise above.
This is of course fine, but as it is the first detailed disagreement with EY, I thought it worth noting that I got lost at exactly that point. I'm not saying EY's positions in the paragraphs to follow make sense to me either; I just fail to be moved by your arguments up to this point as well.
Hope I'm not coming across as too negative, I'll update when I'm done reading!
(For further context about my beliefs: I personally lean towards physicalism, probably partially because of EY, but I find multiple non physicalist positions interesting and have not committed either way)
Try to imagine atoms moving around but there’s no heat. You think you can imagine it but you are actually altering other aspects of reality to make the thought experiment work. You aren’t actually imagining what you think you are imagining.
I agree. I think philosophers aren't actually imagining what they think they're imagining.
Speaking for myself, I think philosophers overestimate their powers of introspection and think they can simulate states of affairs in their minds in ways that yield substantive metaphysical insights. Maybe they can, but it's not clear to me that they've demonstrated or provided much in the way of good arguments or reasons for believing they have such abilities. As I said to Bentham, while I agree that Yudkowsky is overconfident, I think at least some of Bentham's criticisms stem from employing questionable philosophical methods. I think Bentham is overconfident about the efficacy of those methods.
I'm not sure it's a folk concept or a natural kind. I think there may be folk analogues to discussions of "consciousness" among philosophers, but I suspect much of the discussion among philosophers is largely a philosophical invention that isn't even shared among the folk.
(1) Empirical efforts to evaluate how nonphilosophers think about qualia and the hard problem have not found compelling evidence they share the same inclinations as philosophers.
(2) I think the problems philosophers encounter are largely manufactured by the methods of analytic philosophy itself. In other words, I think philosophers have constructed the problems they then seek to address, but these puzzles are often not present or at least not fully present outside of academic philosophy.
See Pete Mandik's paper on qualia quietism and meta-illusionism for a similar take.
Totally agree—I was being brief, and meant consciousness may have become a kind of folk concept for certain kinds of people (philosophers, meditators, etc). But yes, when we read what we'd call "philosophy of mind" from different cultures and times, it's clear people think about this stuff in different ways. Maybe a reified consciousness is the analytic philosophers' unconscious replacement for the concept of a soul.
I'm allergic to -isms but I find the ideas from illusionism, materialism, and things in that general philosophical neighborhood make the most sense to me.
I'll try to describe the issue I have. Someone else already insulted me for my failing imagination and presumed I was being annoying on purpose so I'm probably not going to be successful this time either but here's to trying.
It feels to me like this: imagine we've been discussing the color of the sky, and you're telling me to imagine the sky but without Rayleigh scattering. I guess I could try to do this, but invariably it's just gonna turn out wrong. I don't really understand Rayleigh scattering all that well, and I'm not a meteorologist or an astronomer. Right now I'm picturing a darkly black sky with some shining bright woolly clouds in it (the sun illuminating them directly, and the twinkle of the stars in comparison being so low in brightness as to be lost in the blackness).
But there are invariably going to be problems once my brain paints the picture: for example, a lot of the clouds won't be bright at all, they'll be completely dark, with just a thin line shone on by the sun. So in my case my brain just shuts up and says something like "nah, I don't really know what that looks like. Definitely not like the picture you just imagined, but something like that I guess".
I feel the exact same way here. I can imagine a world unendowed with consciousness, I guess, like what you described with Casper. But my brain just goes "nah, I don't really understand". Take the image of the world exactly like the one with Casper in it but without him. To my thinking, if you have that world you haven't really gotten rid of Casper after all: all the consequences of his actions are still there. What have you really described, then? You've just made a new world that is exactly alike, except you've added the statement that Casper isn't there, but really behind the scenes there must actually be a Casper because all the things he does are still being done. Do you see what I'm saying? The assertion that it can be imagined does not help me.
Regarding the impossibility point, I don't know the literature very well so I'll take your word that there is no impossibility proof. Nevertheless I feel like this leaves me where I was: I have no reason now to move to any particular view and remain uncommitted. Thanks for your reply.
Just imagine that? How? What am I supposed to be imagining when I'm imagining "consciousness"? It may be that "consciousness" only seems capable of coming apart from other facts when one has an underdeveloped, confused, or mistaken conception of what "consciousness" itself is. The supposed conceivability of p-zombies may be an artifact of some people being confused, being mistaken, or making errors in introspection.
It’s not just that I can’t imagine p-zombies, I don’t think those who claim to imagine them are, in fact, capable of doing so. I don’t grant that they’re able to do so simply because they say they are, and I have yet to come across any compelling arguments or evidence that would lead me to believe that you or anyone else can, in fact, imagine a p-zombie.
Part of the reason for this is that I think people are routinely and demonstrably mistaken about the contents of their introspections, and that the kind of research Dennett describes in Consciousness Explained, along with more recent research, attests to the unreliability of our capacity to describe our own "inner states."
When I introspect, I can't imagine the sorts of things you claim to be able to imagine. So, perhaps you can, but in the absence of a compelling reason to believe you have distinctive phenomenology that I don't have, or a developed model of human cognition that supports the claims philosophers make about their ability to imagine or conceive of relevant sorts of things (I don't think there is a compelling case for such models), I remain, I believe, justifiably skeptical that philosophers who claim to be able to imagine things are actually able to do so.
I loved your piece on animal consciousness and have no comment.
On the FDT, I perhaps don't know it well enough, but I'd argue in the bomb scenario specifically that FDT could very well choose the box on the right.
I could construct an algorithm that would output right given the additional information that the predictor left me a helpful note showing they were wrong. This note is of course *extremely surprising* to me given my prior! Why should my algorithm continue running normally as if it didn't see the note? I see no problem with choosing the right box after seeing the note. In what way have I misunderstood the scenario?
But the people who output right conditional on the additional information do less well. If you had a strategy resolutely doing left, you'd do better on average.
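For what it's worth, here is a toy expected-value sketch of that "better on average" claim, using the Bomb setup as usually stated; the predictor's error rate and the dollar-equivalent disvalue of burning are made-up numbers for illustration:

```python
# Bomb, as usually stated: a near-perfect predictor simulates your decision
# procedure; if she predicts you will take Right, she puts a bomb in Left and
# leaves a note saying so. Taking Right always costs $100; Left is free
# unless it contains the bomb.
ERROR_RATE = 1e-24      # assumed predictor failure rate
DEATH_COST = 1e9        # assumed dollar-equivalent disvalue of burning to death
RIGHT_COST = 100.0

# Policy A: resolutely take Left, note or no note.
# The predictor then almost always predicts Left and places no bomb;
# only in the rare error case do you walk into one.
expected_cost_resolute_left = (1 - ERROR_RATE) * 0.0 + ERROR_RATE * DEATH_COST

# Policy B: take Right whenever the note says there is a bomb in Left.
# The predictor, simulating that policy, predicts Right, places the bomb and
# leaves the note, and you pay the $100 (in the rare error case Left is
# simply empty and free).
expected_cost_conditional_right = (1 - ERROR_RATE) * RIGHT_COST + ERROR_RATE * 0.0

print(f"resolute Left:    expected cost ${expected_cost_resolute_left:.2e}")
print(f"Right given note: expected cost ${expected_cost_conditional_right:.2f}")
# With these numbers the resolute-Left policy is far cheaper in expectation,
# which is the sense of "better on average" -- even though, in the actual
# note-in-hand case, it walks straight into the bomb.
```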
So the evidence for him being definitely and egregiously wrong is disagreeing with you on two points where it is not at all clear from the provided text that you understand his arguments.
No . . . the evidence is that he is egregiously wrong on three topics where he expresses total certainty. And these were just the most egregious cases; I could have given more examples if I wanted to write a book-length report.
Yes. You could easily have written more words -- writing is an area of strength for you. Can you also suspend your own certainty about how well you understand him? That's the hard part. You aren't passing the intellectual Turing test on any of these examples.
You are the classic example of a deluded Eliezer fanboy. Instead of giving concrete criticism, where Matthew supposedly misunderstands Eliezer's arguments, you resort to vague platitudes. Whereas Matthew was very specific in his criticism of Yudkowsky.
You don't seem to be doing any better of a job understanding me than you likely understand Eliezer. Specific criticisms that are off topic, for instance, yours to me, are not relevant and can be disregarded.
If you think that an admonition to do the intellectual work necessary to actually understand what someone is saying well enough to demonstrate this either to himself or someone else is a vague platitude, you might not be ready for primetime.
1) I'm not an Eliezer fanboy, so you're wrong off the bat.
2) Neither of you appear to demonstrate any open-mindedness about what the points you don't understand are about, or even willingness to imagine that you might not understand something. I won't waste my time explaining something under such circumstances.
Deutsch and Yudkowsky on the Many Worlds Interpretation
I don't think MWI is false or unprovable, but I don't think it is proven or even strongly supported by the kind of arguments from parsimony that Deutsch and Yudkowsky make. I also don't think the Copenhagen interpretation is the only alternative.
The first thing to note is that the question of which interpretation is correct is not easily settled by experimental evidence. Interpretations are ways of figuring out what the maths "means", but the maths makes the same predictions however it is interpreted.
The second thing to note is that MWI is more than one theory.
The third thing to note is that many worlders are pointing at something in the physics and saying "that's a world"...but whether it qualifies as a world is a separate question, and a separate kind of question, from whether it is really there in the physics. One would expect a world, or universe, to be large, stable, non-interacting, and so on. A successful MWI needs to jump both hurdles: mathematical correctness, and conceptual correctness.
The fourth problem to note is that all outstanding issues with MWI are connected in some way with the choice of quantum mechanical basis...a subject about which Deutsch and Yudkowsky have little to say.
The Basic Problem.
There is an approach to MWI based on coherent superpositions, and a version based on decoherence. These are (for all practical purposes) incompatible opposites, but are treated as interchangeable in Yudkowsky's writings.
The original, Everettian, or coherence-based approach is minimal, but fails to predict classical observations. (At all. It fails to predict the appearance of a broadly classical universe). The later, decoherence-based approach is more empirically adequate, but seems to require additional structure, placing its simplicity in doubt.
Coherent superpositions probably exist, but their components aren't worlds in any intuitive sense. Decoherent branches would be worlds in the intuitive sense, and while there is evidence of decoherence, there is no evidence of decoherent branching.
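To make the coherent/decoherent contrast concrete, here is a minimal numpy sketch of the textbook story: a toy two-state system, first on its own in superposition, then entangled with a two-state environment whose states are perfectly distinguishable (that orthogonality is the usual idealization, and an assumption here):

```python
import numpy as np

# A system qubit in the superposition (|0> + |1>)/sqrt(2)
plus = np.array([1.0, 1.0]) / np.sqrt(2)

# Coherent case: no environment. The density matrix keeps off-diagonal terms,
# so the two components can still interfere -- not "worlds" in any intuitive sense.
rho_coherent = np.outer(plus, plus.conj())

# Decoherent case: the system gets entangled with orthogonal environment states,
# |psi> = (|0>|E0> + |1>|E1>)/sqrt(2) with <E0|E1> = 0.
e0, e1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
psi = (np.kron([1.0, 0.0], e0) + np.kron([0.0, 1.0], e1)) / np.sqrt(2)
rho_total = np.outer(psi, psi.conj())

# Tracing out the environment gives the system's reduced density matrix.
rho_system = rho_total.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

print(rho_coherent)  # off-diagonals 0.5: a single coherent superposition
print(rho_system)    # off-diagonals 0.0: two effectively non-interfering branches
```

Whether those non-interfering branches deserve to be called "worlds" is the further, conceptual question raised above.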
A Brief History.
Everett's 1957 paper, which launched the Relative State Interpretation, assumed that observers measuring coherently superposed systems became entangled with them, going into superposed states themselves. It was soon noticed that this fails to predict that observers will generally see a single, unsuperposed world, or something very close to it. Since one version of an observer that has observed one outcome is in coherent superposition with their counterparts who observed the others, there is no reason why information should not leak from the one to the other, nor any mechanism by which observers can impose a classical (non-superposed) basis.
Everett's original theory was designated the "relative state interpretation". It's appropriate that it was not called the Many Worlds Interpretation, since it does not imply the existence of multiple, non-interacting , classical worlds. Or a single one. In fact, it does not predict classical -- sharp and real-valued -- observations at all. The term "many worlds" is associated with Bryce DeWitt, who revisited Everett's work.
These two problems, the problem of non-interaction between superposed observers, and the basis problem, prompted much of the subsequent research into MWI. Things like the decoherence approach were developed to address them. But the complexity of a successful decoherence theory is hard to assess, since decoherence theories often require assumptions beyond core QM, such as special starting states.
Re: the zombie argument, maybe you've written more on this elsewhere, but what makes you confident that you can actually imagine zombies? I.e., you think you can imagine a world that is physically identical but in which no physical structures are conscious, but how do you know that what you're imagining is actually possible?
The default is believing things are possible unless there's a reason to think that they're impossible, especially if they seem possible, as is true of zombies. If you said "unicorns are impossible" you'd have to give a justification. Unfortunately, to my mind, no such justification can be offered for zombies.
Zombies only seem plausible if you think of consciousness as a reified "thing", a cream which can be skimmed off of the physical world. Otherwise, if you think of it as a physical process, it isn't so clear how you can say "I'm imagining a world that is identical down to the last atom, except that when these atoms interact in certain ways, the expected physical/physiological process does not occur." Or even more basically, the zombie argument says "Imagine a world that is exactly same in all its particulars but is somehow and inexplicably different." If anything, the absurdity of the thought experiment reveals that the word "consciousness" is a folk concept that misleads more than it helps.
Yes, if you assume that consciousness is a physical process then zombies are impossible. But it seems to me that that gives us a very good reason to think that consciousness is not a physical process. Because they sure seem possible, I can picture them quite easily. Physical stuff is described in terms of behavior--the way that atoms and such move. But no part of that explains subjective experience.
Maybe you can't picture it and mistakenly think you can. That's what I suspect. I don't think you actually can picture what you say you can. Introspection and experience are often misleading.
Personally, I agree with you that Yudkowsky is often overconfident. But I think that you yourself are far too confident about the methods, approaches, and ways of thinking you've picked up from analytic philosophers, especially those most inclined to put so much stock in their seemings and who are less engaged with empirical considerations.
I'm not talking about issues in the sense of the typical subjects of philosophical or scientific dispute. I'm talking about methods.
How confident are you that the approach that, for instance, Huemer takes to philosophy is reliable? Or, more generally, a rationalist or aprioristic approach to philosophy? I think these are bad methods. And what's worse is that it's hard to even evaluate their track record in the way we can evaluate predictions. If the problem is your methods, it can influence the way you view everything, and distort your confidence levels about everything in the ambit of the method.
I think you're right to question Yudkowsky's confidence. But I think mistakes are spread everywhere and they're more obvious for a solitary thinker than when the mistakes are more foundational and more embedded in established institutions. There is strength in numbers, and bad ideas can persist because they carry the prestige of academic approval or command widespread support.
We're on the same wavelength, I think. A big philosophical stumbling block is us thinking we can imagine things that we in fact can't. This can be particularly problematic for people who get really entranced by their own conscious experience, maybe by meditating too much, or who otherwise get caught up in their own heads (all things I've been guilty of!). It can also be difficult to tell someone "You think you can imagine such-and-such, but actually you're wrong."
Why can't he picture it? I can picture a brain with a body moving around and doing the things we do except there's no sense of "what it's like" to be that thing. Nothing problematic about that.
The only thing you can rationally put stock in is seemings. Your ideas, if rationally formed, are based on seemings. The only way to change those seemings are to find other seemings stronger than those seemings to epistemically defeat them. Michael Huemer, who commented in this thread, has an excellent theory of justification called phenomenal conservatism that this is based on. I think it solves belief justification. https://iep.utm.edu/phen-con/
There is no lack of engagement with empirical considerations in this post (at least so far from what I've read of this post, which is most of it), and empirical appearances are themselves seemings. E.g., there seems to be a tree in front of you.
You also can't arbitrarily say one form of seeming is false and accept the other form. You can't, for example, say that introspective seemings ("I am in pain") are invalid prima facie justification and accept empirical seemings as prima facie justification for no reason. You may say that our introspection is sometimes wrong, but so are our empirical appearances.
//You also can't arbitrarily say one form of seeming is false and accept the other form. You can't, for example, say that introspective seemings ("I am in pain") are invalid prima facie justification and accept empirical seemings as prima facie justification for no reason. //
Who is "you"? I didn't "arbitrarily" say that some seemings are false and others aren't. You don't know why I think what I think, or what arguments or reasons I may have for my views. I have a strong aversion to people entering discussions making assumptions about what other people think, and telling them what they've said when they haven't said any such thing. I don't typically engage with people do this sort of thing. I've found that I rarely end up having a productive discussion with someone who feels the need to tell me what I think and what I've said.
They would also seem possible if consciousness is physical, but the way in which it is physical is unknown. So the imaginability of zombies isn't proof of dualism.
It would also seem like I'm not a brain in a vat when I am a brain in a vat, but that doesn't really move me. The mere possibility that you are wrong shouldn't mean you throw out your intuitions. That's just extreme skepticism - we should go with what seems more likely on its face before just assuming it's wrong. And it seems like p-zombies are plausible, so we should just assume that until proven otherwise.
//And it seems like p-zombies are plausible, so we should just assume that until proven otherwise.//
It doesn't seem this way to me. In fact, they seem highly implausible, so should I assume they're not plausible (or possible, or whatever) until proven otherwise?
When people say that "it seems" a certain way, I find this strange. It seems that way...to who? To themselves? If so, why not say "it seems to me"? When one says "it seems" it often seems to me like they're making claims about how things seem to other people, not just themselves.
The possibility of zombies as such isn't important. They only matter as a way of disproving physicalism, which they don't. The Mary's Room argument is much better, IMO.
I’m not assuming; to the contrary, the default would seem to be with the physicalist position. Consider what happens when someone gets bashed in the head hard enough and gets brain damage. Both his behavior and subjective experience change in remarkably consistent ways, almost as if they are linked, or two sides of the same coin…
And if they are in fact two sides of the same coin, what would the pro-zombie side say about technologies that could reliably determine facts about someone's state of subjective experience based on measurements of the brain, as we in fact have? Wouldn't that thereby weaken the zombie argument?
Also, you still haven’t explained what you mean when you say “I can picture them quite easily.” What exactly are you picturing? I’m not trying to be annoying or demanding an answer. I just think this is a crucial question that lots of people get hung up on. There are some things we convince ourselves we can imagine that we in fact can’t.
Another way to think about zombies—it is like a 19th century biologist asking, “What if there were a world where all the atoms and organic molecules were arranged in exactly the same way as our world, except there was no ‘élan vital.’” In this case, “élan vital”, i.e. life-force, and “consciousness” are analogous: both initially conceived as unitary things or powers or forces, but over time are understood to be imprecise terms for a collection of many smaller, mutually interacting processes.
In the elan vital case, elan vital doesn't exist. But consciousness clearly does. You can't reduce it to behavior because consciousness is about what experiences feel like rather than the way matter behaves.
I agree the physical affects the mental--that's totally consistent with dualism. If you think there are psychophysical laws that say that consciousness depends on particular brain states, then brain damage would obviously screw that up.
I'm picturing something that acts like a human and is the same down to the last atom but isn't conscious. That sure seems possible.
If you or anyone else can explain what a "psychophysical law" is or would be, and why it wouldn't just reduce to what we would call a "physical law", then I'll concede the entire argument. I think you have dualism baked into the structure of how you think about the zombie argument and what follows from it. You seem to pre-theoretically divide the world into mental and physical, which I see no justification for other than "It seems like that to me."
If "psychophysical laws" were in fact discovered, that would just mean that our understanding of reality would thereby be expanded just as with any new scientific discovery. We wouldn't have bridged some gap between "mental" and "physical"; those are just words and not ontological categories. There is only one world, after all.
//In the elan vital case, elan vital doesn't exist. But consciousness clearly does.//
That's just it though: I don't think what you think "consciousness" is does exist. I don't think there are qualia or phenomenal states, and I don't think it's clear that they exist.
There are views of consciousness (e.g., illusionism) that would treat what you presumably take consciousness to be as being very much mistaken. The analogy holds between *phenomenal consciousness* and elan vital: neither exists.
//I'm picturing something that acts like a human and is the same down to the last atom but isn't conscious. That sure seems possible. //
It doesn't seem possible to me. Just as I could be confused or mistaken in my failure to imagine p-zombies, you could be confused or mistaken in believing you can imagine them. Things can seem possible to you not because they are possible, but because you're conceptually confused.
> The default is believing things are possible unless there's a reason to think that they're impossible, especially if they seem possible
Why is that the default? If I said "unicorns are impossible" and you countered with "unicorns are possible", as far as a third observer is concerned, they are free to take either or neither of those positions as no evidence has entered yet either way.
I don't see why the default should be believing it's possible, especially if that default is then going to change my worldview immensely.
This seems hackable too: I could generate 10^x sentences like "Y are possible" and have you forced to assume they are possible until proven otherwise, leading to either a lot of work on your part or a potentially completely warped worldview. Why accept that consequence?
Wait, you think "Unicorns are metaphysically possible" is just as plausible as "There is not a single possible world where unicorns exist"? No way you *truly* believe that, dude.
I know this is tangential at best, but my new favorite illustration of the difference between causal and evidential decision theory is one I heard on Sean Carroll’s podcast a couple of weeks ago: Calvinist predestination. If your fate in the afterlife is fixed before you’re born, a causal decision theorist will think it’s pointless to do a bunch of pious stuff to try to please God, but an evidential decision theorist will disagree.
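Here is a toy version of that predestination example with made-up numbers (the prior on election, the correlation between election and piety, and the utilities are all assumptions for illustration):

```python
# Assumed numbers, purely for illustration.
P_ELECT = 0.3                  # prior probability of being among the elect
P_PIOUS_GIVEN_ELECT = 0.9      # the elect tend to behave piously
P_PIOUS_GIVEN_NOT_ELECT = 0.2  # the non-elect sometimes do too
SALVATION_UTILITY = 1000.0
PIETY_COST = 10.0

def posterior_elect(pious: bool) -> float:
    """P(elect | your act), by Bayes' rule: the act is evidence, not a cause."""
    like_elect = P_PIOUS_GIVEN_ELECT if pious else 1 - P_PIOUS_GIVEN_ELECT
    like_not = P_PIOUS_GIVEN_NOT_ELECT if pious else 1 - P_PIOUS_GIVEN_NOT_ELECT
    joint_elect = like_elect * P_ELECT
    return joint_elect / (joint_elect + like_not * (1 - P_ELECT))

def edt_value(pious: bool) -> float:
    # Evidential decision theory: condition on the act as news about your fate.
    return posterior_elect(pious) * SALVATION_UTILITY - (PIETY_COST if pious else 0.0)

def cdt_value(pious: bool) -> float:
    # Causal decision theory: the act cannot cause election (fixed before birth),
    # so the probability of salvation stays at the prior.
    return P_ELECT * SALVATION_UTILITY - (PIETY_COST if pious else 0.0)

for act in (True, False):
    print(f"pious={act!s:5}  EDT={edt_value(act):7.1f}  CDT={cdt_value(act):7.1f}")
# EDT ranks piety higher (good news about being elect); CDT ranks it lower
# (it causes nothing and costs effort) -- which is exactly the disagreement
# the podcast example is pointing at.
```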
As you correctly point out, FDT is not a perfect decision theory and fails in certain cases. This is also true of CDT and EDT, and of course nobody has ever proposed that it was wrong to propose these theories or write about them. And as far as I know, Eliezer has never claimed that FDT is a solution to the problem of ideal decision theory or that critics of FDT are simply confused or irrational. I don't see any issue with his epistemic conduct in this case.
I don't find your ability to imagine a world that has the same physical properties but not consciousness much evidence here. I just don't think people are able to detect contradictions at that level of sophistication. It feels like saying "I can imagine a world where pi is a rational number, therefore the exact constant of pi is a domain of physics, not math." But this is due to a failure of mathematical intuitions. Most people do not have sufficiently sophisticated intuitions to detect contradictions at that level of granularity.
People's intuitions throw up an error when they come across very obvious errors, like married bachelors, or square circles (which is another way to say pi=4). But intuition isn't a reliable guide to spot mathematical (and therefore logical and therefore metaphysical) impossibilities, and I don't see why consciousness ought to be different here.
So I didn't read all of this, but he thinks we're all algorithms because he's probably high on drugs and definitely bypassing rules to actively control and diminish others' lives for his own, and Aella's. He actually sees the control going on. Too bad in doing so, he actually loses any ethics and ultimately any larger perspective from actually learning via not being a piece of shit. If you didn't graduate high school, how can you say someone that did effectively is not capable of making her own decisions? He deserves to be in prison
I think this can also be extended to correspondence theory, scientific realism, the normativity of classical logic, and maybe moral realism (although it's hard to tell if he really is one since he invents his own jargon and it's not always clear what he means).
If you're really interested I can write a critique of these positions.
> And Eliezer does often offer good advice. He is right that people often reason poorly, and there are ways people can improve their thinking. Humans are riddled by biases, and it’s worth reflecting on how that distorts our beliefs.
Hey, you don't get to dump all those other criticisms and then say: "but there is that ONE thing where he doesn't make the same mistakes that he keeps making everywhere else (and by the way, I am not a specialist in that field (?))" :p
If the brain's capability to model itself as a requirement for consciousness is understood as the capability for self-reference, then it is most certainly correct. What makes it so incredibly difficult to adjudicate between different modes of consciousness is the absence of systems theory in the American discourse, and it is this absence that is to blame for 95% of those scientific debates that have been running in endless circles around themselves for decades without ever bearing fruit. Read Luhmann, it's about as fruitful as it gets.
Unless a physicalist denied the existence of consciousness, they would not believe that if a block is moving right while thinking "I want to move right," it would describe all the physical facts to say the block is moving right. For physicalists who don't deny consciousness, the block's thinking "I want to move right" is also a physical fact, so that would need to be stated too. According to the block analogy as you've phrased it, physicalists are all consciousness deniers.
The descriptions you've attributed to the epiphenomenalist and interactionist are both descriptions the non-consciousness-denying physicalist would prefer over the description you've attributed to the physicalist.
How are you talking about fundamental physical laws in any of the descriptions? Each description is a particular one about the goings on with these thinking blocks.
I was assuming the blocks were fundamental. The physicalist thinks that consciousness emerges from the fundamental behavior of simple things, the nonphysicalist disagrees.
I mostly agree with your last two points but I think the first isn't quite "egregiously" wrong.
If I understand correctly, your main point about zombies is that although Eliezer's argument is a good one against epiphenomenalism, it doesn't exclude all non-physicalist theories of consciousness. However:
- Of the two alternatives Chalmers lists, type-D interactionism (~dualism) and type-F monism, Eliezer argued at length against the former in other places. He treats dualism separately from epiphenomenalism in his essay: "Why not postulate the true stuff of consciousness which no amount of mere mechanical atoms can add up to, and then, having gone that far already, let this true stuff of consciousness have causal effects like making philosophers talk about consciousness?"
- So, it seems that the main problem is that he hasn't addressed type-F monism. He does bring this up in his reply to Chalmers:
>Type-F monism is a bit harder to grasp, but presumably, on this view, it is not possible for anything to be real at all without being made out of the stuff of consciousness, in which case the zombie world is structurally identical to our own but contains no consciousness by virtue of not being real, nothing to breathe fire into the equations. If you can subtract the monist consciousness of the electron and leave behind the electron's structure and have the structure still be real, then that is equivalent to property dualism or E. This gets us into a whole separate set of issues, really; but I wonder if this isn't isomorphic to what most materialists believe. After all, presumably the standard materialist theory says that there are computations that could exist, but don't exist, and therefore aren't conscious. Though this is an issue on which I confess to still being confused.
But now we've shifted from "The argument is completely egregiously wrong" to "The argument successfully challenges epiphenomenalism, but it assumes that Chalmers is a strong advocate of E when he actually prefers D and F, and it doesn't conclusively prove that physicalism is correct because he doesn't also address type-F monism (although he does argue against D elsewhere and is at least somewhat open to F)." These are still significant errors, especially the first, but they're not as catastrophic as the headline implies, IMO.
Though even if he has, the main gripe is not that in his corpus of work he never rebuts interactionist dualism. It's that he flagrantly misrepresents the zombie argument, making a basic error that demonstrates total ignorance.
Thanks for this post. I've very much enjoyed some of Yudkowsky's work, especially the brilliant _Harry Potter and the Methods of Rationality_. I was unaware of his very implausible views that you mentioned above.
Regarding the Zombie argument, it sounds as if he might be unaware of the meaning of "metaphysical possibility". Most people who haven't studied philosophy use "possible" in a sense closer to physical or nomological possibility. So he probably thinks that the Zombie argument uses the premise that Zombies are nomologically possible. He'd almost be correct to think that, if consciousness is in fact efficacious, then Zombies are *nomologically* impossible. I guess that's what's going on.
I think in the broader context your interpretation is not especially plausible, but that's just a general sense from reading the post.
I also loved HPMOR!
Eliezer replied here https://forum.effectivealtruism.org/posts/ZS9GDsBtWJMDEyFXh/eliezer-yudkowsky-is-frequently-confidently-egregiously?commentId=7fs5nHEkK6AGgPAJ9.
Can you confirm that my interpretation of zombies is correct and his is wrong? (I'd like to be able to cite some qualified philosopher who says that explicitly because for those who don't know about philosophy of mind, it might be hard to tell which of us is right).
I will add that I think the view that no non-human animals are conscious is extremely implausible on its face and much more of a red flag for someone's reliability than the bit about zombies. (Lots of philosophers reject the Zombie argument, and it's not hard to misunderstand a controversial philosophical argument like this.)
I say this even though I haven't researched the arguments on animal consciousness. I just find the view completely crazy on its face. Thus, I think it reveals a bigger problem, something like the lack of a general "reasonableness detector".
(When you come to a conclusion through some piece of reasoning, you should have something inside you that asks, "Okay, but is this a reasonable answer on its face?" Like if you're trying to calculate the circumference of the Earth, and you get the answer "45 miles", you should say, "Wait, that doesn't sound like a reasonable answer.")
I would note that Hoel carves out space for animals not being conscious. In his most recent post he said "A dog would (likely) have primary consciousness (there is something it is like to be your dog) but it would lack secondary consciousness (their consciousness is not as richly self-aware as ours, they never have an internal narration, and so on)."
EY says primary and secondary consciousness are a package deal, and that this is obvious. I find the confidence crazy, but the territory isn't loony in and of itself. Link to Hoel: https://www.theintrinsicperspective.com/p/consciousness-is-a-great-mystery
People differ in what they think is extremely implausible on its face and what constitutes red flags. For instance, I think non-naturalist moral realism is extremely implausible on its face and a red flag about a person's reliability (incidentally, I say this as someone who has done a bit of research on this).
I think Peter Carruthers used to have that view (and Dennett sort of hinted at it and walked it back.) See my forum comment: https://forum.effectivealtruism.org/posts/ZS9GDsBtWJMDEyFXh/eliezer-yudkowsky-is-frequently-confidently-egregiously?commentId=mscCTLnBoTNjRwpKM
The idea is that what distinguishes conscious from unconscious mental states in the most obvious case of adult humans is that they are introspectively accessible. I.e. for example (borrowing from Dennett), if neuroscientists find that people *deny* they are seeing red when X type of processing of redness information goes on, even if we previously thought that being X type of processing was what being conscious consisted in, we'd say they weren't having a conscious experience of red. (At least in normal conditions, with full attentiveness etc.) This is the idea behind the "higher-order thought" tradition in phil. mind. On this sort of view, it's far from obvious that animals are conscious, since it's far from obvious they can introspect their own experiences and realize they are having them. I think most people take this to be an objection to HOT theories, but all the alternatives face other damaging objections, so its not clearly decisive in my view.
I notice that EY conflates the idea of zombie worlds having the same physical laws with the idea of the causal closure of the physical. In fact zombieists don't have to accept causal closure or determinism. If they don't, they don't have to be epiphenomenalists --- he also confiates dualism and epiphenomenalism, as before.
He's also got a hell of a cheek complaining about snark and personal attacks!
I am also not sure that "personal attacks" is a fair description of your post. You're not arguing that Eliezer Yudkowsky is evil. You're just arguing that he's often badly wrong.
Woah, you can't say that until you've laid out an object level disagreement, apparently according to basic morality. God wrote this moral law on the hearts of those with, in Eliezer's words, sufficient g-factor (high IQ).
In fact, including all of those qualifiers like "egregiously" is probably unnecessary.
I think the main problem is that I can see the article as coming across as overconfident, if not arrogant to some people. If you say "Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong" it's going to come off as insulting because to many people because you're just some rando on the internet criticizing this well-known figure when they have no prior about who you are. If he had said something like "I Think That Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong. Here's why" it comes across less overconfident by including the qualifier "I think" at the beginning, and tells your reader that you're going to explain yourself.
I see Yudkowsky's conception of zombies quoted in your link. Can you briefly state your understanding of what a zombie is? (Sorry if this was answered earlier.)
This is how I understand the argument. Physicalists think that there is a complete physical explanation of all human behavior. (That's not all that they think, but that's one thing they believe.) The argument, I take it, says that *if* that's true, then you can imagine a physical duplicate of you who would behave the same way in every circumstance, but you can also imagine that entity not having any phenomenal consciousness. Intuitively, that scenario would be metaphysically possible. But if so, then consciousness is something over and above the physical features.
My sense of where Yudkowsky has misunderstood the dialectic is that, I guess, he thinks the person giving this argument must *himself* believe that there is a complete physical explanation of all human behavior. But one need not think that (as, e.g., Chalmers does not and I do not). It's enough for the argument that the *physicalists* think that.
Right. And I think the reason that Eliezer thinks is that when he imagines a physical copy of you lacking consciousness he imagines a version of you that just had the consciousness disappear. Now, if you think that consciousness is causally efficacious, if you *just* got rid of consciousness then that wouldn't be a physical copy. But the idea is that the zombie wouldn't be the byproduct of purely consciousness subtraction--you'd need to make other changes.
I have to disagree with you on the zombie argument. You say you can imagine a world where everything- down to atoms- is same, but there is no Consciousness. But this is impossible; it is the same as the case of married bachelors and square circle. I think if everything, down to atoms, is same, then consciousness automatically arises. It is a resultant property. I cannot see how this thought experiment disproves physicalism.
When I think about a square circle, it seems impossible and there's an explanation of why it's possible. But in the case of zombies, neither of those are true.
This case is similar. You are already assuming that consciousness cannot arise from the interactions of molecules when you conduct the thought experiment. But if you assume that physicalism is true before trying to conduct the thought experiment, you cannot continue. It is because when everything is similar but there is no consciousness, it is like saying that you are pushing me but I am not being pushed. It is a contradiction.
"square" and "circle" have well-defined properties. It's not at all clear that "consciousness" is like this. The former are notions about which one can assess the compatibility of something having both qualities from the armchair. The latter? I don't see why I should think one can simply "imagine" physical facts same + no consciousness.
What if consciousness just is a set of physical facts? If it is, how could you imagine the physical facts being the same but there not being consciousness?
Suppose, for instance, I actually knew physicalism were true, and that consciousness could be described in fully physical terms. People who don't understand this and who are unaware of the correct account of consciousness might still report, and regard themselves as capable of imagining an absence of consciousness while imagining all physical facts remain constant, simply because they don't know which physical facts constitute consciousness.
I would just outright question whether a person can, in fact, imagine a world that is physically identical, down to the atoms, but there is no consciousness. Why should I think human brains even have the ability to imagine things like that? I am not even sure people can adequately "imagine" a single atom, and then determine what would follow from that fact. But somehow people who claim to be able to imagine p-zombies are imagining all sorts of ludicrously sophisticated things. What does this imagining consist in?
It'd be one thing if I built a model that simulated the properties of atoms and then I watched how it worked. But I don't think people can readily do these sorts of things in their mind, or, if they can, it's at best at an incredibly rudimentary level.
For all we know, proponents of the conceivability of p-zombies are "imagining" something incredibly trivial. For instance, they could be doing little more than imagining sentences and assigning truth values to them. For instance, they could imagine this:
Proposition 1: All the physical facts are the same as in this world. TRUE
Proposition 2: There is no phenomenal consciousness. TRUE
Yet the ability to imagine assigning truth values to propositions is not diagnostic of whether the propositions could, in fact, both be true at the same time.
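To put the worry in deliberately crude terms, the "imagining" might amount to no more than something like the following toy sketch (the propositions and the act of recording them are just stand-ins of mine, not anyone's actual account of conceivability):

```python
# A deliberately trivial illustration: we can record any truth-value
# assignment we like, but nothing in the act of recording it checks
# whether the propositions could jointly be true.
claimed_scenario = {
    "All the physical facts are the same as in this world.": True,
    "There is no phenomenal consciousness.": True,
}

# Nothing here validates joint possibility; the "imagining" may amount
# to no more than writing the assignment down.
for proposition, value in claimed_scenario.items():
    print(f"{proposition} -> {value}")
```

If that is all the imagining comes to, it tells us nothing about whether both propositions could be true together.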
I'd question whether Bob can dictate to Alice what Alice can and can't imagine.
I'm not "dictating" what people can or can't imagine. I'm expressing skepticism that they can do what they claim to be able to do.
Would you believe people if they said they could picture the entire solar system in perfect detail, including all of the geographic features of all of the planets, asteroids, and so on?
People can't do that, so they are not claiming it. The problem is that there is nothing in physics that inputs a physical description, coarse or fine grained, and outputs a predicted subjective state. So there not being enough detail in the input is not the problem.
I agree people can't do that. That's the point. People can claim to imagine or conceive of things that they're not really able to imagine or conceive of. It's not even clear, to me, whether there's a sufficiently developed account behind "imagining" a p-zombie for it to even be clear what they're reporting that they're capable of doing.
//The problem is that there is nothing in physics that inputs a physical description, coarse or fine grained, and outputs a predicted subjective state. //
I don't grant that we have a sufficiently well-developed account of physics or subjective states for anyone to be in a position to know this. And if anyone claims to know this already, then I'd probably not agree that they do, and if they made their claims on the basis of such presumptions, they'd probably just be begging the question against skeptics like me.
If being able to imagine something just means failing to notice some contradiction or impossibility, people can do it easily.
We have a sufficiently well-developed account of the current physical science we have, physics in the "map" sense, to know we cannot do that...currently.
Maybe physics in the "territory" sense can do that, maybe it can't. Betting one side or the other isn't much of an argument either way.
That sounded at first reading like a circular argument to me. But now what I think you mean is that presupposing physicalism necessarily means the imagined world that is identical but has no consciousness is impossible; therefore the assertion that it's possible to imagine it is false, and thus it cannot actually be a counterargument to physicalism. That makes sense to me.
As I wrote in my other comment I'm unsure why I should be convinced by the assertion that a zombie can be imagined- I can't really imagine it myself, so now what? Can I just assert the opposite due to my lack of imagination? That doesn't sound right to me.
It seems like a chicken and egg problem: I'd be pretty convinced by this piece to believe the zombie argument if I already agreed that I could imagine it, but since I do not, the article utterly fails to convince me. Is this just because I think like a physicalist?
The problem is this: One is doing the thought experiment to prove that physicalism is false only after assuming that physicalism is false; It is impossible to conduct it otherwise.
Looks like circular reasoning to me.
If you can't imagine a being that is physically identical to you and doesn't have phenomenal consciousness, then you should probably see a doctor. That's either a lie or almost certainly rooted in some kind of cognitive deficiency.
> Is this just because I think like a physicalist?
No, it's because you are philosophically illiterate and don't understand the argument.
Lol thanks for the laugh. If you can't have a philosophical argument without resorting to personal attacks when the stakes are this low, I fear for the country you vote in.
It's very obvious that you don't get the argument, while acting very confident that you do - that's not a personal attack but a straightforward fact. Sorry if facts trigger you.
Are you for real? You can read my other comment to see what I really think. I have made it very obvious indeed that I am not in a position of authority and am not well versed in the literature, but that I have some questions. You reacted to one of those questions with abuse and insult. You're a small and ridiculous person and I'm assuming you'll get banned eventually.
I didn't read your other comments and obviously I shouldn't have to. Obviously I could have been kinder in my response, but it's just ridiculous to say that you cannot conceive of zombies - what is that even supposed to mean? It's like saying I cannot conceive of unicorns - if someone told me they cannot conceive of a horse with a horn on its forehead, then I'd tell them that they are either obviously lying or that they should see a doctor.
Consciousness automatically arises from the physics if physicalism is true: whether physicalism is true is the whole problem.
Correct
(Midway comment, haven't finished it yet.)
When we get to the zombie argument premise I got lost a bit. When you claim that you find the zombie argument convincing, you have, for me, done nothing to set that up. As you say:
> They sure seem possible. I can quite vividly imagine a version of me that continues through its daily goings-on but that lacks consciousness.
I cannot. I do struggle to argue against this point as it is a matter of imagination, but I feel (perhaps incorrectly?) that I can push it aside for that reason.
> It’s very plausible that if something is impossible, there should be some reason that it is impossible
Perhaps, but I don't think this compels or even moves me in the direction of accepting the premise above.
This is of course fine, but as it is the first detailed disagreement with EY, I thought it worth noting that I got lost at exactly that point. I'm not saying EY's positions in the paragraphs that follow make sense to me either; I just fail to be moved by your arguments up to this point as well.
Hope I'm not coming across as too negative, I'll update when I'm done reading!
(For further context about my beliefs: I personally lean towards physicalism, probably partially because of EY, but I find multiple non-physicalist positions interesting and have not committed either way.)
The idea is that if zombies are impossible there should be some explanation of why they are impossible. Unfortunately, there is not.
Can you really not picture them? Just imagine the atoms moving in the same way but there being no consciousness.
Try to imagine atoms moving around but there’s no heat. You think you can imagine it but you are actually altering other aspects of reality to make the thought experiment work. You aren’t actually imagining what you think you are imagining.
I agree. I think philosophers aren't actually imagining what they think they're imagining.
Speaking for myself, I think philosophers overestimate their powers of introspection and think they can simulate states of affairs in their minds in ways that yield substantive metaphysical insights. Maybe they can, but it's not clear to me that they've demonstrated or provided much in the way of good arguments or reasons for believing they have such abilities. As I said to Bentham, while I agree that Yudkowsky is overconfident, I think at least some of Bentham's criticisms stem from employing questionable philosophical methods. I think Bentham is overconfident about the efficacy of those methods.
Yes, philosophers are good at constructing castles in the sky.
Ultimately I think the zombie argument shows that the idea of “consciousness” as we often talk about it is a folk concept, not a natural kind.
I'm not sure it's a folk concept or a natural kind. I think there may be folk analogues to discussions of "consciousness" among philosophers, but I suspect much of the discussion among philosophers is largely a philosophical invention that isn't even shared among the folk.
(1) Empirical efforts to evaluate how nonphilosophers think about qualia and the hard problem have not found compelling evidence they share the same inclinations as philosophers.
(2) I think the problems philosophers encounter are largely manufactured by the methods of analytic philosophy itself. In other words, I think philosophers have constructed the problems they then seek to address, but these puzzles are often not present or at least not fully present outside of academic philosophy.
See Pete Mandik's paper on qualia quietism and meta-illusionism for a similar take.
Totally agree—I was being brief, and meant consciousness may have become a kind of folk concept for certain kinds of people (philosophers, meditators, etc). But yes, when we read what we'd call "philosophy of mind" from different cultures and times, it's clear people think about this stuff in different ways. Maybe a reified consciousness is the analytic philosophers' unconscious replacement for the concept of a soul.
I'm allergic to -isms but I find the ideas from illusionism, materialism, and things in that general philosophical neighborhood make the most sense to me.
I'll try to describe the issue I have. Someone else already insulted me for my failing imagination and presumed I was being annoying on purpose so I'm probably not going to be successful this time either but here's to trying.
It feels to me like this: imagine we've been discussing the color of the sky, and you're telling me to imagine the sky but without Rayleigh scattering. I guess I could try to do this, but invariably it's just gonna turn out wrong. I don't really understand Rayleigh scattering all that well, and I'm not a meteorologist or an astronomer. Right now I'm picturing a darkly black sky with some shining bright woolly clouds in it (the sun illuminating them directly, and the twinkle of the stars in comparison being so low in brightness as to be drowned out to blackness).
But there are invariably going to be problems once my brain paints the picture: for example, a lot of the clouds won't be bright at all, they'll be completely dark, just a thin line being shone at by the sun. So in my case my brain just shuts up and says something like "nah, I don't really know what that looks like. Definitely not like the picture you just imagined, but something like that, I guess."
I feel the exact same way here. I can imagine a world unendowed with consciousness, I guess, like what you described with Casper. But my brain just goes, nah, I don't really understand. Take the image of the world exactly like the one with Casper in it but without him. To my thinking, if you have that world you haven't really gotten rid of Casper after all: all the consequences of his actions are still there. What have you really described, then? You've just made a new world that is exactly alike, except you've added the statement that Casper isn't there, but really, behind the scenes, there must actually be a Casper, because all the things he does are still being done. Do you see what I'm saying? The assertion that it can be imagined does not help me.
Regarding the impossibility point, I don't know the literature very well so I'll take your word that there is no impossibility proof. Nevertheless I feel like this leaves me where I was: I have no reason now to move to any particular view and remain uncommitted. Thanks for your reply.
Just imagine that? How? What am I supposed to be imagining when I'm imagining "consciousness"? It may be that "consciousness" only seems capable of coming apart from other facts when one has an underdeveloped, confused, or mistaken conception of what "consciousness" itself is. The supposed conceivability of p-zombies may be an artifact of some people being confused, being mistaken, or committing errors in introspection.
It’s not just that I can’t imagine p-zombies, I don’t think those who claim to imagine them are, in fact, capable of doing so. I don’t grant that they’re able to do so simply because they say they are, and I have yet to come across any compelling arguments or evidence that would lead me to believe that you or anyone else can, in fact, imagine a p-zombie.
Part of the reason for this is that I think people are routinely and demonstrably mistaken about the contents of their introspections, and that the kind of research Dennett describes in Consciousness Explained, along with more recent research, attests to the unreliability of our capacity to describe our own "inner states."
When I introspect, I can't imagine the sorts of things you claim to be able to imagine. So, perhaps you can, but in the absence of a compelling reason to believe you have distinctive phenomenology that I don't have, or a developed model of human cognition that supports the claims philosophers make about their ability to imagine or conceive of relevant sorts of things (I don't think there is a compelling case for such models), I remain, I believe, justifiably skeptical that philosophers who claim to be able to imagine things are actually able to do so.
I loved your piece on animal consciousness and have no comment.
On the FDT, I perhaps don't know it well enough, but I'd argue in the bomb scenario specifically that FDT could very well choose the box on the right.
I could construct an algorithm that would output right given the additional information that the predictor left me a helpful note showing they were wrong. This note is of course *extremely surprising* to me given my prior! Why should my algorithm continue running normally as if it didn't see the note? I see no problem with choosing the right box after seeing the note. In what way have I misunderstood the scenario?
But the people who output right conditional on the additional information do less well. If you had a strategy resolutely doing left, you'd do better on average.
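To make "do less well on average" concrete, here is a rough expected-cost sketch with assumed numbers (the predictor error rate, the dollar cost, and the finite disutility assigned to the bomb are all illustrative choices of mine, not anything fixed by the scenario or by the FDT literature):

```python
# Rough expected-cost comparison for the bomb case, under assumed numbers.
# Assumption: the predictor predicts what your policy would output, with a
# tiny error rate, and plants the bomb in Left when it predicts Right.

ERROR_RATE = 1e-24       # assumed predictor error rate
RIGHT_COST = 100         # taking the Right box costs $100
BOMB_DISUTILITY = 1e9    # illustrative finite cost for taking the bomb

# Resolute-Left agents are predicted to take Left, so no bomb is planted,
# except on the rare prediction error, where they walk into the bomb.
resolute_left = ERROR_RATE * BOMB_DISUTILITY

# Note-responsive agents are predicted to take Right, so the bomb is planted
# and the note is left; they then take Right and pay $100. (The rare error
# case, where there is no bomb and they take Left for free, barely matters.)
note_responsive = (1 - ERROR_RATE) * RIGHT_COST

print(f"Expected cost, resolute Left:   {resolute_left:.6g}")
print(f"Expected cost, switch to Right: {note_responsive:.6g}")
```

On these numbers the resolute-Left policy's expected cost is vanishingly small, while the note-responsive policy pays roughly $100 every time; that is the sense in which it does worse on average.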
So the evidence for him being definitely and egregiously wrong is disagreeing with you on two points where it is not at all clear from the provided text that you understand his arguments.
No . . . the evidence is that he is egregiously wrong on three topics where he expresses total certainty. And these were just the most egregious cases; I could have given more examples if I wanted to write a book-length report.
Yes. You could easily have written more words -- writing is an area of strength for you. Can you also suspend your own certainty about how well you understand him? That's the hard part. You aren't passing the intellectual Turing test on any of these examples.
You are the classic example of a deluded Eliezer fanboy. Instead of giving concrete criticism, where Matthew supposedly misunderstands Eliezer's arguments, you resort to vague platitudes. Whereas Matthew was very specific in his criticism of Yudkowsky.
Pathetic.
You don't seem to be doing any better of a job understanding me than you likely understand Eliezer. Specific criticisms that are off topic, for instance, yours to me, are not relevant and can be disregarded.
If you think that an admonition to do the intellectual work necessary to actually understand what someone is saying well enough to demonstrate this either to himself or someone else is a vague platitude, you might not be ready for primetime.
So you don't have any specific criticism, gotcha. Typical Eliezer clown, like I said.
Gotcha? Sure, pal. Here's a couple:
1) I'm not an Eliezer fanboy, so you're wrong off the bat.
2) Neither of you appear to demonstrate any open-mindedness about what the points you don't understand are about, or even willingness to imagine that you might not understand something. I won't waste my time explaining something under such circumstances.
" I know nothing about quantum physics, and he sounds persuasive when talking about quantum physics. "
I know a lot about QM, and he's confused and overconfident about that too. Also Solomonoff induction, Aumann's theorem, and Bayes.
Deutsch and Yudkowsky on the Many Worlds Interpretation
I don't think MWI is false or unprovable, but I don't think it is proven or even strongly supported by the kind of arguments from parsimony that Deutsch and Yudkowsky make. I also don't think the Copenhagen interpretation is the only alternative.
The first thing to note is that the question of which interpretation is correct is not easily settled by experimental evidence. Interpretations are ways of figuring out what the maths "means", but the maths makes the same predictions however it is interpreted.
The second thing to note is that MWI is more than one theory.
The third thing to note is that many-worlders are pointing at something in the physics and saying "that's a world"...but whether it qualifies as a world is a separate question, and a separate kind of question, from whether it is really there in the physics. One would expect a world, or universe, to be large, stable, non-interacting, and so on. A successful MWI needs to jump both hurdles: mathematical correctness and conceptual correctness.
The fourth thing to note is that all outstanding issues with MWI are connected in some way with the choice of quantum mechanical basis...a subject about which Deutsch and Yudkowsky have little to say.
The Basic Problem.
There is an approach to MWI based on coherent superpositions, and a version based on decoherence. These are (for all practical purposes) incompatible opposites, but are treated as interchangeable in Yudkowsky's writings.
The original, Everettian, or coherence-based approach is minimal, but fails to predict classical observations. (At all. It fails to predict the appearance of a broadly classical universe.) The later, decoherence-based approach is more empirically adequate, but seems to require additional structure, placing its simplicity in doubt.
Coherent superpositions probably exist, but their components aren't worlds in any intuitive sense. Decoherent branches would be worlds in the intuitive sense, and while there is evidence of decoherence, there is no evidence of decoherent branching.
A Brief History.
Everett's 1957 paper, which launched the Relative State Interpretation, assumed that observers measuring coherently superposed systems become entangled with them, going into superposed states themselves. It was soon noticed that this fails to predict that observers will generally see a single, unsuperposed world, or something very close to it. Since one version of an observer that has observed one outcome is in coherent superposition with their counterparts who observed the others, there is no reason why information should not leak from the one to the other, nor any mechanism by which observers can impose a classical (non-superposed) basis.
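In symbols, the sort of post-measurement state this leads to is roughly the following (a textbook-style sketch added for concreteness, not a quotation from the 1957 paper):

```latex
% The observer O, initially "ready", becomes entangled with the superposed
% system, ending up in a superposition of "having seen up" and "having seen down".
\[
\bigl(\alpha\,\lvert\uparrow\rangle + \beta\,\lvert\downarrow\rangle\bigr)\otimes\lvert O_{\mathrm{ready}}\rangle
\;\longrightarrow\;
\alpha\,\lvert\uparrow\rangle\,\lvert O_{\mathrm{sees}\,\uparrow}\rangle
+ \beta\,\lvert\downarrow\rangle\,\lvert O_{\mathrm{sees}\,\downarrow}\rangle
\]
```

Nothing in that state, by itself, singles out one branch as the world the observer sees, which is exactly the problem described above.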
Everett's original theory was designated the "relative state interpretation". It's appropriate that it was not called the Many Worlds Interpretation, since it does not imply the existence of multiple, non-interacting, classical worlds. Or a single one. In fact, it does not predict classical -- sharp and real-valued -- observations at all. The term "many worlds" is associated with Bryce DeWitt, who revisited Everett's work.
These two problems, the problem of non-interaction between superposed observers, and the basis problem, prompted much of the subsequent research into MWI. Things like the decoherence approach were developed to address them. But the complexity of a successful decoherence theory is hard to assess, since decoherence theories often require assumptions beyond core QM, such as special starting states.
Re: the zombie argument, maybe you’ve written more on this elsewhere, but what makes you confident that you can actually imagine zombies? I.e. you think you can imagine a world that is physically identical but in which no physical structures are conscious, but how do you know your imagination is actually possible?
The default is believing things are possible unless there's a reason to think that they're impossible, especially if they seem possible, as is true of zombies. If you said "unicorns are impossible" you'd have to give a justification. Unfortunately, to my mind, no such justification can be offered for zombies.
Zombies only seem plausible if you think of consciousness as a reified "thing", a cream which can be skimmed off of the physical world. Otherwise, if you think of it as a physical process, it isn't so clear how you can say "I'm imagining a world that is identical down to the last atom, except that when these atoms interact in certain ways, the expected physical/physiological process does not occur." Or even more basically, the zombie argument says "Imagine a world that is exactly same in all its particulars but is somehow and inexplicably different." If anything, the absurdity of the thought experiment reveals that the word "consciousness" is a folk concept that misleads more than it helps.
Yes, if you assume that consciousness is a physical process then zombies are impossible. But it seems to me that that gives us a very good reason to think that consciousness is not a physical process. Because they sure seem possible: I can picture them quite easily. Physical stuff is described in terms of behavior--the way that atoms and such move. But no part of that explains subjective experience.
Maybe you can't picture it and mistakenly think you can. That's what I suspect. I don't think you actually can picture what you say you can. Introspection and experience are often misleading.
Personally, I agree with you that Yudkowsky is often overconfident. But I think that you yourself are far too confident about the methods, approaches, and ways of thinking you've picked up from analytic philosophers, especially those most inclined to put so much stock in their seemings and least engaged with empirical considerations.
I don't think there are any controversial issues except theism where I express Yudkowskian near certainty.
I'm not talking about issues in the sense of the typical subjects of philosophical or scientific dispute. I'm talking about methods.
How confident are you that the approach that, for instance, Huemer takes to philosophy is reliable? Or, more generally, a rationalist or aprioristic approach to philosophy? I think these are bad methods. And what's worse is that it's hard to even evaluate their track record in the way we can evaluate predictions. If the problem is your methods, it can influence the way you view everything and distort your confidence levels about everything in the ambit of the method.
I think you're right to question Yudkowsky's confidence. But I think mistakes are spread everywhere and they're more obvious for a solitary thinker than when the mistakes are more foundational and more embedded in established institutions. There is strength in numbers, and bad ideas can persist because they carry the prestige of academic approval or command widespread support.
We're on the same wavelength I think. A big philosophical stumbling block is us thinking we can imagine things that we in fact can't. This can be particularly problematic for people who get really entranced by their own conscious experience, maybe by meditating too much, or who otherwise get caught up in their own heads (things I've all been guilty of!). It can also be difficult to tell someone "You think you can imagine such-and-such, but actually you're wrong."
Why can't he picture it? I can picture a brain with a body moving around and doing the things we do except there's no sense of "what it's like" to be that thing. Nothing problematic about that.
The only thing you can rationally put stock in is seemings. Your ideas, if rationally formed, are based on seemings. The only way to change those seemings are to find other seemings stronger than those seemings to epistemically defeat them. Michael Huemer, who commented in this thread, has an excellent theory of justification called phenomenal conservatism that this is based on. I think it solves belief justification. https://iep.utm.edu/phen-con/
There is no lack of engagement with empirical considerations in this post (at least so far from what I've read of this post, which is most of it), and empirical appearances are themselves seemings. E.g., there seems to be a tree in front of you.
You also can't arbitrarily say one form of seeming is false and accept the other form. You can't, for example, say that introspective seemings ("I am in pain") are invalid prima facie justification and accept empirical seemings as prima facie justification for no reason. You may say that our introspection is sometimes wrong, but so are our empirical appearances.
//You also can't arbitrarily say one form of seeming is false and accept the other form. You can't, for example, say that introspective seemings ("I am in pain") are invalid prima facie justification and accept empirical seemings as prima facie justification for no reason.//
Who is "you"? I didn't "arbitrarily" say that some seemings are false and others aren't. You don't know why I think what I think, or what arguments or reasons I may have for my views. I have a strong aversion to people entering discussions making assumptions about what other people think, and telling them what they've said when they haven't said any such thing. I don't typically engage with people do this sort of thing. I've found that I rarely end up having a productive discussion with someone who feels the need to tell me what I think and what I've said.
"Because they sure seem possible"
They would also seem possible if consciousness is physical, but the way in which it is physical is unknown. So the imaginability of zombies isn't proof of dualism.
It would also seem like I'm not a brain in a vat when I am a brain in a vat, but that doesn't really move me. The mere possibility that you are wrong shouldn't mean you throw out your intuitions. That's just extreme skepticism - we should go with what seems more likely on its face before just assuming it's wrong. And it seems like p-zombies are plausible, so we should just assume that until proven otherwise.
//And it seems like p-zombies are plausible, so we should just assume that until proven otherwise.//
It doesn't seem this way to me. In fact, they seem highly implausible, so should I assume they're not plausible (or possible, or whatever) until proven otherwise?
When people say that "it seems" a certain way, I find this strange. It seems that way...to who? To themselves? If so, why not say "it seems to me"? When one says "it seems" it often seems to me like they're making claims about how things seem to other people, not just themselves.
The possibility of zombies as such isn't important. They only matter as a way of disproving physicalism, which they don't do. The Mary's Room argument is much better, IMO.
I’m not assuming; to the contrary, the default would seem to be with the physicalist position. Consider what happens when someone gets bashed in the head hard enough and gets brain damage. Both his behavior and subjective experience change in remarkably consistent ways, almost as if they are linked, or two sides of the same coin…
And if they are in fact two sides of the same coin, what would the pro-zombie side say about technologies that could reliably determine facts about someone's state of subjective experience based on measurements of the brain, as we in fact have? Wouldn't that thereby weaken the zombie argument?
Also, you still haven’t explained what you mean when you say “I can picture them quite easily.” What exactly are you picturing? I’m not trying to be annoying or demanding an answer. I just think this is a crucial question that lots of people get hung up on. There are some things we convince ourselves we can imagine that we in fact can’t.
Another way to think about zombies—it is like a 19th-century biologist asking, "What if there were a world where all the atoms and organic molecules were arranged in exactly the same way as in our world, except there was no 'élan vital'?" In this case, "élan vital", i.e. life-force, and "consciousness" are analogous: both were initially conceived as unitary things or powers or forces, but over time came to be understood as imprecise terms for a collection of many smaller, mutually interacting processes.
In the elan vital case, elan vital doesn't exist. But consciousness clearly does. You can't reduce it to behavior because consciousness is about what experiences feel like rather than the way matter behaves.
I agree the physical affects the mental--that's totally consistent with dualism. If you think there are psychophysical laws that say that consciousness depends on particular brain states, then brain damage would obviously screw that up.
I'm picturing something that acts like a human and is the same down to the last atom but isn't conscious. That sure seems possible.
If you or anyone else can explain what a "psychophysical law" is or would be, and why it wouldn't just reduce to what we would call a "physical law", then I'll concede the entire argument. I think you have dualism baked into the structure of how you think about the zombie argument and what follows from it. You seem to pre-theoretically divide the world into mental and physical, which I see no justification for other than "It seems like that to me."
If "psychophysical laws" were in fact discovered, that would just mean that our understanding of reality would thereby be expanded just as with any new scientific discovery. We wouldn't have bridged some gap between "mental" and "physical"; those are just words and not ontological categories. There is only one world, after all.
//In the elan vital case, elan vital doesn't exist. But consciousness clearly does.//
That's just it though: I don't think what you think "consciousness" is does exist. I don't think there are qualia or phenomenal states, and I don't think it's clear that they exist.
There are views of consciousness (e.g., illusionism) that would treat what you presumably take consciousness to be as being very much mistaken. The analogy holds between *phenomenal consciousness* and elan vital: neither exists.
//I'm picturing something that acts like a human and is the same down to the last atom but isn't conscious. That sure seems possible. //
It doesn't seem possible to me. Just as I could be confused or mistaken in my failure to imagine p-zombies, you could be confused or mistaken in believing you can imagine them. Things can seem possible to you not because they are possible, but because you're conceptually confused.
> The default is believing things are possible unless there's a reason to think that they're impossible, especially if they seem possible
Why is that the default? If I said "unicorns are impossible" and you countered with "unicorns are possible", as far as a third observer is concerned, they are free to take either or neither of those positions as no evidence has entered yet either way.
I don't see why the default should be believing it's possible, especially if that default is then going to change my worldview immensely.
This seems hackable too: I could generate 10^x sentences like "Y are possible" and have you forced to assume they are possible until proven otherwise, leading to either a lot of work on your part or a potentially completely warped worldview. Why accept that consequence?
Wait, you think "Unicorns are metaphysically possible" is just as plausible as "There is not a single possible world where unicorns exist"? No way you *truly* believe that, dude.
What makes you confident there's a difference between actually imagining something and merely seeming to imagine it?
I know this is tangential at best, but my new favorite illustration of the difference between causal and evidential decision theory is one I heard on Sean Carroll’s podcast a couple of weeks ago: Calvinist predestination. If your fate in the afterlife is fixed before you’re born, a causal decision theorist will think it’s pointless to do a bunch of pious stuff to try to please God, but an evidential decision theorist will disagree.
TL;DR Calvinists are one-boxers.
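A toy calculation, with made-up numbers, of how the two theories come apart on predestination (the probabilities and utilities below are purely illustrative assumptions of mine):

```python
# Toy numbers, purely illustrative: "election" (one's fixed fate) is assumed
# to be statistically correlated with piety, even though piety cannot cause it.
P_ELECT = 0.5                  # assumed prior probability of being among the elect
P_PIOUS_GIVEN_ELECT = 0.9      # assumed: the elect tend to act piously
P_PIOUS_GIVEN_NOT_ELECT = 0.1
U_SALVATION = 1000             # assumed utilities
COST_OF_PIETY = 10

def p_elect_given(pious: bool) -> float:
    """Bayes' rule: probability of election conditional on acting piously (or not)."""
    like_elect = P_PIOUS_GIVEN_ELECT if pious else 1 - P_PIOUS_GIVEN_ELECT
    like_not = P_PIOUS_GIVEN_NOT_ELECT if pious else 1 - P_PIOUS_GIVEN_NOT_ELECT
    joint = like_elect * P_ELECT
    return joint / (joint + like_not * (1 - P_ELECT))

# EDT: your own action is evidence about your (already fixed) fate.
edt_pious = p_elect_given(True) * U_SALVATION - COST_OF_PIETY
edt_impious = p_elect_given(False) * U_SALVATION

# CDT: intervening on your action cannot change the fixed fate,
# so the probability of election stays at the prior either way.
cdt_pious = P_ELECT * U_SALVATION - COST_OF_PIETY
cdt_impious = P_ELECT * U_SALVATION

print("EDT prefers piety:", edt_pious > edt_impious)   # True
print("CDT prefers piety:", cdt_pious > cdt_impious)   # False
```

On these assumed numbers, EDT recommends the pious life (acting piously is evidence of election) while CDT calls it a pointless expense, which is just the divergence described above.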
It's so over for LessWrongcels
As you correctly point out, FDT is not a perfect decision theory and fails in certain cases. This is also true of CDT and EDT, and of course nobody has ever suggested that it was wrong to propose those theories or to write about them. And as far as I know, Eliezer has never claimed that FDT is a solution to the problem of ideal decision theory, or that critics of FDT are simply confused or irrational. I don't see any issue with his epistemic conduct in this case.
I don't find your ability to imagine a world that has the same physical properties but no consciousness to be much evidence here. I just don't think people are able to detect contradictions at that level of sophistication. It feels like saying "I can imagine a world where pi is a rational number, therefore the exact value of pi is a matter for physics, not math." But this is due to a failure of mathematical intuition. Most people do not have sufficiently sophisticated intuitions to detect contradictions at that level of granularity.
People's intuitions throw up an error when they come across very obvious errors, like married bachelors or square circles (which is another way of saying pi = 4). But intuition isn't a reliable guide to spotting mathematical (and therefore logical, and therefore metaphysical) impossibilities, and I don't see why consciousness ought to be different here.
So I didn't read all of this, but he thinks we're all algorithms because he's probably high on drugs and definitely bypassing rules to actively control and diminish others' lives for his own, and Aella's. He actually sees the control going on. Too bad in doing so, he actually loses any ethics and ultimately any larger perspective from actually learning via not being a piece of shit. If you didn't graduate high school, how can you say someone that did effectively is not capable of making her own decisions? He deserves to be in prison
I agree, and you might want to downvote this post: https://forum.effectivealtruism.org/posts/2S3CHPwaJBE5h8umW/read-the-sequences?commentId=9J76kQkF8k86zcyWF
I think this can also be extended to correspondence theory, scientific realism, the normativity of classical logic, and maybe moral realism (although it's hard to tell if he really is one since he invents his own jargon and it's not always clear what he means).
If you're really interested I can write a critique of these positions.
> And Eliezer does often offer good advice. He is right that people often reason poorly, and there are ways people can improve their thinking. Humans are riddled by biases, and it’s worth reflecting on how that distorts our beliefs.
Hey, you don't get to dump all those other criticisms and then say: "but there is that ONE thing where he doesn't make the same mistakes that he keeps making everywhere else (and, by the way, I am not a specialist in that field (?))" :p
Any zeteticians (professional skeptics) around?
P.S.: bonus links about Jordan Peterson:
https://web.archive.org/web/20181224093855/https://medium.com/s/story/peterson-historian-aide-m%C3%A9moire-9aa3b6b3de04
https://ndpr.nd.edu/reviews/french-theory-how-foucault-derrida-deleuze-co-transformed-the-intellectual-life-of-the-united-states/
https://medium.com/@Corax/argue-like-jordan-peterson-265e4c11b235
He's fine when offering advice on things that are not controversial, including how to reason better.
Anything can become controversial, I have seen people make fun of bias lists and people that post them.
And the stakes of getting this right are probably even higher than those of animal suffering...
If the brain's capability to model itself as a requirement for consciousness is understood as the capability for self-reference, then it is most certainly correct. What makes it so incredibly difficult to adjudicate between different modes of consciousness is the absence of systems theory in the American discourse, and it is this absence that is to blame for 95% of those scientific debates that have been running in endless circles around themselves for decades without ever bearing fruit. Read Luhmann; it's about as fruitful as it gets.
Unless a physicalist denied the existence of consciousness, they would not believe that if a block is moving right while thinking "I want to move right," it would describe all the physical facts to say the block is moving right. For physicalists who don't deny consciousness, the block's thinking "I want to move right" is also a physical fact, so that would need to be stated too. According to the block analogy as you've phrased it, physicalists are all consciousness deniers.
The descriptions you've attributed to the epiphenomenalist and interactionist are both descriptions the non-consciousness-denying physicalist would prefer over the description you've attributed to the physicalist.
I was talking about the fundamental physical laws. Consciousness would emerge as a higher level description.
How are you talking about fundamental physical laws in any of the descriptions? Each description is a particular one about the goings on with these thinking blocks.
I was assuming the blocks were fundamental. The physicalist thinks that consciousness emerges from the fundamental behavior of simple things, the nonphysicalist disagrees.
I mostly agree with your last two points but I think the first isn't quite "egregiously" wrong.
If I understand correctly, your main point about zombies is that although Eliezer's argument is a good one against epiphenomenalism, it doesn't exclude all non-physicalist theories of consciousness. However:
- Of the two alternatives Chalmers lists, type-D interactionism (~dualism) and type-F monism, Eliezer argued at length against the former in other places. He treats dualism separately from epiphenomenalism in his essay: "Why not postulate the true stuff of consciousness which no amount of mere mechanical atoms can add up to, and then, having gone that far already, let this true stuff of consciousness have causal effects like making philosophers talk about consciousness?"
- So, it seems that the main problem is that he hasn't addressed type-F monism. He does bring this up in his reply to Chalmers:
>Type-F monism is a bit harder to grasp, but presumably, on this view, it is not possible for anything to be real at all without being made out of the stuff of consciousness, in which case the zombie world is structurally identical to our own but contains no consciousness by virtue of not being real, nothing to breathe fire into the equations. If you can subtract the monist consciousness of the electron and leave behind the electron's structure and have the structure still be real, then that is equivalent to property dualism or E. This gets us into a whole separate set of issues, really; but I wonder if this isn't isomorphic to what most materialists believe. After all, presumably the standard materialist theory says that there are computations that could exist, but don't exist, and therefore aren't conscious. Though this is an issue on which I confess to still being confused.
But now we've shifted from "The argument is completely egregiously wrong" to "The argument successfully challenges epiphenomenalism, but it assumes that Chalmers is a strong advocate of E when he actually prefers D and F, and it doesn't conclusively prove that physicalism is correct because he doesn't also address type-F monism (although he does argue against D elsewhere and is at least somewhat open to F)." These are still significant errors, especially the first, but they're not as catastrophic as the headline implies, IMO.
Where does he address type D dualism?
Though even if he has, the main gripe is not that in his corpus of work he never rebuts interactionist dualism. It's that he flagrantly misrepresents the zombie argument, making a basic error that demonstrates total ignorance.