Here's the explanation that dissolves the hard problem of consciousness:

Imagine an idealized Amazon shopping cart code construct. The shopping cart is composed of smaller constructs - a graphics object like a bitmap, a data structure holding the items you put in the cart, a string that is your delivery address, and various methods for querying and updating those internals. Some of these constructs are private and accessible only by the shopping cart itself, while others are public and can be accessed by other high-level constructs.
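
Concretely, here's a minimal sketch of the kind of construct I mean (in TypeScript, with made-up names like ShoppingCart and CartItem - obviously not real Amazon code):

```typescript
// Hypothetical sketch of the shopping cart construct described above.

interface CartItem {
  name: string;
  price: number;
}

class ShoppingCart {
  // Private internals: only the cart itself can touch these.
  private thumbnail: Uint8Array;      // graphics object, e.g. a bitmap
  private items: CartItem[] = [];     // data structure holding the items
  private deliveryAddress: string;    // plain string

  constructor(deliveryAddress: string, thumbnail: Uint8Array) {
    this.deliveryAddress = deliveryAddress;
    this.thumbnail = thumbnail;
  }

  // Public methods: the only surface other constructs can access.
  addItem(item: CartItem): void {
    this.items.push(item);
  }

  containsItem(name: string): boolean {
    return this.items.some((item) => item.name === name);
  }

  totalPrice(): number {
    return this.items.reduce((sum, item) => sum + item.price, 0);
  }
}
```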

Now, are you going to duplicate the method that lets the shopping cart be searched across all the other constructs? Will you add a queryShoppingCartForItemsMethod() to each and every Amazon coding construct used in AWS, the Amazon homepage, the Amazon wishlist, and so on? No, that's a really stupid way to design interaction across multiple distinct submodules.

Instead, you're going to expose a simplified model of the shopping cart construct to all the other coding constructs, so that when they work with the shopping cart they don't have to process all the information it contains, which would be stupidly wasteful. You provide an encapsulated object - the shopping cart itself with all its subfields - but make the subfields private and add queryingMethods(), so you never have to expose the implementation to every other coding construct that wants to use the cart. This achieves interoperability across multiple distinct submodules without each one needing to know how to extract every piece of information from every other submodule.
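
Here's the payoff, continuing the same hypothetical sketch: the homepage and wishlist constructs only ever talk to the cart through its public methods, so neither needs its own copy of the query logic or any view of the private fields:

```typescript
// Hypothetical consumers of the ShoppingCart sketched above. Neither one
// re-implements the cart's query logic or sees its private fields.

function renderHomepageBanner(cart: ShoppingCart): string {
  // The homepage only needs a summary, not the cart's internals.
  return `You have items totalling $${cart.totalPrice().toFixed(2)} in your cart.`;
}

function suggestFromWishlist(cart: ShoppingCart, wishlist: CartItem[]): CartItem[] {
  // The wishlist module asks the cart questions through its public interface
  // instead of duplicating a queryShoppingCartForItems() of its own.
  return wishlist.filter((item) => !cart.containsItem(item.name));
}
```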

The shopping cart is analogous to consciousness - the graphics object, the data structure holding items, and the delivery address are analogous to your senses. Rather than duplicating visual processing in your memory module, language module, attention module, and so on, your visual system encapsulates the information into a higher-level construct - a "quale" - which exposes only certain aspects of its implementation to other modules. This representation is inherently distorted - you're only shown what evolution designed to be useful to other modules in the brain - which is why you might be tempted to say, e.g., "That quale is just red simpliciter," even though that's a stupid thing to say for obvious reasons: the quale has to encode some level of brightness in order not to be black, it has some sort of distance information packed with it (e.g. it's "next to" the green quale but not the smell quale), and so on.

There is nothing over and above structure and function with consciousness - you're just given a misleading picture by introspection, since you can't, for example, introspect and discover what sort of message-passing architecture your brain uses - just like the Amazon homepage gets a misleading picture of the shopping cart construct, since the cart decides which aspects of its implementation to expose to foreign submodules.
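
To push the analogy one step further, here's the same pattern with a made-up "visual system" module that exposes only a compressed summary - the "quale" - while keeping its pixel-level processing private (purely illustrative, not a claim about actual brain architecture):

```typescript
// Hypothetical module that hides its raw processing behind a narrow interface,
// the same way ShoppingCart hides its internals.

interface ColorQuale {
  hue: string;        // e.g. "red"
  brightness: number; // has to carry some brightness to not be "black"
}

class VisualSystem {
  // The raw pixel-level data stays private, like the cart's internals.
  private rawPixels: Uint8Array;

  constructor(rawPixels: Uint8Array) {
    this.rawPixels = rawPixels;
  }

  // Other modules (memory, language, attention) get only this summary.
  currentQuale(): ColorQuale {
    // Stand-in computation; the point is the narrow interface, not the algorithm.
    const brightness =
      this.rawPixels.reduce((sum, p) => sum + p, 0) / (this.rawPixels.length * 255 || 1);
    return { hue: "red", brightness };
  }
}
```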

Is there any way to obtain a transcript of this conversation?

Yes, it's at the top of the page.
