Discussion about this post

Arturo Macias

I will answer this in some detail, though of course for me anything over three pages is malpractice :-)

Suffice it to say for now that if "pain" is a penalty term in the neural network's utility function, you are obviously right.

A different question is whether there is a "self" to suffer that penalty. I can program a neural network with extreme aversion to this or that, but whether the structure is conscious depends on the network being complex enough to have a self able to suffer.
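As a minimal sketch of the "pain as a penalty in the utility function" idea, purely for illustration (the toy environment, the PAIN_PENALTY constant, and the tabular Q-learning setup are my own assumptions, not anything from the comment): attaching a large negative reward to one state is enough to produce strong avoidance behavior.

```python
import random

# Toy illustration: "pain" is just a large negative reward on one state.
# A tabular Q-learner acquires "extreme aversion" to that state, showing
# that aversive *behavior* is trivially programmable.

STATES = ["safe", "hot_plate"]
ACTIONS = ["stay", "move"]
PAIN_PENALTY = -100.0   # the "pain" term in the utility function

def reward(state):
    return PAIN_PENALTY if state == "hot_plate" else 1.0

def step(state, action):
    # "move" toggles between the two states; "stay" keeps the current one.
    if action == "move":
        return "hot_plate" if state == "safe" else "safe"
    return state

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1

state = "safe"
for _ in range(5000):
    action = (random.choice(ACTIONS) if random.random() < epsilon
              else max(ACTIONS, key=lambda a: q[(state, a)]))
    nxt = step(state, action)
    best_next = max(q[(nxt, a)] for a in ACTIONS)
    q[(state, action)] += alpha * (reward(nxt) + gamma * best_next - q[(state, action)])
    state = nxt

# The trained agent strongly prefers actions that keep it out of "hot_plate",
# i.e. it behaves as if it "feels pain" there, with no claim of a self involved.
print({k: round(v, 1) for k, v in q.items()})
```

The learned values show avoidance, which is the commenter's point: such behavior says nothing by itself about whether there is a self that suffers.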

No amount of behavioral evidence would convince you that a Boston Dynamics dog suffers, and the reason is that you need more than behavior. As I commented on the Shrimp post, a broad extension of the moral circle requires a broad theory of consciousness. If you are a naturalistic dualist and think that consciousness is epiphenomenal, the difficulty is large and metaphysical.

JG

This is one of my favorite things you've ever written. Incredibly informative.

Re the relationship between neuron count and intensity: You raise good arguments, yet I'd still be surprised if people who study this stuff concluded a single neuron could feel pain. It seems like there is a relationship between neuron count and pain (as well as sentience more broadly), just not a linear one, and not one we understand well.

