
This is one of my favorite things you've ever written. Incredibly informative.

Re the relationship between neuron count and intensity: You raise good arguments, yet I'd still be surprised if people who study this stuff concluded a single neuron could feel pain. It seems like there is a relationship between neuron count and pain (as well as sentience more broadly), just not a linear one, and not one we understand well.


Thanks!


> Over time, more and more creatures have been recognized as sentient, such that prior to the 1980s, it was widely believed that animals didn’t suffer at all.

:c


"more animals are conscious rather than feer."


I made a post about this recently (https://flyinglionwithabook.substack.com/p/why-arent-animal-welfare-activists), but do you support restrictions on abortion after 16 weeks? A fetus will withdraw from being touched as early as 7 weeks. At 16 weeks, if you poke a needle into a fetus, he'll move away vigorously and show increased stress hormones, comparable to the kind of hormonal increase seen in children and adults who are in pain. Giving a 16-week-old fetus anesthesia stops both the reactive movements and the hormone increases.

Dr. Gary L. George gave the following testimony to the Ohio Legislature about his experience with potential fetal pain:

"While doing my first ultrasound rotation, I observed my first “selective reduction” procedure. A woman had undergone IVF treatment for infertility. She was pregnant with triplets. She and her husband decided that they could only handle having twins and wanted to undergo a “selective reduction” of one of the triplets at about 14-18 weeks. I observed while the ultrasonographer scanned the three babies and provided live images so that the obstetrician could aim a long needle through the mom’s uterus into the chest of one of the baby’s hearts in order to make a lethal injection. As the sharp needle touched the baby’s chest, the baby immediately withdrew and started to rapidly move his arms and legs. The needle was unable to penetrate the chest. The mother started crying when she saw the horrific live images on the screen. Her husband told her not to look and the obstetrician instructed our tech to turn the screen away from the mother’s view to hide the reality of what was happening. The obstetrician made a second and third attempt on the same baby with the same immediate withdrawal and flailing about by the baby but was again unsuccessful. Clearly, the baby was fighting for its life. At that point, the obstetrician decided to try and target another one of the triplets. It was terrifying to see this small human fighting to stay alive. I felt physically ill. A wave of nausea swept over me and I thought I was going to vomit and left the room. I know from talking to the ultrasonographer that the obstetrician was eventually 'successful' in penetrating the chest and heart of one of the triplets. I also know that from that point on, I was no longer ambivalent about abortion. The baby that I saw that day felt pain and suffering. This was not just some automatic reflex. "

(https://www.legislature.ohio.gov/legislation/133/sb23/committee)

Given all this, shouldn't we assume that a human fetus can suffer at 16 weeks gestation, if not earlier? If so, would you support restricting abortions past that date? A little over 35,000 surgical abortions occurred in 2021 on fetuses past that developmental stage. The procedure involves either cutting or tearing the fetus to pieces, and then sucking those pieces down a narrow vacuum tube. No anesthesia is administered, even though the American Society of Anesthesiologists recommends fetal anesthesia for fetal surgeries at this same level of development. If they can suffer, then I imagine they suffer quite a lot from being ripped to pieces.

What's your opinion on passing laws to either ban abortion after 16 weeks, or require anesthesia for the fetus during the procedure? As it stands, Utah is the only state that requires it (though since Dobbs it's become a moot point, since abortions after 18 weeks are now illegal in the state).


I don't know the exact threshold but 16 weeks might be right. I'd have to look more into the science.


The fetus and shrimp thresholds shall be consistent! Good luck.


> Most shockingly, crayfish self administer amphetamine, which is utterly puzzling on the assumption that they aren’t conscious. Hard to imagine that a non-conscious creature would have a major preference for drugs.

It's not as shocking as you think: amphetamine 'hijacks' reinforcement conditioning, a process which uses dopaminergic neurons (where dopamine transmission is, sanding off a lot of complexity, the signal to do a thing again) even in the sea slug Aplysia, an organism boasting 20,000 neurons in total. Amphetamine acts directly on dopamine signaling, and so is self-reinforcing via the most direct mechanism available. It turns out that the circuitry for learning about what's helpful and harmful in your environment is really old and fairly well conserved.
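A minimal sketch of that conserved mechanism in Python (a Rescorla-Wagner-flavored update; the names and numbers are illustrative, not a model of Aplysia or any real organism):

```python
# Reinforcement conditioning in miniature: a dopamine-like prediction-error
# signal nudges the tendency to repeat an action. All values are illustrative.

def update(tendency: float, reward: float, lr: float = 0.1) -> float:
    """Strengthen or weaken an action tendency by the prediction error."""
    prediction_error = reward - tendency  # the "do that again" teaching signal
    return tendency + lr * prediction_error

tendency = 0.0
for _ in range(20):
    # A drug acting directly on the dopamine pathway delivers a large,
    # unearned "reward" on every trial, so the tendency climbs regardless
    # of whether the behavior was actually useful.
    tendency = update(tendency, reward=1.0)

print(f"tendency after 20 drugged trials: {tendency:.2f}")  # ~0.88
```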

That said, some of the most reinforcing substances to humans don't particularly affect consciousness or even feel good (nicotine is a great example), and some of the most consciousness-altering aren't reinforcing at all (psychedelics). I'm not sure how much the presence of reinforcement conditioning in Aplysia or any other organism, on its own, changes my estimation of the likelihood it's conscious.

Personality is another factor that might just ride along. It's dependent on both complexity and within-species variability, and when the latter is constrained, personality vanishes no matter how complex an organism's cognition or how robust its conscious experience is.

I think the evidence about nociception is most compelling and establishes at minimum something like a jointly sufficient condition for pain. If we have compelling reasons to think an organism is conscious, the presence of nociceptive circuits strongly suggests that organism feels pain. I also find the "how much can you fit in your consciousness" argument compelling vis-à-vis pain intensity. (It reminds me of one of my favorite philosopher anecdotes, although I can't remember who the philosopher in question was... the upshot is that, knowing both how much his cat liked sitting in front of the fireplace and how few the cat's pleasures were, he would let himself go cold, which bothered him less, so that the cat could keep warm in the spot closest to the fire.)

You don't cover a ton of neuroscience evidence here, but my exposure to it has pushed my inferences in the same direction. This was a nice synthesis of empirical and philosophical considerations, BB; I like this and all the other shrimp work very much.


Thanks!

Seems weird that a non-conscious creature would get addicted to drugs. There are no examples of clearly non-conscious creatures doing this, while we know that the normal way it works is by creating a desire to ingest the substance.


Getting addicted to drugs is just a special case of reinforcement learning, though, and because neurons work through chemical signaling, the ability to get addicted to drugs is practically a necessary property of the implementation of an RL algorithm in neurons, and RL algorithms can be quite simple. Desire (maybe more accurately, urges) is a prominent feature of addiction in humans because the associated drives are very powerful relative to other learned things (that's why addiction is so maladaptive), but it's far, far from the case that everything learned via RL is mediated by conscious experience. The face of addiction in humans is misleading for this purpose, and it's helpful to understand that, as mentioned, addiction to drugs is just a special case of addiction in general, which is a special-case maladaptation of a very general learning mechanism. (This actually clears up a lot of popular confusion about addiction, and about what it is that people are trying to talk about when they talk about dopamine these days, but that's another overlong comment or two.)
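To show how simple, here's a toy value learner (epsilon-greedy; the action names and payoffs are hypothetical, not anyone's model of a crayfish):

```python
import random

# Two actions: "eat" pays a modest, variable reward; "drug" short-circuits
# the reward signal with a large fixed payoff. All numbers are made up.
values = {"eat": 0.0, "drug": 0.0}
LR, EPSILON = 0.2, 0.1

def reward(action: str) -> float:
    return random.uniform(0.0, 1.0) if action == "eat" else 2.0

for _ in range(500):
    if random.random() < EPSILON:              # occasionally explore
        action = random.choice(list(values))
    else:                                      # otherwise act greedily
        action = max(values, key=values.get)
    # standard incremental value update
    values[action] += LR * (reward(action) - values[action])

print(values)  # "drug" ends up valued far above "eat"
```

A few lines of bookkeeping suffice to produce persistent "self-administration," with no machinery for experience anywhere in sight.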


But the fact that they have reward learning algorithms that make them ingest substances is predicted if they're conscious, surprising if they're not.


Oh I see what you're thinking. Some responses are more stereotyped than others. But "ingest thing more" is a pretty predictable parameter to be able to modify if you're a thing that eats.


Right, sure, you can always invoke the explanation that it responds behaviorally to rewards but isn't conscious. But P(responds behaviorally to rewards like drugs | consciousness) > P(responds behaviorally to rewards like drugs | ~consciousness).
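In odds form (a sketch of the inference; the size of the likelihood ratio is left open, and any numbers would be placeholders):

```latex
% B = drug-seeking behavior, C = consciousness. Bayes' rule in odds form:
\frac{P(C \mid B)}{P(\neg C \mid B)}
  = \frac{P(C)}{P(\neg C)} \cdot \frac{P(B \mid C)}{P(B \mid \neg C)}
% Whenever P(B|C) > P(B|~C), observing B raises the odds of C --
% a likelihood ratio of 2, say, doubles whatever prior odds you start with.
```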


I just think your intuitions about what must be the case for RL to occur are misplaced. I don't think it's worthless as evidence, but I find other behavioral and neurological evidence more persuasive.

Basically, the ability to get addicted to drugs is something I would expect in almost any organism that can learn to modify its behavior in response to environmental contingencies in more than a couple of ways. Even if you think that kind of behavioral flexibility is good evidence for consciousness, addiction capacity shouldn't take you much farther, and I also think there's better ground to stand on.


More generally I am skeptical of empirical investigations of consciousness, since every phenomenon can always be explained materially.


This means we can never decisively prove some set of NCCs (neural correlates of consciousness), but we can still get decent evidence.


There's a worthy critique of the limits of the approach here, but I don't think it's fair to say I just explained a phenomenon away materialistically. I explained the material basis and then showed that material basis is dissociable from the experiential phenomenon of interest. If the example had been opioids rather than amphetamines, I wouldn't have quibbled, because opioids activate pleasure circuitry, not just the reward-learning circuitry that amphetamine (more indirectly) engages.


While I don’t find most of your criticisms of Eisemann prima facie convincing, a lot of this information about insects is both new and surprising to me. I took Huemer’s claims about insect suffering and citation of Eisemann in “Dialogues on Ethical Vegetarianism” at face value, and when I looked into it I was skeptical of people denying his claims (from the vocabulary being used it sounded like they took any kind of reaction to harmful stimulus as being “pain,” which would make microbes morally relevant). This is much better!

I mostly skimmed because this was a long post. I’ll have to read the whole thing later. Most surprising, though, is the idea that insects actually react to severe injuries, because I’ve paid attention to how insects react to losing limbs, and they don’t seem to care. I think dismissing this as an anecdote is like dismissing claims that the sky is blue as an anecdote!


When Mike Tyson gets hit in the head, he seems to ignore it. Does this mean he doesn't feel pain?


Though I do love the idea that bugs are all little clones of Mike Tyson. This is an entertaining and intellectually compelling narrative that Big Pest Control wouldn’t want you to know about


This is hilarious! XD


Does Mike Tyson lose entire limbs in combat? Half of his body? Also, Mike Tyson is in a conscious combat scenario. If Tyson got sucker punched randomly, he would likely react differently, though perhaps not as strongly as someone who didn’t have normal combat experience. Bugs don’t have Mike Tyson’s training or adrenaline when they randomly experience extreme injuries.


But you might have evolutionary pressures for insects to feel pain weirdly or ignore certain kinds of pain.


“Might” is still “might.”


It's likely that if they have simple brains that can't focus on many things at once, they'd focus on the most salient thing (e.g. sex).


Maybe, but I don’t have the knowledge to say that or otherwise.


I will answer this in some detail, though of course for me more than 3 pages is malpractice :-)

Let it suffice for now to say that if “pain” is a penalty in the utility function of the neural network, you are obviously right.

A different question is whether there is a “self” to suffer that penalty. I can program a neural network with extreme aversion to this or that, but the consciousness of the structure depends on the network being complex enough to have a self able to suffer.

No amount of behavioral evidence would convince you that a Boston Dynamics dog suffers, and the reason is that you need more than behavior. As commented in the Shrimp post, a broad extension of the moral circle needs a broad theory of consciousness. If you are a naturalistic dualist and think that consciousness is epiphenomenal, the difficulty is big and metaphysical.


Sure, but in every case we only figure out conscious states through behavior--we can't observe them directly. At some point, when a thing looks and acts like you'd expect it to if it was conscious, it's reasonable to assume it is.


No, because part of our intuition of consciousness is related to complexity. All the destruction avoidance and reproduction pursuit you describe is a result of being a product of evolution. You find similar examples in cell behavior. None of this tells us whether there is a self on the other side of that penalty.

Our intuition is that consciousness comes from complexity and information integration, and its intensity depends on those characteristics.

Now, the size of the cockroach neural network is 0.1% of the human one… and complexity is often considered to be super-additive.
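To make the super-additivity point concrete (the exponent is a placeholder, not an estimate):

```latex
% Suppose intensity of experience I scales super-additively with neuron count N:
I \propto N^{\alpha}, \quad \alpha > 1
% With a cockroach at 0.1% of the human neuron count,
\frac{I_{\mathrm{roach}}}{I_{\mathrm{human}}}
  = \left(\frac{N_{\mathrm{roach}}}{N_{\mathrm{human}}}\right)^{\alpha}
  = 0.001^{\alpha}
% e.g. \alpha = 1.5 gives about 3 \times 10^{-5}, far below the linear 10^{-3}.
```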

Having a penalty without a self implies there is no pain.


Why think more complex brains (measured by neuron counts) experience significantly more intense experiences?


Why not individual eukaryotes? Why not electrons? If you think they are simple, try to solve the Schrödinger equation as they do.

Consciousness is noumenal. I am as much a panpsychist as you, but in the end you need a theory of consciousness, not a theory of life behavior. For me it is clear that information integration and representation create consciousness, and as information-integration machines we are massively bigger than insects.

But of course, consciousness is noumenal, and what we know about consciousness comes from pure extrapolation: this is easy for other humans, and already impossible for the bat.


Because eukaryotes and electrons don't behave as if they're conscious--having flexible behavior to avoid harmful stimuli, for instance.


Now there are lots of video games that have a complex behavioural repertoire. Even algorithms that can speak like a politically correct professor.


I agree with this other commentator's line of reasoning, but to add: I also think that even when there is a subjective self, it's normal to suppose attenuation of the "vividness" of the conscious experience as the nervous system decreases in size and complexity, even if there are unknowns as to how this works.

One can see this even in one's own experience. Dulling of the nervous system (being drunk) or a lack of emotional reflection on the pain can already help a lot, even with the same brain and functional reactions.


<<I’ve also argued that pain in simple creatures is likely pretty intense. If pain serves to teach creatures a lesson, then simpler creatures would need to feel more pain, and creatures with simpler cognition would have their entire consciousness occupied by pain. In addition, the behavioral evidence reviewed by Rethink Priorities seems to suggest that it’s likely that many animals experience a lot of pain—they react quite strongly for stimuli, not like a creature in a dull, barely-conscious haze.>>

One hell of a job your God is doing. I am sure suffering of insects leads to some great Soul-Building. Congrats to Her.


Is there a reason we are certain that, say, paramecia do not feel pain? They avoid noxious stimuli and are capable of associative learning. A lot of the definitions of pain revolve around having a nervous system, but if we discovered a creature that acted just like a human despite having some sort of strange distributed chemical information processing rather than a nervous system we'd probably say it feels pain.


They have no brain or central nervous system to integrate the information from the various signals.


Is a brain or a central nervous system required to have subjective "experience"? Presumably associative learning means that information is being stored and recalled somewhere, whether it's chemical or electrical.


Wow, my Spotify program uses a REST API to retrieve and store data about the songs it plays. This is more likely if *every* program also works this way. I mean look, this program and this program and this program all communicate by storing and receiving data, so they all must instantiate and make use of Spotify's REST API. I don't even have to crack open any computers or decompile any programs to know this, just appeal to the fact that REST APIs are described as communicating and other programs are also described as communicating, so clearly every single program has a REST API.


Huh?


Pain = REST API, animals = programs, description of communication = the way you describe conscious behavior. The point was that your approach to the mind is fundamentally unserious, like trying to divine the contents of a program with vague ideas like: a REST API is involved in communication, programs communicate, therefore every program has a REST API in it. This is just nonsense. Your ideas of pain and consciousness are undeveloped and there is no logical flow to anything you're saying.


If your point is just "maybe these creatures are behaviorally like creatures in pain but aren't conscious," that's definitely possible but a pretty bad explanation of why so many things align as if they were in pain.


No, my point is that your inquiry is fundamentally confused. When scientists conduct empirical investigations into "pain," they have to find some way to operationalize the concept. They don't just investigate "pain" simpliciter, they have to construct theoretical accounts of it, and there are multiple competing accounts of what "pain" could be. You use an undifferentiated concept of "pain" because you have no familiarity with how to conduct empirical studies, and that makes your conclusions and arguments in this article surrounding pain meaningless.

For example, let's say I want to make a historical conclusion about slavery. Huge immediate problem: multiple different societies across multiple different historical time periods have multiple differing norms about what is just and unjust work, the extent that physical punishment is incorporated and justified in working, the extent that people work at all, the quality of the jobs performed, and so on. These factors will inform what it is that I choose to call slavery and the conclusions I will draw from my theory about slavery.

For example, I might consider slavery only forms of work where workers are whipped. Some ancient societies might brutalize their workers with far worse punishments, but because they existed before whips were invented, I would not count those societies as slave holding ones. Or maybe I consider slavery whenever an employer has life-destroying privileges over their employees - that is, I consider slave societies to include those that have wage slavery. In some societies employers and employees are allowed to whip and chain each other, but because there is equality in fighting to the death, neither employer nor employee count as slaves, and so those are not slave holding societies. Or maybe I consider slavery to be only when black workers are mistreated by white workers, so many African countries never practiced slavery because there were no white people there, but American countries did because whites ruled over blacks in their labor relations. Or maybe I consider slavery to be people working any job for a living that has remarkably detrimental health effects, like being drafted as a soldier, or working in an oil field, or working in a coal mine without a respirator for 16 hours a day, or being a chimney sweeper.

Your approach is like somebody who wants to study "just slavery" - fundamentally misguided. There is no "just slavery" and there is no "just pain." Theories of pain are going to need to be operationalized according to species, individuals, neuronal architectures, and holistic systems that are being studied. Even taking a toy example like "Pain is whenever nociceptors fire" is going to fall flat when compared across different species because other species are simply not going to all have evolved nociceptors or the cognitive architecture that nociceptors play their embedded, holistic, functional role in - that is, unless you make the conscious decision that you only care to call nociceptors firing pain. You however don't even seem to be aware that you need to operationalize pain in this sort of way, and so your inquiry is more useless than somebody who just sticks with the toy nociceptor example and rules out any species from feeling pain that doesn't have nociceptors, because you're making spurious conclusions about different competing empirical accounts of pain that are all informed by different motivations and methodologies and thus don't really have a common thread among them.

Your quote of what you think my point is - "maybe these creatures are behaviorally like creatures in pain but aren't conscious" - is at far too high a level of granularity to even be interpretable. "Behaviorally like" is a gloss term that hides all the complexity in competing accounts and interpretations of animal behavior. So too with "in pain" and "aren't conscious." You have not offered anything like a definition for any of these terms that would make them operationalizable, and so it's impossible to evaluate that sentence in any meaningful detail. "Pretty bad explanation of why so many things align as if they were in pain" - you used an undifferentiated concept of "pain," and so I maintain this is silly for you to state, because it's impossible for other people to figure out what you mean by pain, and thus impossible to evaluate any evidence for or against your conception(s) of pain.


Oh wow, it gets worse. Just saw the demented strawman.

"like trying to divine the contents of a program with vague ideas like: a REST API is involved in communication, programs communicate, therefore every program has a REST API in it. This is just nonsense."

Such a clear case of straw manning. So uncharitable & snarky too. Disgusting. 🤮🤮🤮

Can't even accurately represent Matthew's reasoning. The bare minimum. Such a shameful, and demented, caricature. Ideology & motivated reasoning rotting the brain.


Also, Matthew is frequently guilty of this even in purely philosophical posts. He'll usually say something like "I'm a moral realist because I think pain is morally bad," where the same mistake he makes is that there are competing accounts of badness and he fails to disambiguate which one he means. Most moral antirealists probably agree that pain is bad - because they conceptualize badness in an antirealist-friendly way. Matthew should instead clarify by saying "I'm a moral realist because I think pain is stance-independently morally bad" - this way he doesn't falsely imply that antirealists don't think pain is morally bad, and differentiates which badness concept he is using.


What is a more accurate representation of Matthew's reasoning? I think Matthew's concepts of pain, behavioral similarities, and consciousness are undifferentiated messes. They are so zoomed-out and detached from any particulars that it's impossible to construct well-reasoned inferences with them, and my analogy with REST APIs being just a form of communication or just a form of retrieving and storing data was a parody of that. A REST API (like pain) is a very highly specific computer science (physiological) construct - abstracting away from that to just saying "communication behavior" or "pain behavior" renders the concept meaningless in inferences, because you can draw inferences of the sort in my analogy: REST API = communication, programs communicate, therefore all programs probably implement REST APIs. I think Matthew is guilty of using this same exact ultra zoomed-out/underspecified/not even differentiated concept of pain in his piece, and it's indicative of his lack of skill in operationalizing concepts to conduct an empirical study of them.


Couple of ambiguous terms there. What's meant by "undifferentiated mess"?


Even better, the linked sources in Matthew's posts make my point for me. From the "What Neuron Counts Can and Can't Tell Us about Moral Weight" google doc:

>In general, there is great uncertainty about the degree to which neuron counts are correlated with intelligence, though how strong we view this correlation will depend on how intelligence is defined and what is thought to count as a “strong” correlation. In regards to the connection between intelligence and moral standing, there also is much uncertainty about the extent to which intelligence matters for moral weight.

Matthew should really try reading his sources before linking them. He frequently comes to the complete opposite conclusion that he should have if he was interpreting the source correctly, like when he linked an interview piece and used it for evidence that physicists think there is converging physical evidence that the universe is infinite when the interviewees actually did not say that at all https://open.substack.com/pub/benthams/p/against-against-the-infinite?r=1r9dwz&utm_campaign=comment-list-share-cta&utm_medium=web&comments=true&commentId=63850481


Imagine I told you to go to a place and get me a tool. There would need to be far more clarification of the words "place" and "tool" in order for you to succeed in communicating with me. Likewise, "pain" isn't just a simple toy word such that we can just go out and check whether other animals and insects are "in pain" - if it were, we probably wouldn't need to do any empirical investigations. Instead, we need to know what we're looking for. Matthew doesn't succeed in clarifying the word pain, or telling us how it works, or telling us its neuronal architecture, or telling us what we need to look for in other species. He merely appeals to random things associated with pain, which is as clarificatory as if I told you that the place to get me the tool is on Earth or has an entrance.

Compare this with a REST API https://en.wikipedia.org/wiki/REST#Architectural_properties that is well defined and is easy to check for. We don't look for communication proxies like storing and retrieving info or something happening on the monitor to determine whether a program uses a REST API. We know what architecture and behavior to look for because it's specified by the design pattern. If we used Matthew's method instead we would be looking for undifferentiated communication mechanisms like one part of the program sending info to another part of the program, which is not very clarificatory. This is further complicated by "pain" being a contested theoretical term with multiple competing accounts of it, whereas alternatives to REST are just considered alternatives https://www.pubnub.com/blog/7-alternatives-to-rest-apis/ - what amount of time has Matthew dedicated to discussing alternate theories of pain, or is he just assuming there is only one single true pain that everybody is researching?


Regardless of the snarkiness, where is the strawmanning here, though? What exactly does KK's analogy miss in Matthew's reasoning?

As far as I can tell, Matthew's reasoning is about noticing that we are conscious and behave in a particular way, then noticing that clearly non-conscious things do not behave in such a way, and then inferring that other things, whose consciousness status is debatable and which also behave in such a way, are indeed conscious.

And the REST API analogy seems to work in exactly the same way. We know that some programs use a REST API and behave in a particular way. We also know that some things that clearly do not use a REST API do not behave in this way. Therefore the same reasoning pushes us to the conclusion that everything that behaves in this way and may or may not use a REST API is likely to be using a REST API.

What am I missing?


Assuming you are approaching this in good faith: Matthew gave 5 different arguments. You can read the article to see what they are. None of them correspond to the reasoning pattern:

"like trying to divine the contents of a program with vague ideas like: a REST API is involved in communication, programs communicate, therefore every program has a REST API in it. This is just nonsense."

You can verify this yourself. Look for this reasoning pattern in Matthew's article. You'll never find any paragraph expressing this.

And, to be clear, properly shaming retarded strawmen is not "snarky".


> Matthew gave 5 different arguments. You can read the article to see what they are. None of them correspond to the reasoning pattern

The second argument is explicitly about that. And I'm surprised that you don't see it. Let's look at it together to figure it out.

> Second, these creatures act in most ways like they’re in pain. If an insect or shrimp is exposed to damage, they’re struggle with great effort and try to get away. Either they are in pain or they evolved a response that makes them behave like they’re in pain. But if a creature struggles and tries to get away, a natural inference is that they’re in pain.

In other words:

1) A huge class of creatures C, including insects and shrimps, try to get away from the source of harm

2) humans do it due to pain

3) therefore every creature from C, including insects and shrimps, feels pain

The argument seems to be isomorphic to the alleged strawman:

1) A huge class of programs C communicate

2) REST API programs do it due to REST API

3) therefore every program from C uses REST API

So, what am I missing?


Typically when you drop cheap zingers like “that’s nonsense!” etc you need some reasoning why it’s BS. So do you actually have any arg that “Your ideas of pain and consciousness are undeveloped”?

@BenthamsBulldog People like @TheKoopaKing just offer emotional sperging & vague shit talking hoping smarter folks don’t notice they lack a critique. I’d either force a retraction or extract the premise-conclusion arg.


I posted a longer comment in response to Matthew, but I don't see what I need to clarify with respect to

>Your ideas of pain and consciousness are undeveloped

Matthew doesn't explain what pain or consciousness concepts he is using in this post, so his ideas of them are simply undeveloped. For example, consider competing studies on employment. One of them operationalizes "employment" to mean that a person is making money from any monetary source whatsoever. Another operationalizes "employment" to mean that a person is working 8 hours a day 5 days a week. Another operationalizes "employment" to include people who are always on-call. Another operationalizes "employment" to mean anybody who checks the "yes" button to a survey the paper sent out with a single question, "Are you employed?"

Matthew then reviews the literature and publishes a piece saying that employment is very high because we have lots of correlates of high employment - people are very rich in the US, people are very busy in the day, people are spending time away from home, we have low inflation. Has Matthew provided any meaningful construal of "employment" whatsoever? Or has he just ignored everything the studies did by using an undifferentiated concept of "employment" and ranting about some things that people typically associate with high employment? I think he's done exactly that with respect to feeling pain, because he doesn't understand how to approach empirically driven enterprises, and it's also why I'm typically ranting in his comments that he should take many science courses while he's in college.


"Matthew doesn't explain what pain or consciousness concepts he is using in this post, so his ideas of them are simply undeveloped."

So the view is that if you don't explain what you mean, it is "undeveloped"?

By this standard, most of what you said is undeveloped. Your comment used words like "nonsense", "fundamentally confused", "concept", etc. I didn't see you provide any explanation of what "fundamental" means, for example.

Just have reasonable standards. Einstein's theory of relativity makes various truth claims. We wouldn't call Einstein's relativity "undeveloped" because it doesn't specify a theory of truth.


>So the view is that if you don't explain what you mean, it is "undeveloped"?

The view is that you need to translate ordinary words into well-operationalized concepts when you're conducting empirical investigations. It doesn't make sense to study "just employment" or "just computers" or "just groceries" because these are polysemous ordinary-language words that mean different things in different cultures and to different people, are used differently against a background of assumptions, etc.

>By this standard, most of what you said is undeveloped. Your comment used words like "nonsense", "fundamentally confused", "concept", etc. I didn't see you provide any explanation of what "fundamental" means, for example.

Sure, which parts do you think need clarification?

>Just have reasonable standards.

I don't think Matthew's approach is reasonable, he is clueless about how to approach many of the topics he writes about, and fails to demonstrate epistemic virtues when it comes to understanding what it is that scientific researchers do.

Maybe a more obvious example: I want to investigate racism on the Internet, and my progressive sensibilities tell me that anytime somebody calls an African American "black," that is racist. Whenever there is an underrepresentation of African Americans in a discussion group compared to their overall makeup of the US, that is racist. Whenever somebody nonblack speaks in African American Vernacular English, that is racist.

Clearly, this research is going to come across as fucking stupid to many people interested in the broad topic, "Racism on the Internet," because it operationalizes the concept "racism" in ways many people are not interested in. There is a divergence of understanding, motivation, and purpose between my racism research and other people's interest in racism research. A surface level analysis of "racism proxies" would obscure the motivations and purpose of my research, and includes what would be considered irrelevant or misleading or false statements about racism on the Internet if the people exposed to the surface level analysis found out what my motivations and goals in conducting the research were. Matthew's analysis likewise does not make it any more obvious whether or not insects feel pain because he doesn't operationalize the concept and instead lists random findings from people who are not all researching some sort of platonic universal pain concept, but instead working in independent research projects with different goals in mind.

>Einstein's theory of relativity makes various truth claims. We wouldn't call Einstein's relativity "undeveloped" because it doesn't specify a theory of truth.

I don't know what you mean by this, but my default assumption is that philosophers are going to project their own values onto what Einstein's relativity theory is, what it does, how determinate in meaning it is, how universally understood it is, how many versions of it there are, and make many other problematic assumptions about it that absolutely should be challenged in a philosophical context but would probably fall by the wayside in any sort of practical context.


Perhaps feeling pain is one thing, being conscious, or being conscious of oneself, or being conscious of life and death and God is a different thing, and must be defined.


This is a very interesting post. I agree that the likelihood of several species, including insects and crustaceans, being sentient and capable of valenced experience appears to increase the more we investigate and discover their behavior. While there are plausible counter-arguments against this view (some of which have also been described in these comments here), these might just end up being vestiges of an effort to cling to a hypothesis that is soon going out of favor.

Nonetheless, I am much less convinced about the pain-intensity argument you have put forth. Even if one accepts your line of argument that pain probably plays a more important role in a less intelligent species, or that the neural correlates of pain represent a far greater fraction of all neural processing in an animal with a lower overall neuron count, the subjective experience of pain depends on the nature of the phenomenal consciousness that emerges in the species. And while it is true that naive assumptions about how sophisticated consciousness is based on absolute neuron count are probably incorrect, it is also unlikely that overall neural complexity is *entirely* irrelevant for consciousness considerations either. We just don't know how a single unified notion of consciousness emerges (well, while we are at it, why assume it is even unified in a different species?), and the characteristics of that consciousness may well depend on the underlying complexity of the neuronal wiring structure, firing patterns, and degrees of freedom.


Thanks for the post! This is very valuable.

I find the argument convincing: why would evolution not include something like pain in autonomous animals, and why would they show such similar behavior when in pain if they were not conscious?


Calling @Anatoly Karlin.


Why do your arguments preclude oysters and plants, btw? Your inductive generalization does increase the probability that they too will one day come under these categories. It doesn't seem to me that locomotion is necessarily linked to pain-reception. In fact, historically Jains have philosophically argued this about organisms, and that we have an inherent bias for animal-like pain.


They don't have brains and the things that I describe here as specific evidence don't apply to them. When confronted with an inductive trend of underestimating X, you don't just immediately assume everything has X.


I mean, then the question boils down to whether pain is phenomenologically necessitated by brains. Which, by past induction, should reduce your confidence about what kinds of physical structures are relevant to consciousness and sentience. Now, that reduction is probably very minimal, because it requires a larger paradigmatic shift - which I think is the better way of phrasing it - since saying an insect is conscious still involves neuronal activity, while a plant would require a fundamentally different physical theory.

I would think the problem is a tad bigger for non-physicalists, especially those who already believe in immaterial minds and theism. Like I said, Jainism has a viable model of universal consciousness that also includes non-animal sentience.
