Introduction
The humanist project has been a force for enormous good—progress in domains both scientific and ethical. Yet at its very heart sits the word that signifies its ethical error: human. Humanism is about promoting flourishing for humans, a project which, unfortunately, ignores the overwhelming majority of sentient beings on Earth.
Of course, some call themselves humanists and care about animals. With these people, my dispute is merely terminological. It would be as if one wanted to promote the welfare of all people, but described one’s view as whitism—a view originally defined as promoting the interests of white people. While I’d be broadly on board with such a person’s aims—if they truly cared about all people equally—I’d think they should probably change their slogan.
In place of humanism, many have proposed sentientism. Sentientism claims—like humanism—that we should use logic, reason, and evidence to advance welfare. However, it claims, unlike humanism, that we should advance the welfare of all sentient beings, not merely humans.
Sentientists often claim that humanism is objectionably arbitrary. If two beings have similar mental capacities, why in the world should it matter whether one is a member of Homo sapiens? In reply, people sometimes claim that sentientism is itself arbitrary—why should we care only about sentient beings? Here, I shall provide several arguments for sentientism and against the charge of arbitrariness.
It’s intuitive
Here’s one argument for sentientism—it is intuitive that only sentient beings matter. Most of us think that people matter but that philosophical zombies—beings physically identical to humans but lacking consciousness—do not. If they really have no conscious experience—no hopes, no dreams, no aspirations, no pain, no thoughts, no love—then it seems that nothing that happens to them really matters. If all is really dark inside the zombie, then nothing that happens to it can matter to it.
Of course, one might reply that it is intuitive that only humans matter. But this is clearly wrong—if a being is in unimaginable agony, its pain seems bad regardless of whether it is a member of Homo sapiens. If one found out that, despite being consciously identical to the rest of you, I was secretly a bulldog (though I suppose I haven’t done a great job covering it up), my suffering would not seem any less bad. If we found out that people from Italy, for example, were as smart and conscious as the rest of us but were not Homo sapiens, we would not declare their suffering irrelevant.
Of course, when confronted with this, speciesists either bite the bullet or adopt crazy criteria. For example, they will say that what matters is being the same species as beings that are mostly smart. None of these criteria are immune from counterexample—the most recent one implies that if we found out that some terminally ill babies were not technically Homo sapiens, their pain wouldn’t be bad, and that if we found out that mentally disabled people weren’t Homo sapiens, their suffering wouldn’t be bad either. But even if speciesists can gerrymander some contorted criterion to avoid counterexample—which, just to be clear, none of them have ever successfully done—it will not be intuitive. It does not intuitively seem that the necessary and sufficient conditions for mattering involve 31 steps and the intelligence of the other beings one can interbreed with.
So sentientism just seems right. Whenever one claims that some criterion grounds some moral property, it is always possible to declare it arbitrary. However, if the criterion seems like the type that can, in fact, justify that property, then it is not arbitrary. Just as it’s not arbitrary to claim that one should have to be good at boxing to be a professional boxer, a criterion that is actually able to justify some way of being treated is far from arbitrary.
Theories of welfare
Suppose one says that any way of determining which entities to care about is arbitrary. Presumably, then, they will think that the only thing that matters is whether an entity can be harmed or benefitted. If a plant can’t be harmed or benefitted by any action, then even if it is within our moral circle, we need not take the impact of our actions on plants into account—just as, if our moral circle includes all humans, we still only take into account the effects of our actions on those humans whom our actions affect.
Thus, sentientism is consistent with a maximally inclusive moral circle. As long as we think that only sentient beings can be harmed or benefitted, we can care about all possible entities—sentient and nonsentient—and nonetheless, we’ll conclude that the only ones that matter practically—the only ones we should take into account in our decisions—are the sentient ones.
Philosophers have explored the ways that one can be harmed and benefitted. There are a few primary theories, all of which imply sentientism.
Hedonism: happiness is the only thing that is good for anyone, and pain is the only thing that is bad for anyone. This implies sentientism because only sentient beings can experience happiness or pain.
Desire theory: getting what one wants is good for one, and having one’s desires frustrated is bad for one. This implies sentientism because only sentient beings have desires.
Objective list theory: there is a list of things that are good for people, including knowledge, friendship, happiness, desire fulfillment, achievement, and more. On most plausible accounts, these all require sentience.
So in order to avoid sentientism, one has to hold an utterly bizarre view of welfare—one according to which various mysterious things can be good or bad for beings that experience nothing.
The transformation test
One way of testing whether an entity matters is to imagine oneself being turned into that entity and to ask whether one would have reason to care about what happened after the transformation was complete. For example, if we want to know whether pigs matter, we imagine being slowly turned into a pig over the course of two years. The question is whether, once one was fully a pig, one would rationally care about being tortured.
Of course one would! If I knew that by 2025 I’d be a pig, I’d be very much against the pig version of me being tortured. But if I knew I’d turn into a plant, I wouldn’t care: once I was a plant, nothing that happened to me would matter to me—I’d have no experiences. If you think you wouldn’t care about being tortured as a pig, remember that there are debilitating mental conditions that leave people with capacities below those of a pig. If one would care about what happened to oneself after reaching that point, so too should one care about what would happen to oneself as a pig.
Conclusion
Thus, it is only conscious beings that matter. Claiming that only conscious beings matter is not arbitrary and is quite intuitive. It seems obvious that plants matter only if they’re conscious—likewise for animals. And this follows from every remotely plausible view of welfare.
Sidenote: sorry for the drought in posting; I just started a new blog—see here—am applying for various summer jobs, and I’m in finals week. Still, I’ll try to keep posting articles semi-regularly.
Comments
One problem with sentientism as you've defined it is that it's in principle impossible to figure out which beings are sentient besides yourself. Sentient beings are supposed to have a phenomenal experience that's depsychologized - no amount of studying their brain, reactions, dispositions, etc. is ever going to produce a "Eureka!" moment where you've discovered that they are in fact sentient, because there's no conceptual link between phenomenal states and physical states. This means that epistemically, some version of phenomenal solipsism ends up being most likely to be true, since there's no way in principle to sort the mere zombies among putatively sentient beings from the truly sentient ones (those with phenomenal consciousness).
Also, to make an inductive or inference-to-the-best-explanation appeal that creatures with physical states like yours have phenomenal consciousness because you have phenomenal consciousness would just be falling into psychologizing bias - your psychologically identical zombie twin would have no phenomenal states, so there's in principle no similarity you can point to between yourself and other physically similar beings that ensures you both have phenomenal consciousness (besides phenomenal consciousness itself, which accepting the possibility of p-zombies rules out as having any knowable physical supervenience base).
Fwiw I think moral status should be granted to individuals on a behaviorist and pragmatic basis - if a being can cry, scream, fight back, produce reports of being in pain, or produce any other behavior that we find morally important, then they have moral status. It's also the morally safer attitude to take in case the phenomenal-consciousness-first approach narrows the moral domain down to one. (And even if it doesn't, I believe you have a prior commitment to consciousness being non-vague, meaning it "turns on" exactly at some point or another. That sharp cutoff would obviously be a major concern for the phenomenal-consciousness-first approach - when does it happen, and how can you be sure it doesn't happen before the point you've identified?)
If any of these objections go through, the phenomenal-consciousness-first approach is going to end up falling back on something like the above behaviorist principle. Also, unless you think it's fine to be killed in your sleep (while you're having no phenomenal experience), you already make pragmatic appeals to other principles that constitute a being's moral worth.
It doesn't strike me as crazy to think that one relevant notion of welfare attaches to things that have preferences, grounded out as tendencies to choose one thing over another. Under this sort of definition, it seems like non-sentient things could have welfare.