One problem with sentientism as you've defined it is that it's in principle impossible to figure out which beings are sentient besides yourself. Sentient beings are supposed to have a phenomenal experience that's depsychologized - no amount of studying their brains, reactions, dispositions, etc. is ever going to produce a "Eureka!" moment where you've discovered that they are in fact sentient, because there's no conceptual link between phenomenal states and physical states. This means that epistemically, some version of phenomenal solipsism ends up being most likely to be true, since there's no way in principle to sort out which putatively sentient beings are zombies and which are truly sentient (i.e., have phenomenal consciousness).
Also, making an inductive or inference-to-the-best-explanation appeal (that creatures with physical states like yours have phenomenal consciousness because you have phenomenal consciousness) would just be falling into psychologizing bias: your psychologically identical zombie twin would have no phenomenal states, so there's in principle no similarity you can point to between yourself and other physically similar beings that ensures you both have phenomenal consciousness (besides phenomenal consciousness itself, which accepting the possibility of p-zombies rules out as having any knowable physical supervenience base).
Fwiw I think moral status should be granted to individuals on a behaviorist and pragmatic basis - if a being can cry, scream, fight back, produce reports of being in pain, or produce any other behavior that we find morally important, then they have moral status. It's also the morally safer attitude to take in case the phenomenal-consciousness-first approach narrows the moral domain down to one (yourself). (And even if it doesn't, I believe you have a prior commitment to consciousness being non-vague, meaning it "turns on" exactly at some point or another. That sharp cutoff would obviously be a major concern for the phenomenal-consciousness-first approach - when exactly does consciousness turn on, and how can you be sure it doesn't happen before the point you've identified?)
If any of these objections go through, the phenomenal-consciousness-first approach is going to end up falling back on something like the above behaviorist principle. Also, unless you think it's fine to be killed while you're sleeping (and thus having no phenomenal experience), you already make pragmatic appeals to other principles that constitute a being's moral worth.
It doesn't strike me as crazy to think that one relevant notion of welfare attaches to things that have preferences, grounded out as tendencies to choose one thing over another. Under this sort of definition, it seems like non-sentient things could have welfare.
I take choices to be conscious processes where you decide between two things. I don't know how non-conscious things could make choices.
You could rework the definition of "choices" as just "processes where a system decides between two things", and have the relevant notion of "decide" be computational: as an intuition pump, you could see a computer program that plays chess by deducing info about various possible moves and picking the move with the highest estimated value as "deciding" what move to play, despite not obviously being conscious.
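To make the intuition pump concrete, here's a minimal sketch of purely computational "deciding" - the candidate moves and evaluation numbers are made-up placeholders, not a real chess engine:

```python
# Toy illustration of computational "deciding": score each candidate
# option, then select whichever scores highest. No experience involved,
# just evaluation and comparison.

def estimated_value(move: str) -> float:
    """Made-up stand-in for a chess engine's position evaluation."""
    toy_scores = {"e4": 0.3, "d4": 0.3, "Nf3": 0.2, "a3": -0.1}
    return toy_scores.get(move, 0.0)

def decide(options: list[str]) -> str:
    """'Decide' among options by maximizing estimated value."""
    return max(options, key=estimated_value)

print(decide(["e4", "d4", "Nf3", "a3"]))  # prints e4, the first top-scoring move
```

Nothing in that selection step appeals to consciousness; it's just evaluation plus argmax, which is the sense of "deciding" at issue.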
Matt writes "if a being is in unimaginable agony, it seems that it’s pain is bad..."
This is not universally true. Some small-scale societies tortured prisoners of war; others practiced cannibalism. Members of such societies would not intuit that the suffering of others is necessarily wrong. Similar observations can be made wrt slavery in our own civilization.
What is intuitively right or wrong is culturally specified and can change (evolve) with time. This makes intuition a poor choice as a starting point for moral arguments.
https://truewestmagazine.com/indian-tribes-torture/
Is it intuitive that sentience is required for moral value? It seems to me that dumping large quantities of toxic waste on a planet full of plants would plausibly be immoral, unless there were strong justifying reasons (e.g. if we don't do it then the waste will harm a large quantity of sentient beings). Perhaps one might think that sentient beings matter MORE than non-sentient ones, but it strikes me as intuitive that non-sentient living things have some degree of moral worth.
Also, it's false that "in order to avoid sentientism, one has to have an utterly bizarre view of welfare." Perfectionist accounts don't imply sentientism, and they're both much older and much better than any of the three you listed :) There's also a fairly robust literature defending perfectionism; it's a minority view, but it's not like nobody prominent endorses it.
Also, about the "what if we found out babies and disabled people weren't technically human" example, it seems like one could say that given how human-like they are (such that we've been fooled into thinking they were human up until now), it's probable that whatever kind they belong to has the same moral properties as Homo sapiens. Thus, we should at least err on the side of caution and treat them as if they have full moral worth.
Of course, I know you're arguing against those who try to limit the moral sphere to human beings (which is obviously wrong), whereas I'm making the opposite point (that it ought to be expanded further). I just thought it was worth commenting.
I don't have the requisite intuition about the plant case, and I think it's indefensible--see here https://benthams.substack.com/p/the-environmental-ethicists-are-worse
The babies and disabled humans example shows that we don't care about just being human in a biological sense.
I think perfectionism is a pretty fringe and rather implausible view of welfare.
were you inspired by vegan bivalve discourse?
No.
Sentientism isn't inspired by the bivalve conversation. The bivalve conversation is inspired by sentientism, because sentientism is a reasonable foundation for veganism, whereas -- shall we say kingdomism? -- is not.
i agree but i meant the inspiration for the timing of the post
Aha. My bad.