> It seems obviously better to take both actions than to take neither.
Not a judgment shared by many people. And not a course of action taken by many people. It seems that many people's moralities do output "it is better to let a person die than to help them and then beat them up".
Maybe some people think the actions together would be less indicative of virtue than inaction, but it's clearly better -- as everyone's better off -- if both actions occur than if neither occurs.
That assumes consequentialist morality (which is not default, although I do believe it's the one people should use) and (probably as a corollary but worth mentioning separately) that action and inaction are weighed equally. Of course "obligation vs. supererogation" is not consequentialist, but being consequentialist is a battle in itself (separate from agreeing which consequences matter - for instance, I have cringed every time I see a grown-for-food animal described as "sentient" by you, so about four times by now).
Sentient means having subjective experience. The purpose of an animal being brought into existence has nothing to do with whether or not it is sentient. No part of my argument assumes consequentialist morality.
"It's clearly better - as everyone's better off" is inherently consequentialist. It is simply not true that every morality system prefers everyone to be better off.
The word "sentient" has several meanings, as listed by Wiktionary, the first one can literally be applied to an amoeba, the second is underdefined because we have no idea how to measure whether perception is "conscious" in non-humans, so the only one I can read out is "sentient=sapient". The purpose does not intensionally (sic, with s, not with t) predict this, but so it happens that none of the animals commonly used for food (out of my mind: cows, pigs, chicken, sheep, and rabbits) are sapient. (And yes, it would mean that a being that belongs to Homo sapiens species but has most of its brain destroyed in an accident is on the same level as cows rather than its genetic brethren.) Of course, I could try to swap your definition instead, but obviously cringe at "sentient cow" is System-1 response that will happen before System-2 reminds "Bentham doesn't mean by sentient what you mean by sentient".
"Both of these reasons are incredibly persuasive and should convince any rational reader. Though some of my readers may be irrational"
I'm not sure this is How to Win Friends and Influence People. :-)
Twas a joke of course. None of my readers are actually irrational -- with one notable exception that we won't go into :).
Okay, well, I think you're taking the rationality comment a bit too seriously -- it was obviously in jest. I'm not sure how EA is objectionably parasitic -- it's saved loads of people's lives and improved the conditions for lots of animals. I'd support making meat eating illegal because I think -- and have argued elsewhere -- that factory farming is the worst thing ever. I don't think I treat people as mere resources nor am I a sociopath.
> There is no precisely delineated good people’s club, that has its lines at obligations.
Sure there is. It's the club of people who aren't in jail.
Obligations are social constructs, like contracts and promises. Nobody thinks that contracts and promises are part of the fabric of the universe, so strong realism about them is false. But they are disregarded at your peril -- they are real enough to get you into trouble if you flout them.
Societies need bright lines about what people must do and must refrain from doing for a number of reasons. One is that co-ordination is enhanced, because everyone is following a shared set of rules, not just their own judgement. Another is that it is unjust to punish people arbitrarily: if people are going to be punished at all, they need to know when they cross the line. And the fear of punishment is itself a useful motivation. If you leave it up to an individual to judge how much tax they pay, you don't gather much tax.
Obviously, where obligation is well-defined, so is supererogation.
The argument I've made is compatible with anti-realism.
How? If you want to argue that goodness is a continuous variable, not a step function, realism would be very useful. Absent realism, goodness is a social construction, or nothing. But you don't claim that it is nothing, and you aren't contesting how it's constructed.
Fixed! I'll check out Moral Reasons sometime soon.