I am always quite skeptical of these sorts of moral risk arguments, as they seem to have completely untenable consequences. Surely you are not 100% certain that a strong form of deontology is correct--one where duties have lexical priority over utility. In that case, on your approach, you should just adopt that view in your practical reasoning, since any non-zero chance of such a view automatically trumps any considerations of utility. You say that this isn't about taking seriously vanishingly low probabilities, but I just don't see why we shouldn't. What reason is there for taking seriously pretty low probabilities, but not vanishingly low ones? What is the cutoff point at which we should no longer take things seriously? On the view you propose, it seems that not acting according to the lexical-priority deontological view is just choosing to be irrational and ignoring the view without good grounds.
I am not completely sure what model to use instead, but I tend towards something like this: We reason in two steps. First we reason on the normative-ethical level, and figure out which view is most plausible. When we have decided on a view, we go to the practical-ethical level and figure out what the chosen theory tells us to do.
To use a less extreme example: suppose you are in the footbridge variation of the trolley problem. You have an 80% credence in utilitarianism and a 20% credence in some sort of deontology on which pushing the man is seriously wrong (worse than letting 150 people die, or something like that). It looks like you should still push the man, given your commitments, since you are quite certain of utilitarianism, even if pushing would be very wrong conditional on deontology. But on your view, you should almost certainly not push the man. This seems to hold for almost all ethical dilemmas: you should act as a deontologist, even if you are quite certain that utilitarianism is true.
If we instead take my view, we first figure out our normative-ethical credences, which are 80%/20%. From here, we figure out what each view tells us to do. Let's just say that given utilitarianism, you are 90% sure you should push, and 10% you shouldn't. And on deontology, you are 80% sure you shouldn't and 20% you should (these are just made up figures). We then calculate your actual credence that you should push like this:
P(Utilitarianism)*P(Push|Utilitarianism)+P(Deontology)*P(Push|Deontology)
Given the numbers I made up, this works out to 0.8*0.9 + 0.2*0.2 = 0.76, so you should be 76% sure that you should push. This seems much more reasonable, I think.
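For concreteness, here is a minimal sketch of that calculation in Python, using the made-up credences above; the numbers are purely illustrative, and the structure is just the two-step one described in the comment:

```python
# Made-up credences from the example above (not real estimates).
credence_in_theory = {"utilitarianism": 0.8, "deontology": 0.2}
credence_push_given_theory = {"utilitarianism": 0.9, "deontology": 0.2}

# P(push) = sum over theories of P(theory) * P(push | theory)
p_push = sum(
    credence_in_theory[t] * credence_push_given_theory[t]
    for t in credence_in_theory
)

print(round(p_push, 2))  # 0.76
```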
But that model won't tell us how wrong the actions are. I agree that there's a low probability, if your credence in deontology is low, that the action is wrong, but it still may be very wrong in expectation.
For one, I just don't even think that I can make sense of the notion of a total theory-neutral degree of wrongness of an action. It looks like wrongness only makes sense from within a theory. To be able to judge the total wrongness of an action, you would have to be able to unify all the theories on a single scale, but I just don't see how that makes any sense. How many times can you prick someone with a needle on utilitarianism before it is as bad as stealing someone's car on rights-deontology? I don't really think it even makes sense to attempt to give an answer. You can of course have a specific normative-ethical view where rights can be outweighed by utility, but in that case you are still working within a single theory rather than comparing across theories.
In fact, it doesn't even look like rights have the same value across theories. Some views allow that rights can be outweighed by welfare, while others don't. Does this mean that rights are worth less on the former view, or that welfare is worth more? It just doesn't look like there is an actual answer there. But you would have to give an answer in order to determine the total inter-theoretic rightness of an action.
Secondly, I guess I am not completely with you on the probability of the rightness of an action being separable from the expected rightness in this way. On your view, there will often be cases where the expected rightness of an action is infinitely/unmeasurably low, and yet you still have the strongest reason to perform that action. That seems pretty absurd. Instead it looks like we should just look at the probability of your action being right and call it a day.
In any case, even if we grant that there is a distinction between rightness and expected rightness, that just means that this argument fails to be an argument for ethical veganism. After all, it seems we should choose the action that has the greatest probability of being right, rather than the one with the greatest expected rightness.
Sorry, but you still wouldn't be worse than a serial killer, because those serial killers also ate meat. Say you eat x amount of meat per year and kill 0 humans. Jeffrey Dahmer also ate roughly x amount of meat per year, and killed maybe 2 humans per year. x + 2 > x for all values of x, and I don't think most serial killers had abnormally low meat consumption.
I think the problem with this kind of reasoning is that it kills the supererogatory.
Should I have an abortion? Well, my spreadsheet says there's a nontrivial chance that having one would be very immoral. Better not have an abortion.
Should I give all my money to charity? Well, some views say this is absolutely required, so I guess I should, just in case.
Should I give away my kidney? Utilitarians think I have to, and deontologists don't care either way, so I’d better do it just in case the utilitarians are right.
If you do this for every issue you can't ever say an action is supererogatory since you're always defaulting to the most demanding moral theory.
There's also the issue of how to weight "weird" theories like Kantianism, virtue ethics, and particularism. Where do they fit in? If you're trying to give an account of moral risk, you can't just ignore every view that isn't a variety of utilitarianism.
I used to make these risk-type arguments. Now I'm unsure if they really work.
It isn't just flesh (meat, including fish) that's morally problematic; it's all animal products: dairy, eggs, fur, leather, wool, silk. The best thing you can do for animals is to go vegan.
I find this to be a category error. Ethics is like mathematics, not like physics: it is deduced from some principles, and those principles are chosen by the subject.
Moral reasoning is correct or incorrect, but moral principles….
If you don’t include animals in your moral circle, what is the risk of being wrong?
The only way you can be morally “wrong” is if God exists and his moral reference frame is truly absolute (but why would you accept that moral frame if no punishment is attached to going against it?).
I will clarify this: I am a utilitarian because, in principle, I want all conscious beings to be happy, not because I believe they objectively deserve to be happy or something like that. I have a “social welfare function” that I want maximized. How can I be “right” or “wrong” about that?
Most sophisticated forms of moral antirealism have some take on moral error. Check out Simon Blackburn's writings for details. The basic picture is that even if you can't be "wrong" about ethics, you can still hold attitudes and do things you disapprove of. So, unless you don't care about your own standards, you should care about moral risk.
The moral calculus is subject to many types of error; for example, for consequentialists, every factual error that affects consequences implies risk in the moral calculus.
But error about the moral axioms still looks to me similar to anthropic reasoning about the fundamental physical parameters.
Morality comes from “social preferences”, and preferences are not subject to error.
Why does this not cause the boundary to update in the direction of mentally enfeebled humans having less moral value, and being less deserving of protection?
Why should it be that way?
I wanted to hear his take on the question, because most of what I have here is either prescriptive and based around bright lines or is based around potential and practicality. The bright line gets eroded when you talk about consciousness/pain culpability as a relative state, and the practicality/potential line ends at one end on an unknown ideal and at Aktion T4 on the other.
“These people’s arguments are not crazy.”
Shouldn’t you give me the argument and let me decide for myself, rather than just stating the conclusion for me?
“Animals have roughly the cognitive capacities of a baby or severely mentally enfeebled person, so if species isn’t relevant, it’s [killing animals] probably roughly as bad [as killing babies].”
Is this the argument? I agree it isn’t crazy, but is it persuasive? There are people who argue that infanticide is permissible, so even if we buy these premises, the conclusion doesn’t follow without some help from premises left unstated.
What does the argument conclude about eating vat-grown meat?
"A helpful way to see how bad something is—potentially subject to some rare exceptions—is to multiply the probability that you’re doing something bad by how bad it would be. For instance, if there’s a 50% chance that you kill two people, that’s as bad as killing one person."
This is wrong. The cost/value of most things isn't linear. A 50% chance of losing both legs isn't as bad as losing one leg for sure. A 0.1% chance of getting a billion dollars isn't equivalent to definitely getting a million. Regardless of whether killing animals scales linearly, I think you shouldn't act like linear utility scaling is normal.
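To illustrate the non-linearity point, here is a minimal sketch in Python; the log utility function is an assumed, purely illustrative choice, not a claim about the right way to value money:

```python
import math

def utility(dollars: float) -> float:
    # Concave (diminishing returns): each extra dollar matters less.
    return math.log(1 + dollars)

# Gamble: 0.1% chance of a billion dollars, otherwise nothing.
expected_value_gamble = 0.001 * 1_000_000_000            # 1,000,000 in expectation
expected_utility_gamble = 0.001 * utility(1_000_000_000)

# Sure thing: a million dollars for certain.
expected_value_sure = 1_000_000
expected_utility_sure = utility(1_000_000)

print(expected_value_gamble == expected_value_sure)     # True: same expected value
print(expected_utility_gamble < expected_utility_sure)  # True: the sure million is preferred
```

With any concave utility function the same pattern holds: equal expected dollar value, but the certain option has higher expected utility.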
(For the record I'm also opposed to eating meat on ethical grounds. Thanks for the insightful writing on the topic. 🙂)
FWIW, I'm not unequivocally sure that Peter Singer's or Michael Tooley's view on infanticide is actually wrong.
The idea that humans have rights is a fairly recent innovation, and it took a long time for it to take hold. The idea that animals have rights is a very recent innovation (<50 years) and is only in the early stages of adoption. Give it a century and I suspect it will have taken hold.
The argument is pretty compelling. When I first read Singer's Animal Liberation in the 1970s, I suspected he was right (that is, that it would come to pass that eating meat would be seen as evil, like slavery). The idea that eating meat was wrong certainly was not the norm in the seventies, and I did not think it would become so in my lifetime. 200 years later, sure, but not in the next sixty years or so. So I kept eating meat, and so far it hasn't become wrong, though it is clearly moving in that direction.
Do you personally think slavery is evil?
Of course, so does pretty much everyone else. But 250 years ago, not so much.
Why is it that you support something you expect to be seen as evil in the future?
I like meat and don't want to give it up for an ethical stance that I believe won't be relevant in my lifetime. I also could be wrong about people giving up meat in the future.
Do you understand why I might have the impression there's a contradiction here?
Is it that you base your moral choices only on how it will impact your status? Would you enslave a person if it benefited you without impacting your status?
If you were reading the diaries of a slaveholder 250 years ago who fully understood how immoral it was, expecting their actions to be seen as evil in the future, while deciding not to change their practices, how would you view this person?
“If you were reading the diaries of a slaveholder 250 years ago who fully understood how immoral it was, expecting their actions to be seen as evil in the future, while deciding not to change their practices, how would you view this person?”
The same way I view Jefferson and Washington, as the imperfect people they were.
Do you eat meat?
I am unfamiliar with this type of argumentation. Does this mean all animals? Like even bugs and small life we can't even see? I only ask because it seems to suggest that the world would be better off without humans, or at least with a human population at something like pre-industrial levels.
It was a long time ago, but I think Singer had mammals and other higher animals capable of suffering in mind.
And yes, there is a branch of this kind of thinking that proposes that humans ought to go extinct.
At some point, I would love to see how you can hold both the position that eating meat is murder and the position that abortion is not.
Well, I do eat meat, and have no problem with abortion.
I don't think they would be opposed to aborting animal fetuses either.
I just swatted a fly.
JEDD Mason eventually chose to walk.
Assuming that in the second case they're also killed painlessly, it's worse, because in the first one there's no suffering of other people, only of the mentally handicapped. They may be in a state without suffering, but they're isolated from any kind of important relationships. I suppose that, as an objective list utilitarian, Mathew should answer something like this.