Spot on. Reminds me of the penultimate chapter of Mere Christianity, “Nice People or New Men?”
Thanks for the reminder!
Ah, I was going to say this reminds me of C.S. Lewis, and I couldn't remember where. But that's exactly it!
It's also called ... the attribution bias!
> “I’m not so different from the people who ignore shrimp and insect welfare. We both have a bad habit of ignoring ethical issues that don’t resonate with us intuitively. I simply have been blessed with slightly less defective moral intuitions than them.”
What’s wrong with ignoring ethical issues that aren’t intuitive, that is, with rejecting a view because of its counterintuitive implications? You might say such intuition-based belief is acceptable at first glance and then try to provide defeaters for the intuition-based beliefs that turn out to be false. Still, it seems we evaluate defeat on the basis of further intuitions (e.g., epistemic intuitions like “if P is highly unlikely in light of the evidence, then your justification for believing that P is undermined”).
You say that you have been blessed with slightly less defective moral intuitions than other people. Presumably, people who care little about shrimp and insect welfare would disagree. To them, their intuitions seem right and yours seem mistaken. What can we ultimately do other than trust the way things seem to ourselves—our own intuitions?
And examples and counterexamples are legion.
I grew up in the American South, and I can promise you that some people (who were old when I noticed these attitudes) honestly believed that non-whites were inferior, so it was their intuition that preferring whites was not immoral.
It proves too much to just say, “well, they were taught that—their intuitions were shaped from childhood,” because that only shows we can be inculcated into very mistaken intuitions. We have to make a rational case for the intuitions we hold before we judge others as less morally in tune.
It seems like you're working with a background presupposition about the relationship between virtue and luck: if you're lucky not to have certain feelings, then that can't contribute to, or partially constitute, your virtue. (Not sure you'd put it quite like that.) But it seems to me that's a natural place to object, especially for those who have broadly compatibilist views about free will and moral responsibility.
I think the idea here is compatible (pun intended!) with compatibilist views. For I take it the idea here is that one only gets virtue points by doing *hard* and good things, not just good things. But what is hard for one person may be easy for another. I don’t see why responsibility being about sourcehood (or any other compatibilist idea) would necessarily block this sort of view.
I guess I'd be more inclined to say that strength of will is just one among many virtues, and we shouldn't say that nothing can manifest virtue unless it's hard to do.
Yeah, the “only” might be a tad strong, but for the kinds of virtues discussed here (not getting annoyed, not getting addicted), it does seem like the virtue largely consists in resisting some desire for the sake of the good.
Solzhenitsyn did have many profound insights...
I’m not sure about this: “one only exercises genuine virtue when doing the right thing requires going against some other desire one has”. Let’s say that I realise I tend to be selfish but I work hard over many years to behave more selflessly. In the end, selfless behaviour becomes habitual. Am I not then virtuous although, in the moment, I have no desire to behave otherwise?
Your argument that you shouldn’t earn virtue points if your behaviour is just the result of a fortunate disposition proves too much. Suppose you and a person who gets irritated very easily experience the same amount of irritation, but that person wasn’t lucky enough to be born with your level of impulse control. By your logic, you shouldn’t earn any virtue points either. Actually, it’s even worse than that, because at the end of the day all human behaviour is determined by a combination of biology and environment, and you certainly didn’t choose either of those; and even if you did, that was only because you were lucky enough to be born with a disposition to make moral choices. By the time you have any influence over your environment or your biology, they have already shaped your personality in its entirety. And even if they didn’t shape your personality, that just means it was a product of random noise, which doesn’t say anything about your virtue either.

More fundamentally, I’m just not sure why having a disposition to do bad things makes you better than somebody who doesn’t have that disposition, yet your argument suggests that a person who doesn’t do bad things because he has no desire to do them has less virtue than someone who resists such a desire.

Also, if you shared the dispositions of the guy who gets irritated easily, including the amount of irritation he experiences, it would be trivially true that you would behave like him, simply because his behaviour is a combination of his situation and his dispositions. Your thought experiment just amounts to saying that his decision algorithm produces the result that it in fact produces.

The basic problem with your argument is this: either you count certain lucky dispositions as relevant but not others, in which case you have to explain that arbitrary distinction, or every lucky disposition is irrelevant to virtue points. After all, if you changed all the relevant psychological facts instead of only the amount of irritation experienced, the claim would amount to saying that if you were like the person who gets irritated easily, you would behave like him, which is pretty much true by definition. The same goes for arguments from biology or environment: if you had the same biology and environment as a bad person, you would obviously behave like him, because that is effectively asking whether you would behave like him if your personality were replaced by his and you were put in his situation. Such trivial statements can’t be used as a basis for radical arguments that commonsensically virtuous behaviour is actually not virtuous.
“I’m not more open minded than they are—I just have more substantively reasonable intuitions.”
I’ve often thought the same thing. As I dove into rationalism, Scott Alexander, effective altruism, and the like, sometimes I’d be intellectually convinced of things without changing my actions (stuff like “you should donate to AI safety,” “bacteria welfare, even a .001% chance,” whatever). I’ve often repeated in my head that I’m not a better person, I just have more correct beliefs! Or perhaps I should say less wrong beliefs.
This leads me lightly into the epistemic wilderness, however. I’ve really internalized how much people think their own beliefs are correct, and it unsettles me when intelligent people disagree on important matters (you and God’s existence is a good one).
I’m releasing a post about this on Monday, but I wonder whether effective altruists should get fewer virtue points for donating to charity because they believe it’s so important, compared with someone who sees charity as superfluous but wants to donate anyway.
"I’m not so different from the people who ignore shrimp and insect welfare. We both have a bad habit of ignoring ethical issues that don’t resonate with us intuitively."
You can go one step further and realize that it isn’t a bad habit to ignore moral issues that don’t resonate.
"But one only exercises genuine virtue when doing the right thing requires going against some other desire one has."
Counterexample: Jesus was perfectly virtuous at times when he seemed not to experience contrary desires.
This coincides with Kant's position in the *Groundwork of the Metaphysics of Morals* that moral action is only that which is done from duty, i.e., for the sake of conformity with maxims of the universal legislative form. In Kant's words:
"I also set aside those actions which really conform to duty, but to which men have no direct inclination, performing them because they are impelled thereto by some other inclination. For in this case we can readily distinguish whether the action which agrees with duty is done from duty, or from a selfish view. It is much harder to make this distinction when the action accords with duty and the subject has besides a direct inclination to it. For example, it is always a matter of duty that a dealer should not over charge an inexperienced purchaser; and wherever there is much commerce the prudent tradesman does not overcharge, but keeps a fixed price for everyone, so that a child buys of him as well as any other. Men are thus honestly served; but this is not enough to make us believe that the tradesman has so acted from duty and from principles of honesty: his own advantage required it; it is out of the question in this case to suppose that he might besides have a direct inclination in favour of the buyers, so that, as it were, from love he should give no advantage to one over another. Accordingly the action was done neither from duty nor from direct inclination, but merely with a selfish view.
"On the other hand, it is a duty to maintain one's life; and, in addition, everyone has also a direct inclination to do so. But on this account the often anxious care which most men take for it has no intrinsic worth, and their maxim has no moral import. They preserve their life as duty requires, no doubt, but not because duty requires. On the other hand, if adversity and hopeless sorrow have completely taken away the relish for life; if the unfortunate one, strong in mind, indignant at his fate rather than desponding or dejected, wishes for death, and yet preserves his life without loving it – not from inclination or fear, but from duty – then his maxim has a moral worth."
Priggish?
What've they done to you at Oxford? Oy vey...
I've just been reading too much C.S. Lewis.
I simply think most behaviors aren't virtues or vices, and that the only real vices are the ones that truly harm others: rape, murder, physical abuse, robbery, etc. Calling someone evil for being rude is absurd.
All true. Although the title and piece would’ve been even more impactful had you placed emphasis on the first person perspective rather than the second.
It doesn't count as easygoing unless you can keep it up through marriage and children.
> Similarly, if I want to judge people for ignoring the plight of the shrimp, I shouldn’t compare their attitudes to how I feel about shrimp. Intuitively, I feel the pull of shrimp welfare’s importance. Instead, I should compare how I react to weirder stuff—e.g., the welfare of the weird ocean organisms like Rotifers that almost definitely aren’t conscious but are so numerous that even a tiny chance they’re conscious is significant.
I wonder about levels of generality here. You're extending the magnitude of the ethical failing behind indifference to shrimp welfare ("the creatures are weird, so I don't care"): you point to an even weirder creature and note that you fall prey to the same failing.
But many of the other examples you give seem more qualitative. For example, gambling addiction versus some other kind of addiction. It's not that one lacks virtue by developing the affliction of a gambling addiction when they encounter a certain magnitude of gambling; they lose virtue by developing an addiction (possibly just as bad) to something besides gambling that enamours them just as much.
For this reason, I felt like the example you gave seemed pretty weak. "I should be more virtuous by caring even more about the expected value of bacteria suffering!" is something you're absolutely predisposed to doing, way, way more than the ordinary person is predisposed to caring about the EV of animal suffering.
The appropriate analogy seems to be you caring about something completely orthogonal to that. Like deontology. But that just gets into the substantive debate of whether one should care about deontological constraints at all.
Maybe your point is confined to people who know that something is wrong in expectation and still do it?