6 Comments

Some quick thoughts on each section:

(1) Virtue, or quality of will, seems the obvious determinant of desert. Those who wish well for others are virtuous and deserve positive well-being. Those with overall malicious desires may be overall vicious, and thereby deserve at least some suffering to be returned upon themselves. The agent's personal moral beliefs are irrelevant to their quality of will: it's moral motivation "de re" rather than "de dicto" that matters.

(It's an interesting question to what extent *acting* upon ill-will, in ways that actually cause harm, is essential to desert, or at least partly affects it.)

(2) McMahan's Time-Relative Interests Account could be repurposed here. Perhaps you indirectly "deserve" to suffer just to the extent that you are directly psychologically connected (esp. via similar values) to a timeslice that directly deserves to suffer.

(3) Have you read Strawson's 'Freedom and Resentment'? Desert judgments stem from adopting the "participant stance", whereas thinking about one's character changing as the result of a virus outside one's control seems to push one towards adopting the "objective stance" towards one's future self. It then seems weird to want that person to suffer. But the deeper question here is which stance yields more accurate judgments about desert. If it's the participant stance, then prompts that push us towards the objective stance are actually biasing us in a weird way.

(4) I don't think there's a set point of how much well-being people deserve. That seems a weird view. A better view might take desert to instead apply a moral discount to the impartial value of one's well-being. Only in extreme cases will the discount exceed 100% and turn negative.

author

(1) But then you get the result that true believers can never deserve to suffer. Additionally, on this account an anti-realist who lacks moral motivation (finding moral facts strange, and thinking them unintelligible and irrelevant even if they exist) would deserve bad things.

(2) I have a paper in progress in which I argue strenuously against this view.

(3) No, I'll check it out.

(4) Yeah, I agree. My sense is that the desert literature generally disagrees, but this view allows us to resolve these issues. For example, the geometry of desert doesn't even entertain the view you describe.


I think you've misunderstood my proposal re: 1. As I said, your moral beliefs are strictly irrelevant to your quality of will on standard (de re) accounts. "De re" moral motivation means being motivated by the things (e.g. others' welfare) that are *actually* moral.

So a true-believing Nazi has ill will (or bad desires) towards Jewish people, for example, even though they mistakenly believe their antisemitism to be morally good. And the anti-realists I know tend to be just as virtuous as the realists, despite not really believing in virtue. Beliefs simply aren't relevant to how good your desires are.

author

Oh oops. So would a true-believing deontologist who thus doesn't care adequately about the welfare of others deserve worse than a consequentialist?


Someone who lacks beneficence would deserve less than the beneficent, at least. But I could see a "true-believing deontologist" who is fully beneficent, and just has additional commitments about moral constraints on top of that. I don't think those additional commitments, despite being mistaken, make the agent "ill-willed" in the relevant sense. It's an interesting question which moral mistakes have this further feature, and why.

(I think it's probably tied to our conception of respect and disrespect. One of my more controversial views is that I think that person-affecting views and negative utilitarianism -- views which deny that life can have positive intrinsic value -- are disrespectful in this way, and hence vicious/blameworthy in a similar kind of way to being racist, speciesist, etc. Whereas, e.g., prioritarians and Rossian pluralist deontologists strike me as merely mistaken, not disrespectful views.)

author

It seems the distinction is that believing extra moral stuff on top of the correct things doesn't make you vicious. But that seems to lead to a few odd results.

1) It would imply that, if I'm right that sadistic pleasure is good, then not thinking it's good is vicious. That's odd.

2) There doesn't seem to be a sharp divide between believing extra moral things and believing too few moral things. For example, suppose one adopts a person-affecting view because one believes in an extra fact: that the pleasure of existing people is hundreds of times more important than that of merely possible people. Or suppose someone agrees with all the moral facts about how to treat animals but thinks there are additional facts making humans thousands of times more important than you take them to be; the person who uses that to justify mistreating animals would seem as deserving of an ill fate as the person who thinks animals don't matter.

3) I don't have the intuition that people with person-affecting views are morally worse than those of us who have the correct views about population ethics.
