Some quick thoughts on each section:

(1) Virtue, or quality of will, seems the obvious determinant of desert. Those who wish well for others are virtuous and deserve positive well-being. Those with overall malicious desires may be overall vicious, and thereby deserve at least some suffering to be returned upon themselves. The agent's personal moral beliefs are irrelevant to their quality of will: what matters is moral motivation "de re" rather than "de dicto".

(It's an interesting question to what extent *acting* upon ill-will, in ways that actually cause harm, is essential to desert, or at least partly affects it.)

(2) McMahan's Time-Relative Interests Account could be repurposed to accommodate this. Perhaps you indirectly "deserve" to suffer just to the extent that you are directly psychologically connected (esp. via shared values) to a timeslice that directly deserves to suffer.

(3) Have you read Strawson's 'Freedom and Resentment'? Desert judgments stem from adopting the "participant stance", whereas thinking about one's character changing as the result of a virus outside one's control seems to push one towards adopting the "objective stance" towards one's future self. It then seems weird to want that person to suffer. But the deeper question here is which stance yields more accurate judgments about desert. If it's the participant stance, then prompts that push us towards the objective stance are actually biasing in a weird way.

(4) I don't think there's a set point for how much well-being people deserve. That seems a weird view. A better view might take desert to instead apply a moral discount to the impartial value of one's well-being. Only in extreme cases would the discount exceed 100% and turn negative.
