Conservatives, like deontologists, should be moderates rather than absolutists. They should be open to the possibility that a *sufficiently* better outcome can justify replacement. They just don't think it should be as cheap/easy as utilitarianism implies. So I don't think that one-shot conservatives should be too bothered by the objection that replacement is still possible on their view. The moderate "spirit" of their view just requires that replacement is not justified by merely *marginal* improvements.
What is puzzling is not that replacement can be worth it when it generates a lot of utility. What's puzzling is that replacement can be worth it even if each year post-replacement is only slightly better than each year prior to replacement, so long as there are enough years. If you knew your son would live forever but could be replaced by someone whose life would be .00000000000001% better each year, and also live forever, it seems like believers in replacement would be hesitant to suggest that you should replace your son.
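To make the arithmetic behind the puzzle explicit, here is a minimal sketch -- the baseline yearly welfare and the horizons are made-up numbers; only the tiny percentage comes from the example:

```python
# Minimal sketch of the arithmetic behind the puzzle. The baseline per-year
# welfare (1.0 unit) and the horizons are arbitrary; the improvement factor
# is the hypothetical ".00000000000001% better each year" from the example.
improvement = 0.00000000000001 / 100  # fractional gain per year

def total_gain(years, baseline=1.0):
    """Total extra welfare the replacement accrues over a finite horizon."""
    return years * baseline * improvement

for years in (10**2, 10**9, 10**20):
    print(f"{years:.0e} years -> extra welfare {total_gain(years):.3e}")

# The per-year gain is fixed, so the total grows without bound as the horizon
# grows: over enough years it exceeds any finite threshold, which is why a
# simple additive view says the replacement is eventually "worth it".
```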
Another thing: your view also implies that if a person just got killed and replaced some arbitrarily large number of times over a very short time range, this would be arbitrarily bad. So for any world containing 10 billion people being horrifically tortured, there is some worse world containing just one person who is very happy, but where the causal history of the universe contained lots of people living for a very short time and then being replaced. That seems unintuitive.
> "If you knew your son would live forever but could be replaced by someone whose life would be .00000000000001% better each year, and also live forever, it seems..."
That's a puzzle about welfare numbers, not a puzzle about replacement. Most people don't intuitively accept that a tiny percentage improvement, over infinite (or sufficiently many) years, constitutes a massive (potentially infinite) improvement.
> "your view also implies that if a person just got killed and replaced some arbitrarily large number of times over a very short time range, this would be arbitrarily bad."
It's not clear that it does imply that. You're implicitly assuming a certain ("neutral") conception of value. But more relativistic (agent-relative, time-relative, world-relative, etc.) conceptions are possible. Conservatism about value is naturally understood as a kind of partiality towards our current world-mates, and so which outcomes you should prefer depends on your current world-state (since that changes which individuals you should care especially about). That leaves open how we should think about the prospect of replacing future individuals (who are not yet there for us to be attached to). So replacing one person twice is not necessarily equivalent to replacing two existing people (once each).
You're also assuming that replacement adds intrinsic badness. An alternative possibility is that it merely reduces the value of the replacement -- but not below zero. Plausibly, a world of only positive lives could not be bad overall, no matter how much death & replacement occurs. So this seems a better conception of how replacement affects value.
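To make the contrast concrete, here is a minimal sketch -- the penalty size and welfare numbers are invented purely for illustration:

```python
# Two ways replacement could affect value, per the contrast above. The
# penalty and the welfare figures are invented for illustration.

def with_intrinsic_badness(new_life_value, penalty=50.0):
    """Replacement adds a negative item to the ledger: totals can dip below zero."""
    return new_life_value - penalty

def with_discounted_replacement(new_life_value, penalty=50.0):
    """Replacement merely discounts the replacement's value, floored at zero."""
    return max(0.0, new_life_value - penalty)

# A positive but modest life (value 30) that comes about via replacement:
print(with_intrinsic_badness(30.0))        # -20.0: a world of such lives could end up bad overall
print(with_discounted_replacement(30.0))   #   0.0: never worse than nothing, however often repeated
```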
Finally, if individual welfare is capped or features diminishing marginal value, then replacement of those near the cap would presumably be overall good. (Indeed, I generally feel positively about the fact that there is generational turnover -- and not *only* for instrumental reasons, though that's certainly part of it.)
//That's a puzzle about welfare numbers, not a puzzle about replacement. Most people don't intuitively accept that a tiny percentage improvement, over infinite (or sufficiently many) years, constitutes a massive (potentially infinite) improvement.//
I'm not sure that's true, but even if it is, they're wrong to do so.
//It's not clear that it does imply that. You're implicitly assuming a certain ("neutral") conception of value. But more relativistic (agent-relative, time-relative, world-relative, etc.) conceptions are possible. Conservatism about value is naturally understood as a kind of partiality towards our current world-mates, and so which outcomes you should prefer depends on your current world-state (since that changes which individuals you should care especially about). That leaves open how we should think about the prospect of replacing future individuals (who are not yet there for us to be attached to). So replacing one person twice is not necessarily equivalent to replacing two existing people (once each).//
But then every argument for consequentialism becomes an argument against that view! It seems clear that acting morally rightly doesn't make things worse. And if replacement does diminish future value, that just is Long-Term conservatism. I don't think conservatism can be merely deontic rather than axiological -- seemingly, if a plague killed everyone and replaced them with slightly happier people, that would be bad.
As you know, I think the claim that individual welfare is capped is indefensible. But even if it is capped, as long as the increases are sufficiently slight -- small enough to fall below the decline in marginal value -- the objection to Long-Term conservatism would still succeed.
Do you think that, for your first counterexample, believers in replacement would still be hesitant if you "packaged" the improvement in a way that no longer seemed imperceptible? Like if, instead of the replacement's life being a tiny bit better each year, it was a lot better for a few years and the same most other years?
I don't think your second counterexample is that counterintuitive if you really think about it and put it in more concrete terms. If there were a machine some mad scientist had built that created new people and then immediately, painlessly killed them, I think most people would want such a machine shut down. They'd probably be willing to undergo some small hardship, like stubbing their toe or getting a painful dust speck in their eye, in order to shut it down.
Therefore, it stands to reason that shutting down 10 quadrillion such machines would probably be worth an even greater hardship, like the torture endured by the people in your example.
Yes, I think people would want it shut down, but they wouldn't think the machine was arbitrarily bad no matter how many people were replaced. Pro-lifers, for example, don't generally seem to think the deaths of lots of fetuses are very tragic.
Not sure what they'd think about that first counterexample, but they need not be absolutists, so I think they don't have to bite the bullet for the scenario.
Your counterexample to long term conservatism doesn't hold water for me. It seems to me like a world where people live forever is much more of a utopia than one where people die, even if the people who die are happier moment to moment.
Remember that when assessing how positive a life is, you add up how positive it is over the person's entire life. So when comparing the "utopia" of very happy people to the world of moderately happy immortals, you wouldn't compare 100 years of the immortals' lives to the lives of the very happy people. You'd compare the immortals' entire lives to the very happy people's entire lives. Since the immortals live a very long time in your example, they are probably far, far happier than the people in the "utopia." Even if the very happy people's lives are 10, 100, even 1000 times better moment to moment, the immortals have lived so long that their cumulative lives are far better.
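Here is a rough version of that comparison with numbers -- the 1000x figure is from the sentence above, while the 100-year lifespan and the welfare units are made up for illustration:

```python
# Rough comparison of lifetime totals, with made-up numbers. The mortal lives
# 100 years at 1000x the immortal's per-year welfare (the "even 1000 times
# better" figure above); the immortal just keeps accumulating.
mortal_per_year, mortal_years = 1000.0, 100
immortal_per_year = 1.0

mortal_total = mortal_per_year * mortal_years          # 100,000 welfare units
years_to_overtake = mortal_total / immortal_per_year   # 100,000 years

print(f"Mortal lifetime total: {mortal_total:,.0f}")
print(f"Immortal overtakes after {years_to_overtake:,.0f} years")
# Past that point the immortal's cumulative total is strictly greater, however
# large the mortal's per-year advantage, provided that advantage is finite.
```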
I think in general your framing of both long-term and short-term conservatism suffers from relying too much on time, rather than taking a timeless view of people's lives. I think a better framing would be to have two types of conservatism: one in which there is a fixed penalty in value whenever someone is replaced, and one where the penalty is a ratio of however much value the replaced person would have generated had they not been replaced. I suppose there could also be a hybrid view that starts with a fixed penalty which then increases in severity relative to how happy the replaced person's life was. I am not sure which of these best captures Cohen's intuition and has the least counterintuitive conclusions, but they all definitely seem better than standard utilitarianism.
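To make those options concrete, here is a rough sketch -- the function names, the flat penalty of 100, and the 0.5 rate are all invented for illustration, not anything Cohen or anyone else is committed to:

```python
# A sketch of the three penalty structures described above. All names and
# numbers are hypothetical. "forgone" is the value the replaced person would
# have generated had they not been replaced; "new" is the value the
# replacement generates instead. Each function returns the net change in
# value from replacing; replacement is justified only if the result is > 0.

def fixed_penalty_net(new, forgone, penalty=100.0):
    """Replacement incurs a flat penalty, however good the lost life was."""
    return (new - forgone) - penalty

def ratio_penalty_net(new, forgone, rate=0.5):
    """The penalty is a fraction of the value the replaced person forgoes."""
    return (new - forgone) - rate * forgone

def hybrid_penalty_net(new, forgone, penalty=100.0, rate=0.5):
    """A flat penalty that grows with how good the replaced person's life was."""
    return (new - forgone) - (penalty + rate * forgone)

# A merely marginal improvement (450 vs. 400) fails to justify replacement on
# all three views, unlike on standard utilitarianism, where any net
# improvement, however slight, suffices:
for f in (fixed_penalty_net, ratio_penalty_net, hybrid_penalty_net):
    print(f.__name__, f(new=450.0, forgone=400.0))
```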
Hmm, I really don't share that intuition. I agree that lives are better on average in the first world, but they don't seem better all things considered.
But suppose that we modify the case again so that we go through a thousand generations and then the thousandth generation lives forever, hundreds of times happier than the first one. That seems better than if the first one lived forever.
A few thoughts—
1) The lack of realistic examples makes this far less compelling, at least to me.
2) The demonstrated reliability of a valued thing is itself a source of value that a new, apparently more valuable object may not be able to demonstrate immediately, since it hasn't had the time to show it is just as reliable. This can apply to new gadgets or interpersonal relationships.
3) The discussion of value seems one-dimensional and not in keeping with human nature. It also isn't clear who is doing the valuing at times, and it isn't clear how we can actually compare the various scenarios you describe in anything other than fairly superficial ways.
1) I think we can have intuitions about unrealistic cases.
2) Conservatism about value involves holding all else equal.
3) This is about moral value, not what any particular person values.
1) Yes, but people's intuitions can vary, sometimes widely, are not always constant, and are often wrong. And that's true of realistic cases, to say nothing of unrealistic ones. Also, I'm fine thinking about intuitions, but I'm not sure your argument is as solid as it could be if it relies only on intuitions about unrealistic cases. Wouldn't using realistic cases also demonstrate the practical utility of your philosophy?
2) Not sure how we actually apply that but ok...
3) I'm not sure what "value" of any kind means independent of what someone or group of people values.
1) This is true. But when intuitions conflict, we should believe what is more obvious.
2) It's a philosophical position, not a practical one.
3) Do you think the following sentence is incoherent: "there might be things of value that no one happens to value"? I think it is perfectly coherent and, in fact, true. But if this is so, then value can't just be about things being valued by particular people. If you lack the concept, read ethical intuitionism.
1) I guess this boils down to my opinion that your arguments would be both stronger and more compelling as pieces of writing if the examples were more realistic, but that is just my two cents.
2) OK I guess, gotta think about this more.
3) I'm familiar with ethical intuitionism and reject it. To me, value only makes sense with a valuer. But this is a more fundamental issue that obviously won't be resolved here.
The words 'conservatism' and 'liberalism' seem too freighted with other meanings. Let's say that some thought-experiment superintelligence selects a human at random to set global policy, and this individual decides society should be reverted to the way it was in the 1920s, or the 1320s, or the Mesolithic, out of a belief that older ways of doing things and being human provided more overall happiness, though in ways that are sometimes repugnant to modern folk. This would appear to be a kind of radical choice against conserving existing value for the sake of greater future value, even though that future would look like the past. And yet the usual argument against 'conservatives' isn't that they want to keep everything exactly as it is today, but that they want to turn back time, to roll back civilization to when you could refuse to serve blacks or hire women. Is there -anyone- on earth saying "Yes, this 2023 version is perfect, we must conserve it," or is everyone supposing that replacing it with something else (whether it looks futuristic or retrograde) would cause more happiness?
I am not arguing against general conservatism, just a niche position in ethics called conservatism about value, adopted by the socialist Marxist philosopher G. A. Cohen, among others. I called my position liberalism about value as a joke.