> "if creating this person would also increase the welfare of the third party by .0000001 units, then it would be worth doing."
This is a good point, and suggests a stronger version of the argument based on just two premises (your 2nd and 3rd principles).
> "Objections?"
I think the commonsense view here is to embrace time-inconsistency due to value changes. We should care more about existing people, and so reject (at time t1) the prospect of harming an existing person merely to bring a better new life into existence. If it was predictable that creating the new life would result in our doing the subsequent transfer, then we shouldn't create the new life. But if we don't anticipate the transfer, we could rationally follow a sequence of steps that yields this result.
We should create the beneficial life (when it has no apparent downside). And then, having done so, our values must change (at t2) to give full weight to this newly-existing person. Given our new values, we should then endorse the transfer. But it doesn't follow that the combination act of harming + creating is one we should regard positively, from the perspective of our t1-values. So the argument is invalid.
I think generally when people think about the Pareto principle, they mean it only to apply to people who do exist or definitely will. But I agree that the more extreme version can jettison p1 (just by entailing it).
As for the objection, if you should do A and do B after having done A, then it seems you should do A and B. So it seems one of the acts would have to be not worth doing for the argument not to work.
It could be that one thinks that after the first act has been taken the second stops being worth taking or vice versa. However, this is ruled out by the following two principles.
1 Normative Decision-Tree Separability: the moral status of the options at a choice node does not depend on other parts of the decision tree than those that can be reached from that node. .
2 Expansion Improvability: The fact that a choice enables future choices that are worth taking does not count against it.
Both seem plausible.
Unrelated, I just found this blog by a philosophy professor--might be up your alley. https://www.umsu.de/blog/
> "if you should do A and do B after having done A, then it seems you should do A and B."
Are you talking about objective (fact-relative) or subjective (evidence-relative) shoulds? Objectively, you should do A at t1 if the actual complete outcome of doing so would not harm the t1-people, but if it would be followed by B (welfare transfer harming t1-ppl to benefit the new person), then you objectively should not do A.
Subjectively, maybe it depends how likely one is to do B afterwards. By backwards induction, it's probably too likely. After all, B will (at t2) be worth doing for the sake of the t2-people. So you'd be relying on your future failure to do as you then subjectively ought. Supposing you're rational and informed about your future options, it will also be subjectively wrong to do A.
Next imagine that you've been misinformed. But if you subjectively should_t1 do A *only because you don't realize it will be followed by B*, then even if you should_t2 do B, it doesn't follow that you should_t1 do A&B. The current case makes this clear.
Given value relativity (e.g. based on shifting populations), you need to disambiguate expansion improvability: is the future choice one that *is* (by your current values) worth taking, or one that merely *will be* (by your future values) worth taking? If the latter, it's pretty clear how adding choices could be undesirable. Just think of Odysseus and the Sirens. (Suppose the sirens do not make him irrational in any way. They simply change his preferences -- for as long as he hears their song -- so that he truly wants, more than anything else, to dive to his death.)
imo, appealing to formal axioms in these sorts of discussions is inevitably question-begging. This is because you can't really tell whether they're plausible or not until you see how they apply to these edge cases. And if one finds (e.g.) population-relative value plausible, then it's just going to seem completely costless to reject any formal axiom that turns out to be incompatible with this. So it's better to just focus on working out the right thing to say about the edge-case in question.
In all these cases, I'm talking about what you objectively should do. Expansion relativity is about whether, based on your current values, you would take some future act. Thus, in the Siren case, your current values would sanction not untying from the mast.
Expansion relativity is very plausibly a requirement of rationality--it amounts to nothing more than the fact that an action opens up future worthwhile choices doesn't count against it, which is very obvious to me. I disagree that formal axioms are question-begging--take transitivity, for instance. Presumably you don't think that appealing to transitivity in a proof is question begging?
I think that these principles are, while maybe not quite as obvious as transitivity, pretty close. And I generally think that these principles are more reliable than case specific intuitions for a few reasons.
First is something that you've talked about; our intuitions about the cases are often colored by various biases and such and don't capture the whole picture.
Second, these broad principles apply to an infinite range of cases; if, for example, expansion relativity were false, it would be a bizarre coincidence if it basically only had a counterexample in this one case. I've elaborated on this in more detail here https://benthams.substack.com/p/a-bayesian-analysis-of-when-utilitarianism
Third, is a point made by Huemer, that the formal ethical intuitions are the best candidates for something that has been grasped by reason alone, not distorted by various intuitions. https://fakenous.substack.com/p/revisionary-intuitionism
One thing I'd be curious about: do you reject Gustafsson-style money pumps because they rely on these types of formal intuitions?
I don't know that I outright "reject" money-pump arguments, but I do think they're pretty dialectically ineffective. I'd expect opponents to think that 'resolute choice' is the way to go in such cases, and hence reject separability from the start. (It makes a real difference whether, at a past choice point, you promised yourself that you'd get to *this* point and then stop, no matter that it *now* seems preferable to take at least one more step...)
I think the counterexamples here are very systematic: they'll arise whenever you have principled grounds for changing your evaluative perspective, e.g. due to changes in population.
> "Presumably you don't think that appealing to transitivity in a proof is question begging?"
Question-begging is contextual. You can certainly use formal principles in presenting an argument (or "proof") in the first place. But if someone has an intuitive counterexample, *responding* with formal principles is likely question-begging.
You say that it's "very obvious" that "the fact that an action opens up future worthwhile choices doesn't count against it." But this is only obvious when one's evaluative perspective is unchanging (as is ordinarily the case). In the case from the OP, by contrast, it makes perfect sense to think that adding option B could count against taking option A. (Do you really not understand why this makes sense?) That is, the case at hand is just an obvious counterexample to your general principle. I honestly have no idea why you would find it plausible *outside of contexts in which the relevant evaluative standards were being held fixed throughout*. It just makes it seem like you aren't willing to seriously grapple with how to deal with changing evaluative standards.
Now ask yourself what you should think of the prospect of A (bringing the new life into existence), both (i) conditional on B (subsequent welfare transfer), and (ii) conditional on not-B.
There's a principled view here that makes very clear why the desirability of choice A might depend on *not* subsequently doing B, even though B would *become* desirable after choosing A.
I'll check it out. I'm actually writing a paper right now about Frick's claim that our obligations to improve future welfare are entirely conditional on the choice to have already caused someone to exist.
One other thing--I think you also need transitivity for the argument, to get that the act at the end of the sequence is better than the one at the beginning of the sequence.
Gustafsson, in his book 'money pump arguments' has a pretty detailed discussion of resolute choice and argues, I think persuasively, that it is irrational. They could reject separability, but anyone could always reject a premise in any argument. If one appeals to very plausible premises, the mere fact that their opponent could reject them would not make the principle implausible. There are intuitive counterexamples to transitivity--the spectrum cases. Nonetheless, I think that we're justified in accepting transitivity because the principle is more plausible than the counterexamples.
When you give the hope argument to show that one should kill one to stop two killings, they could always reject that one should hope for better things. Nonetheless, that principle is very plausible, so I think one should not reject it.
As for the next point that you make, there are two replies. First, the principle does not state that the mere fact that actions open up future options that will be subjectively chosen can't count against it--that principle would be absurd; if someone knew that, if they went to a party, there was a high probability of them doing cocaine after desiring to do so, this would count against going to the party. The principle says that if an action opens up another action that is objectively worth taking, that cannot count against it. This is way more plausible and has no intuitively compelling counterexample that I can think of.
Imagine the following dialogue.
A: I think I'll go to the store. This is worthwhile.
B: Okay, well, when you're at the store, I'll give you this objectively worthwhile offer!
A: Okay, great, well then I won't go to the sotre.
B: Why?
A: Well, I would not want to take the sequence of actions including going to the store.
B: You can turn down the offer.
A: No no. It's a good offer. That's why I would take it. Thus, I won't go to the store.
This seems manifestly irrational.
Also, and this isn't quite as important, but I don't think that the counterexamples you give are genuine counterexamples because their values don't change. At T1, their values are such that if confronted with the choice at T2, they'd take it. The beings they have obligations to change across the two cases, but the necessary and sufficient conditions plausibly do not change.
I think if I adopted your view, maybe I'd just reject decision tree separability--that seems less obvious than the other principle. Nonetheless, I think that decision tree separability is super plausible.
The statement "you should inflict N units of suffering on an existing person to create a future person with more than N units of utility on net" on its own isn't really radical. It's radical if we assume one of the following conditions:
* One is *obligated* to create the future person in this way.
* The process by which the existing person is harmed is in a rights-violating way (e.g., used as a mere means).
In order to deduce a radical conclusion, you need to reformulate your premises in a way that explicitly references obligations or you need to specify that the harm happens in a way that is plausibly rights-violating. But then the premises are not very plausible axioms. E.g. it is not a very plausible axiom that we are obligated to create persons with positive utility (even extremely high positive utility). E.g. while it may be plausible to redirect threats to third-parties in order to minimize overall harm, it is not plausible to harm third-parties as a means to minimize overall harm (or in whatever way counts as a rights violation according to the deontologist); after all, deontologists are generally okay with redirection in general (e.g., 70% of deontologists support switching the lever from 5 to 1 in the trolley problem, according to PhilPapers surveys).
If the prescription is reformulated to explicitly exclude the two conditions that I mentioned above, it is not radical at all: "you are permitted do an action which has the side-effect of creating a future person with >N units of net utility, but which also has the side-effect of inflicting <N units of harm on an existing person." In fact, most would probably accept that you are permitted to do this action even if the new person has less net utility than the harm caused to the existing person.
I make a claim about what you should do, not about obligations. I'm not a maximizing consequentialist, I'm a scalar one. In this case, they're not used as a mere means--and something that would otherwise be impermissibly rights violating like inflicting suffering generally isn't if it is for the greater good.
> I make a claim about what you should do, not about obligations. I'm not a maximizing consequentialist, I'm a scalar one
Thats fine. I'm speaking from the perspective of what's plausible to non-consequentialists.
> In this case, they're not used as a mere means--and something that would otherwise be impermissibly rights violating like inflicting suffering generally isn't if it is for the greater good
You think most deontologists believe that think inflicting suffering is prima-facie rights violating? Even if done for the greater good? How do you explain why most deontologists support switching in the trolley case, despite the suffering inflicted?
> Why do we have to say anything about obligations?
I don't know what you mean by "have", but your post purports to discuss whats plausible/radical even from the perspective of deontologists. And the "radical" conclusion that you mention is only radical if construed in terms of obligations or rights violations.
> Yes, if you just pressed a button which caused someone severe pain, that would be a rights violation.
Without specifying further conditions, I'm not sure why you believe that.
How do you explain why most deontologists support switching in the trolley case, despite the suffering inflicted?
How do you explain why even you think deontologists find it plausible to redirect threats despite the harm inflicted?
"I don't know what you mean by "have", but your post purports to discuss whats plausible/radical even from the perspective of deontologists. And the "radical" conclusion that you mention is only radical if construed in terms of obligations or rights violations."
No it's not; most people don't think you should inflict 49.9 units of suffering on an existing person to create a person with 50 utilts.
"How do you explain why even you think deontologists find it plausible to redirect threats despite the harm inflicted?"
> No it's not; most people don't think you should inflict 49.9 units of suffering on an existing person to create a person with 50 utilts.
You phrased this as "to create a person" as if the suffering is imposed as a means to create the additional person. If you instead phrase this to make it clear that the suffering is a side-effect, and that you aren't talking about obligations, then I don't know where you get the idea that most people would disagree.
> "How do you explain why even you think deontologists find it plausible to redirect threats despite the harm inflicted?"
What's the account that explains how inflicting suffering is impermissibly rights violating, yet the following aren't considered rights violating:
* Redirecting harm.
* Causing harm as an undesirable side-effect.
* Harming people in ways that aren't physically aggressive (e.g., insulting people, outcompeting someone for valuable resources, etc.).
Which deontologists have articulated the view that merely inflicting harm/suffering is a prima-facie rights violation?
I reject 3. It's good to give someone 50 + -49.9 units of utility but plausibly bad if the harm is redirected to someone else. It's the difference between effectively adding no new negative utility to the world and adding -N utility (among other actions), since agents are the fundamental locations of value rather than collections of agents.
Yes, you should flip the switch because 5>1, and there's no relevant difference between the 5 and the 1 other than number. In this post's situation there is a relevant difference between the initial -49.9 and the new person's -49.8: the fact that 50-49.9>0. You can't divorce the threat from the context of also causing a greater amount of positive utility for the threatened person. What's true is the much weaker statement that there are many (N,e) pairs such that one should inflict N units of suffering on an existing person to create a future person with N+e units of utility. But when e is very small, other considerations come into play, such as not causing harm to someone who doesn't get an associated benefit. In the trolley problem the act of omission of not flipping the switch is still a harm you're inflicting on the 5.
I'm not quite sure what you're saying the difference is. Is it just that the new person has more utility? We could stipulate that the existing person has more utility.
It's okay to harm someone if at the same time you're benefiting them a greater amount. It's not okay to harm someone if at the same time you're benefiting someone else a greater amount, unless the difference is pretty big. The trolley problem is a choice between harming one person or five people, so is an example of the difference being pretty big.
You should read the section in which I defend the Pareto principle and then explain that even on most views that deny it, this action will be permitted.
> "if creating this person would also increase the welfare of the third party by .0000001 units, then it would be worth doing."
This is a good point, and suggests a stronger version of the argument based on just two premises (your 2nd and 3rd principles).
> "Objections?"
I think the commonsense view here is to embrace time-inconsistency due to value changes. We should care more about existing people, and so reject (at time t1) the prospect of harming an existing person merely to bring a better new life into existence. If it were predictable that creating the new life would result in our doing the subsequent transfer, then we shouldn't create the new life. But if we don't anticipate the transfer, we could rationally follow a sequence of steps that yields this result.
We should create the beneficial life (when it has no apparent downside). And then, having done so, our values must change (at t2) to give full weight to this newly-existing person. Given our new values, we should then endorse the transfer. But it doesn't follow that the combination act of harming + creating is one we should regard positively, from the perspective of our t1-values. So the argument is invalid.
I think that when people invoke the Pareto principle, they generally mean it to apply only to people who do exist or definitely will. But I agree that the more extreme version can jettison p1 (just by entailing it).
As for the objection: if you should do A, and should do B after having done A, then it seems you should do A and B. So for the argument to fail, one of the acts would have to be not worth doing.
One might think that, once the first act has been taken, the second stops being worth taking (or vice versa). However, this is ruled out by the following two principles.
1. Normative Decision-Tree Separability: the moral status of the options at a choice node does not depend on any parts of the decision tree other than those that can be reached from that node.
2. Expansion Improvability: the fact that a choice enables future choices that are worth taking does not count against it.
Both seem plausible.
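For concreteness, here is one way these two principles might be formalized. This is only a sketch; the notation (decision trees T and T', choice nodes n, the reachable subtree T↾n, and "worth taking" as the status predicate) is an editorial gloss, not anything stated in the thread.

```latex
% 1. Normative Decision-Tree Separability:
%    if two trees agree on everything reachable from node n, they assign the same
%    status to the options available at n.
\[
T\!\upharpoonright_n \;=\; T'\!\upharpoonright_n
\;\Longrightarrow\;
\forall o \in \mathrm{Opt}(n)\colon
\bigl(\text{$o$ is worth taking in } T \iff \text{$o$ is worth taking in } T'\bigr)
\]

% 2. Expansion Improvability (one way to cash out "does not count against"):
%    adding a worthwhile downstream option never makes an earlier choice not worth taking.
\[
\text{If } T^{+} \text{ extends } T \text{ only by adding, after option } A,
\text{ a further option that is itself worth taking, then: }
A \text{ worth taking in } T \;\Longrightarrow\; A \text{ worth taking in } T^{+}.
\]
```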
Unrelated, I just found this blog by a philosophy professor--might be up your alley. https://www.umsu.de/blog/
Thanks for the pointer to Wo's blog!
> "if you should do A and do B after having done A, then it seems you should do A and B."
Are you talking about objective (fact-relative) or subjective (evidence-relative) shoulds? Objectively, you should do A at t1 if the actual complete outcome of doing so would not harm the t1-people, but if it would be followed by B (a welfare transfer harming t1-people to benefit the new person), then you objectively should not do A.
Subjectively, maybe it depends how likely one is to do B afterwards. By backwards induction, it's probably too likely. After all, B will (at t2) be worth doing for the sake of the t2-people. So you'd be relying on your future failure to do as you then subjectively ought. Supposing you're rational and informed about your future options, it will also be subjectively wrong to do A.
Next imagine that you've been misinformed. But if you subjectively should_t1 do A *only because you don't realize it will be followed by B*, then even if you should_t2 do B, it doesn't follow that you should_t1 do A&B. The current case makes this clear.
Given value relativity (e.g. based on shifting populations), you need to disambiguate expansion improvability: is the future choice one that *is* (by your current values) worth taking, or one that merely *will be* (by your future values) worth taking? If the latter, it's pretty clear how adding choices could be undesirable. Just think of Odysseus and the Sirens. (Suppose the sirens do not make him irrational in any way. They simply change his preferences -- for as long as he hears their song -- so that he truly wants, more than anything else, to dive to his death.)
imo, appealing to formal axioms in these sorts of discussions is inevitably question-begging. This is because you can't really tell whether they're plausible or not until you see how they apply to these edge cases. And if one finds (e.g.) population-relative value plausible, then it's just going to seem completely costless to reject any formal axiom that turns out to be incompatible with this. So it's better to just focus on working out the right thing to say about the edge-case in question.
In all these cases, I'm talking about what you objectively should do. Expansion improvability is about whether, based on your current values, you would take some future act. Thus, in the Siren case, your current values would sanction not untying yourself from the mast.
Expansion improvability is very plausibly a requirement of rationality--it amounts to nothing more than the claim that the fact that an action opens up future worthwhile choices doesn't count against it, which is very obvious to me. I disagree that formal axioms are question-begging--take transitivity, for instance. Presumably you don't think that appealing to transitivity in a proof is question-begging?
I think that these principles, while maybe not quite as obvious as transitivity, are pretty close. And I generally think that these principles are more reliable than case-specific intuitions, for a few reasons.
First is something that you've talked about: our intuitions about particular cases are often colored by various biases and don't capture the whole picture.
Second, these broad principles apply to an infinite range of cases; if, for example, expansion improvability were false, it would be a bizarre coincidence if its only counterexample were basically this one case. I've elaborated on this in more detail here: https://benthams.substack.com/p/a-bayesian-analysis-of-when-utilitarianism
Third is a point made by Huemer: formal ethical intuitions are the best candidates for something grasped by reason alone, rather than distorted by bias. https://fakenous.substack.com/p/revisionary-intuitionism
One thing I'd be curious about: do you reject Gustafsson-style money pumps because they rely on these types of formal intuitions?
I don't know that I outright "reject" money-pump arguments, but I do think they're pretty dialectically ineffective. I'd expect opponents to think that 'resolute choice' is the way to go in such cases, and hence reject separability from the start. (It makes a real difference whether, at a past choice point, you promised yourself that you'd get to *this* point and then stop, no matter that it *now* seems preferable to take at least one more step...)
I think the counterexamples here are very systematic: they'll arise whenever you have principled grounds for changing your evaluative perspective, e.g. due to changes in population.
> "Presumably you don't think that appealing to transitivity in a proof is question begging?"
Question-begging is contextual. You can certainly use formal principles in presenting an argument (or "proof") in the first place. But if someone has an intuitive counterexample, *responding* with formal principles is likely question-begging.
You say that it's "very obvious" that "the fact that an action opens up future worthwhile choices doesn't count against it." But this is only obvious when one's evaluative perspective is unchanging (as is ordinarily the case). In the case from the OP, by contrast, it makes perfect sense to think that adding option B could count against taking option A. (Do you really not understand why this makes sense?) That is, the case at hand is just an obvious counterexample to your general principle. I honestly have no idea why you would find it plausible *outside of contexts in which the relevant evaluative standards were being held fixed throughout*. It just makes it seem like you aren't willing to seriously grapple with how to deal with changing evaluative standards.
Try to put yourself into the mindset of someone who finds population-relative evaluation plausible. E.g., imagine that you accept the hybrid view I outline here: https://rychappell.substack.com/p/killing-vs-failing-to-create
Now ask yourself what you should think of the prospect of A (bringing the new life into existence), both (i) conditional on B (subsequent welfare transfer), and (ii) conditional on not-B.
There's a principled view here that makes very clear why the desirability of choice A might depend on *not* subsequently doing B, even though B would *become* desirable after choosing A.
See also my old defense of option-dependent preferences: https://www.philosophyetc.net/2011/03/option-dependent-preferences.html
I'll check it out. I'm actually writing a paper right now about Frick's claim that our obligations to improve future welfare are entirely conditional on the choice to have already caused someone to exist.
One other thing--I think you also need transitivity for the argument, to get that the act at the end of the sequence is better than the one at the beginning of the sequence.
Gustafsson, in his book *Money-Pump Arguments*, has a pretty detailed discussion of resolute choice and argues, I think persuasively, that it is irrational. They could reject separability, but anyone could always reject a premise in any argument. If one appeals to very plausible premises, the mere fact that an opponent could reject them does not make them implausible. There are intuitive counterexamples to transitivity--the spectrum cases. Nonetheless, I think that we're justified in accepting transitivity because the principle is more plausible than the counterexamples.
When you give the hope argument to show that one should kill one to stop two killings, an opponent could always reject the premise that one should hope for better things. Nonetheless, that principle is very plausible, so I think one should not reject it.
As for the next point you make, there are two replies. First, the principle does not say that the mere fact that an action opens up future options that will be subjectively chosen can't count against it--that principle would be absurd; if someone knew that, were they to go to a party, there was a high probability of their doing cocaine after desiring to do so, that would count against going to the party. The principle says that if an action opens up another action that is objectively worth taking, that cannot count against it. This is far more plausible, and I can't think of an intuitively compelling counterexample to it.
Imagine the following dialogue.
A: I think I'll go to the store. This is worthwhile.
B: Okay, well, when you're at the store, I'll give you this objectively worthwhile offer!
A: Okay, great, well then I won't go to the store.
B: Why?
A: Well, I would not want to take the sequence of actions including going to the store.
B: You can turn down the offer.
A: No no. It's a good offer. That's why I would take it. Thus, I won't go to the store.
This seems manifestly irrational.
Also--and this isn't quite as important--I don't think that the counterexamples you give are genuine counterexamples, because their values don't change. At t1, their values are such that, if confronted with the choice at t2, they'd take it. The beings to whom they have obligations change across the two cases, but the necessary and sufficient conditions plausibly do not change.
I think if I adopted your view, maybe I'd just reject decision-tree separability--that seems less obvious than the other principle. Nonetheless, I think that decision-tree separability is super plausible.
The statement "you should inflict N units of suffering on an existing person to create a future person with more than N units of utility on net" on its own isn't really radical. It's radical if we assume one of the following conditions:
* One is *obligated* to create the future person in this way.
* The existing person is harmed in a rights-violating way (e.g., used as a mere means).
In order to deduce a radical conclusion, you need to reformulate your premises in a way that explicitly references obligations, or you need to specify that the harm happens in a way that is plausibly rights-violating. But then the premises are not very plausible axioms. E.g., it is not a very plausible axiom that we are obligated to create persons with positive utility (even extremely high positive utility). And while it may be plausible to redirect threats onto third parties in order to minimize overall harm, it is not plausible to harm third parties as a means to minimizing overall harm (or in whatever other way counts as a rights violation according to the deontologist); after all, deontologists are generally okay with redirection (e.g., 70% of deontologists support switching the lever from 5 to 1 in the trolley problem, according to PhilPapers surveys).
If the prescription is reformulated to explicitly exclude the two conditions that I mentioned above, it is not radical at all: "you are permitted to do an action which has the side-effect of creating a future person with >N units of net utility, but which also has the side-effect of inflicting <N units of harm on an existing person." In fact, most would probably accept that you are permitted to do this action even if the new person has less net utility than the harm caused to the existing person.
I make a claim about what you should do, not about obligations. I'm not a maximizing consequentialist, I'm a scalar one. In this case, they're not used as a mere means--and something that would otherwise be impermissibly rights violating like inflicting suffering generally isn't if it is for the greater good.
> I make a claim about what you should do, not about obligations. I'm not a maximizing consequentialist, I'm a scalar one
That's fine. I'm speaking from the perspective of what's plausible to non-consequentialists.
> In this case, they're not used as a mere means--and something that would otherwise be impermissibly rights violating like inflicting suffering generally isn't if it is for the greater good
You think most deontologists believe that inflicting suffering is prima facie rights-violating? Even if done for the greater good? How do you explain why most deontologists support switching in the trolley case, despite the suffering inflicted?
"Thats fine. I'm speaking from the perspective of what's plausible to non-consequentialists."
Why do we have to say anything about obligations?
"You think most deontologists believe that think inflicting suffering is prima-facie rights violating?"
Yes, if you just pressed a button which caused someone severe pain, that would be a rights violation.
> Why do we have to say anything about obligations?
I don't know what you mean by "have", but your post purports to discuss what's plausible/radical even from the perspective of deontologists. And the "radical" conclusion that you mention is only radical if construed in terms of obligations or rights violations.
> Yes, if you just pressed a button which caused someone severe pain, that would be a rights violation.
Without specifying further conditions, I'm not sure why you believe that.
How do you explain why most deontologists support switching in the trolley case, despite the suffering inflicted?
How do you explain why even you think deontologists find it plausible to redirect threats despite the harm inflicted?
"I don't know what you mean by "have", but your post purports to discuss whats plausible/radical even from the perspective of deontologists. And the "radical" conclusion that you mention is only radical if construed in terms of obligations or rights violations."
No it's not; most people don't think you should inflict 49.9 units of suffering on an existing person to create a person with 50 utils.
"How do you explain why even you think deontologists find it plausible to redirect threats despite the harm inflicted?"
There are different accounts.
> No it's not; most people don't think you should inflict 49.9 units of suffering on an existing person to create a person with 50 utils.
You phrased this as "to create a person" as if the suffering is imposed as a means to create the additional person. If you instead phrase this to make it clear that the suffering is a side-effect, and that you aren't talking about obligations, then I don't know where you get the idea that most people would disagree.
> "How do you explain why even you think deontologists find it plausible to redirect threats despite the harm inflicted?"
What's the account that explains how inflicting suffering is impermissibly rights violating, yet the following aren't considered rights violating:
* Redirecting harm.
* Causing harm as an undesirable side-effect.
* Harming people in ways that aren't physically aggressive (e.g., insulting people, outcompeting someone for valuable resources, etc.).
Which deontologists have articulated the view that merely inflicting harm/suffering is a prima facie rights violation?
I reject 3. It's good to give someone +50 and -49.9 units of utility, but plausibly bad if the harm is redirected to someone else. It's the difference between effectively adding no new negative utility to the world and adding -N utility (among other actions), since agents are the fundamental locations of value rather than collections of agents.
Interesting! Do you think that one should flip the switch in the trolley problem or, in the driver case, turn the wheel? If so, why?
Yes, you should flip the switch because 5 > 1, and there's no relevant difference between the 5 and the 1 other than number. In this post's situation there is a relevant difference between the initial -49.9 and the new person's -49.8: the fact that 50 - 49.9 > 0. You can't divorce the threat from the context of also causing a greater amount of positive utility for the threatened person. What's true is the much weaker statement that there are many (N, e) pairs such that one should inflict N units of suffering on an existing person to create a future person with N + e units of utility. But when e is very small, other considerations come into play, such as not causing harm to someone who doesn't get an associated benefit. In the trolley problem, the omission of not flipping the switch is still a harm you're inflicting on the 5.
I'm not quite sure what you're saying the difference is. Is it just that the new person has more utility? We could stipulate that the existing person has more utility.
It's okay to harm someone if at the same time you're benefiting them a greater amount. It's not okay to harm someone if at the same time you're benefiting someone else a greater amount, unless the difference is pretty big. The trolley problem is a choice between harming one person or five people, so is an example of the difference being pretty big.
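For what it's worth, the arithmetic behind this same-person vs. other-person distinction can be spelled out as follows. This is a toy sketch reusing the thread's 50/49.9 figures; the symbols N, ε, X, and Y are labels introduced here, not taken from the comments.

```latex
% Same-person case: the harm N and the larger benefit N + epsilon both land on person X,
% so X's net change is positive and no one is left worse off.
\[
\Delta u_X \;=\; (N + \varepsilon) - N \;=\; \varepsilon \;>\; 0,
\qquad \text{e.g. } 50 - 49.9 = 0.1 > 0 .
\]

% Other-person case: X bears the harm while a different person Y gets the benefit.
% The aggregate is still +epsilon, but X is left with an uncompensated loss, which on
% this view is outweighed only when epsilon is large (as with 5 vs. 1 in the trolley case).
\[
\Delta u_X = -N, \qquad \Delta u_Y = +(N + \varepsilon), \qquad
\Delta u_X + \Delta u_Y = \varepsilon .
\]
```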
You should read the section in which I defend the Pareto principle and then explain that even on most views that deny it, this action will be permitted.
The deflection action isn't a pareto improvement, the other one is.