I think this supersedes the collaborative chat I was attempting on you-know-what. Although I'd still like to do something on moral reasoning in general, at some point.
«But it would hold that if one is in the Maimer scenario, and not acting in self-defense would have no other spillover effects, what they really ought to do is not act in self-defense. Many find this to be quite counterintuitive.»
This kind of reasoning leaves you vulnerable to the threat, strongly implied by the scenario, of collaborating with hostile, divine/eldritch simulators.
Assume that you find yourself absolutely certain that your attacker won't harm anyone in the future and that this will have no spillover effects. You must come to the conclusion that you are at least partially under the memetic influence of an outside hostile entity, because:
A) such a belief does not pass anyone's metacognitive sanity checks
which implies
B) someone must have transmitted that "knowledge" to you via means unknown
Possession of that knowledge implies that the attacker has been perfectly predicted by an external entity to the very end of his life, and that the entity must thereby have gained considerable intel on the attacker's future physical and social environment in order to confirm that claim with certainty.
No entity can plausibly predict someone to such a degree without also controlling them. If the entity knew that much about the attacker and his future environment, it most likely instigated or allowed the attack, and should therefore be assumed to be extremely hostile. This makes the attacker an unwilling puppet under enemy control and a source of future information about your society.
Your attacker has negative moral worth and is a mere extension of the entity’s body. You owe the person he once was (if he was ever free from corruption) the mercy of death. You have an obligation to your community to deprive the enemy of a source of strategic intel. [Yeah, no negative spillover effects, my ass. You cannot trust claims you believe to be “certain” if they contradict every plausible underlying logical framework!]
If you cannot perceive eldritch horrors when one practically licks your face, are you even a philosopher? ;)
There is an unnoted shift from "It’s wrong to kill someone to harvest their organs" to "A perfect being would not kill someone in a way that enabled them to harvest their organs": the latter phrasing drops any suggestion of intentionality. Organ harvesting could be an impermissible reason for killing without being an impermissible outcome.
Imagine that in a city with budgetary issues, a very wealthy person is caught breaking the law and incurs a massive fine. It seems perfectly plausible to think all of the following: the city shouldn't just seize one person's money to fund its budget; the city should impose fines for wrongdoing; and the city can regard it as a fortunate byproduct of the latter that its budget issues are solved. This is like the organ/self-defense case.
If you're not already a consequentialist, I don't think there will be much tension between thinking that a certain outcome is fortunate (acquiring money/organs) and that whether you can bring it about depends on other factors.
“Maimer: someone comes at me trying to harm me significantly. The only way I can stop him is by harming him more significantly. He’s gone temporarily insane, so if I don’t harm him, he won’t harm anyone else in the future.”
Two responses:
1) If you’re a utilitarian and the attacker is a regular person, you should prioritize yourself, since you will generate more utility in the future (e.g., not eating meat).
2) In the real world, we’re in a state of information scarcity and don’t know whether the person has really gone temporarily insane or is just pretending.
You lose me at the step from 1 to 2 in your organ harvesting example. Why would anyone even consider that?
I also don't understand why your 50:50 human shield argument violates the Pareto principle. You don't seem to explain that at all.
And you say that your choice between 2, 1 and inaction violates some supposed principle of the irrelevance of things not chosen. But that contradicts the bit where you say "irrelevant to choosing between two things", since you're choosing between three things: you had only just changed the assumption from choosing between 2 and 1 to choosing between 2, 1 and inaction. And anyway, why are the things not chosen irrelevant? I must have thought I might choose them; I just decided not to.
Organ harvesting does improve the world. It saves more lives in aggregate. That's the entire reason it's supposed to be a counterexample to consequentialism--it's a case where an action has good consequences but seems wrong.
The shield one violates ex ante Pareto because killing them only if they're not a human shield is better in expectation.
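For anyone who, like the earlier commenter, is unsure what principle is being invoked: ex ante Pareto is standardly stated roughly as follows (my paraphrase of the textbook formulation, not a quote from the post), where $u_i$ is the utility of affected party $i$ and expectations are taken before the uncertainty (e.g. the 50:50 chance of being a shield) resolves:

$$\mathbb{E}[u_i(X)] \ge \mathbb{E}[u_i(Y)] \ \text{for every affected party } i, \quad \mathbb{E}[u_j(X)] > \mathbb{E}[u_j(Y)] \ \text{for some } j \ \Longrightarrow\ X \text{ should be chosen over } Y.$$

As I read the reply, the conditional policy (kill only if the person turns out not to be a shield) plays the role of $X$ here, which is why a view that rules it out is said to violate the principle.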
It violates the principle because, on the view in question, you should choose the first case over inaction and inaction over the second, but the second over the first. So the second beats inaction only if the first is present.
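Schematically (my notation, not the post's): writing $A$ for the first case, $B$ for the second, and $N$ for inaction, the view under discussion is being said to imply

$$A \succ N, \qquad N \succ B, \qquad B \succ A,$$

so whether $B$ is choiceworthy relative to $N$ depends on whether $A$ happens to be on the menu, which is exactly the kind of menu-dependence that the "irrelevance of things not chosen" principle rules out.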
Thanks for replying. I looked up Pareto and found that there is a thing called Pareto efficiency that I'd never heard of. I was thinking of the Pareto principle, the 80/20 split thing, and I'd thought that was why you'd had to come up with two 50:50 options. But why make the second one killing someone painlessly? You could make it rescuing the human shield, like in a Bruce Willis movie, and then we'd obviously choose that. But this isn't how real people in real-world situations react when they are actually under attack, so I don't think such approaches are very helpful; they tend to go against human nature.
Because that's how the scenario was designed.
But if I am forced to choose between self-defense and utilitarianism, why should I choose utilitarianism? Seems like your morality has very little moral force.
Because it's true. Morality describes, by definition, what you should do.
Morality, at best, describes what it would be good, from an impartial observer’s perspective, for you to do. If the only justification to care about this impartial observer is that an impartial observer would, that’s hardly persuasive. We don’t live in an array of impartial observers.