An intuition pump in favor of utilitarianism:
1. There exists a hypothetical mutation path between all currently existing sentient beings, e.g. if you had the right technology, you could turn any person into any other person, or any dog into a person, or any person into a chicken etc., without breaking life function or consciousness, by rearranging the relevant molecules bit by bit.
2. If you knew you would be turned into another being, e.g. a chicken or a copy of another person, you would still care about the utility of your future self after the transformation.
3. Whether you care about the utility of a hypothetical future version of yourself shouldn't depend on whether that future self is actually caused by a transformation path from your current self, or caused by some other causality (e.g. if an exact identical copy of yourself popped into existence in 5 minutes by random chance while you actually die in 5 minutes, swampman-style, you should still care about this future self as much as you would care about your normal future self).
Basically we're all bad copies of each other and should therefore at least somewhat care about each other.
I find this intuition pump somewhat convincing. I don't think it leads to a full acceptance of utilitarianism, and I don't think that people who use utilitarian arguments are actually driven by caring about others in practice. But I think it's at least compelling enough that we shouldn't torture a galaxy supercluster of chickens just to gain one cookie, or something extreme like that. It also should probably motivate us to have a very small benevolence bias even toward our enemies (although that still trades off against instrumental deterrence and intrinsic revenge).
4 is strange.
Is it equivalent for two people to each have 0 utility versus one person having +10 utility and another having -10 utility, i.e. to have one very rich person and one slave?
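To make the question concrete, here is a minimal sketch (purely illustrative; representing a "world" as a flat list of per-person utilities is my own assumption) of why a ranking based only on total utility cannot tell the two cases apart:

```python
# Illustrative only: compare two utility distributions by their totals,
# the only criterion a pure total-utility ranking cares about.

equal_world = [0, 0]        # two people, each at 0 utility
unequal_world = [10, -10]   # one very rich person, one slave

def total_utility(distribution):
    """Total utility ignores how utility is spread across people."""
    return sum(distribution)

print(total_utility(equal_world))    # 0
print(total_utility(unequal_world))  # 0
# Both worlds total zero, so a totals-only ranking treats them as equivalent,
# which is exactly what the question above is probing.
```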
What do you mean by "perfectly moral" in 1 and 2?
You are "hiding" an important assumption that is not necessarily true:
0. There exists a perfectly moral third party.
3 sounds nice, but it's false. If something is better for my enemies and worse for none, ceteris paribus, then I don't want it to happen. Otherwise I lose revenge points.
4 is just silly. Imagine telling people, "Have you considered being indifferent between 9 utility points for yourself, or 3 utility points each for yourself and two strangers?" The obvious answer is, "Yes, we've considered it, and we reject the proposal."
Once more, utilitarianism is a system based on preference falsification. Its parsimony throws out the things we actually care about.
They might be basic, but all of them are pretty controversial and non-obvious. Maybe I'm confused about the intention of this post.
The first two principles just imply consequentialism. Thus, for all of the principles to imply Utilitarianism, the last two principles must be sufficient to show that Consequentialism implies Utilitarianism. In other words, principles 3 and 4 must imply the following: the act that makes the world best is the act that maximizes utility.
But principles 3 and 4 don't imply that, because they don't rule out the possibility of factors other than utility that make the world better. Presumably by "utility" you just mean a unit of well-being, since Utilitarianism is a welfarist theory. In that case, the latter two principles are not sufficient to show that non-welfarist considerations are irrelevant.
The last two principles are sufficient to show that, when comparing the goodness of two states of affairs, the state of affairs with greater total utility is better *with respect to utility*. I'm reading (4) as just stating that _total_ utility is the only relevant factor when evaluating the goodness of states of affairs, and (3) as further implying that _more_ total utility is better than less total utility.
If you added another principle which stipulated that non-utility concerns are irrelevant to the goodness of the world, then maybe we could derive Utilitarianism from these principles. We could do something like this:
1. One should do what perfectly moral third parties would prefer they do.
2. For any possible events, perfectly moral people should prefer the better one occur rather than the worse one.
3. If something is better for some and worse for none, it is better overall.
4. Distribution of utility across people is irrelevant as long as the utility is fixed (so, for example, it’s just as good for three people to have 3 utility as for one person to have 9).
5. The goodness of a given state of affairs is determined solely by the goodness of its utility distribution.
6. (from 1, 2) Agents should do what produces the best state of affairs (consequentialism).
7. (from 3, 4) For any two utility distributions A and B, A is better than B if and only if A has greater total utility.
8. (from 5, 7) For any two states of affairs A and B, A is better than B if and only if A has greater total utility.
9. (from 6, 8) Agents should do whatever produces the greatest total utility.
There are also a few tweaks to make here for it to be technically valid (e.g., you use "would" in the first principle but "should" in the second), but the general idea works. Of course, all of the principles (except maybe 3) are contestable.
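As a rough illustration of where the derivation lands (a sketch only; the function names and the flat "state = list of per-person utilities" representation are my own assumptions, not part of the principles themselves), steps 7 through 9 amount to ranking states of affairs by total utility and then choosing an act accordingly:

```python
# Sketch of the conclusion in steps 7-9: states of affairs are reduced to
# utility distributions, distributions are ranked by their totals, and an
# agent should pick whichever available act yields the highest total.

from typing import Dict, List


def total_utility(state: List[float]) -> float:
    # Principle 4: only the total matters, not how it is distributed.
    return sum(state)


def better_than(state_a: List[float], state_b: List[float]) -> bool:
    # Step 8: A is better than B iff A has greater total utility.
    return total_utility(state_a) > total_utility(state_b)


def choose_act(options: Dict[str, List[float]]) -> str:
    # Step 9: do whatever produces the greatest total utility.
    return max(options, key=lambda act: total_utility(options[act]))


# The distribution-irrelevance claim in principle 4: 3+3+3 vs 9.
print(better_than([3, 3, 3], [9]))                 # False: totals are equal
print(choose_act({"a": [3, 3, 3], "b": [5, 5]}))   # "b" (total 10 beats 9)
```

The sketch also makes the objections above easy to state: anything that matters but does not show up in the totals (distribution, desert, revenge) is invisible to this ranking.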