Utilitarianism can be derived from the following principles:
1. One should do what perfectly moral third parties would prefer one do.
2. For any two possible events, perfectly moral people should prefer that the better one occur rather than the worse one.
3. If something is better for some and worse for none, it is better overall.
4. Distribution of utility across people is irrelevant as long as the total utility is fixed (so, for example, it's just as good for three people to each have 3 utility as for one person to have 9).
The proof will be left as an exercise for the reader.
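As a rough sketch of how principles 3 and 4 might be formalized (the notation $u_i(A)$ for person $i$'s utility in outcome $A$, and the preference relations $\succ$ and $\sim$, are introduced here purely for illustration; this is not the promised proof):

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

% Notation (introduced for this sketch, not taken from the principles themselves):
% A, B are outcomes, u_i(A) is person i's utility in outcome A,
% \succ means "strictly better", \sim means "equally good".

Principle 3 (better for some, worse for none):
\[
\bigl(\forall i:\ u_i(A) \ge u_i(B)\bigr) \wedge \bigl(\exists j:\ u_j(A) > u_j(B)\bigr)
\;\Longrightarrow\; A \succ B .
\]

Principle 4 (only the total matters):
\[
\sum_i u_i(A) = \sum_i u_i(B) \;\Longrightarrow\; A \sim B .
\]

Together with principles 1 and 2, these push toward ranking outcomes by
total utility $\sum_i u_i$.

\end{document}
```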
An intuition pump in favor of utilitarianism:
1. There exists a hypothetical mutation path between all currently existing sentient beings: with the right technology, you could turn any person into any other person, any dog into a person, or any person into a chicken, etc., without breaking life function or consciousness, by rearranging the relevant molecules bit by bit.
2. If you knew you would be turned into another being, e.g. a chicken or a copy of another person, you would still care about the utility of your future self after the transformation.
3. Whether you care about the utility of a hypothetical future version of yourself shouldn't depend on whether that future self actually arises via a transformation path from your current self, or via some other causal route (e.g. if an exact copy of yourself popped into existence in 5 minutes by random chance, swampman-style, while your current self died at that same moment, you should still care about this future self as much as you would care about your normal future self).
Basically we're all bad copies of each other and should therefore at least somewhat care about each other.
I find this intuition pump somewhat convincing. I don't think it leads to a full acceptance of utilitarianism, and I don't think that people who use utilitarian arguments are actually driven by caring about others in practice. But I think it's at least compelling enough that we shouldn't torture a galaxy supercluster of chickens just to gain one cookie, or something extreme like that. It also probably should motivate us to have a very small benevolence bias even toward our enemies (although that still trades off against instrumental deterrence and intrinsic revenge).
Principle 4 is strange.
Is it really equivalent for two people to each have 0 utility versus one person having +10 utility and another having -10 utility, that is, one very rich person and one slave?
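To spell out the arithmetic behind the worry: principle 4 looks only at the totals, and the totals coincide, $0 + 0 = 0 = 10 + (-10)$, so it declares the equal society and the rich-person-plus-slave society exactly as good as each other. Whether that verdict is acceptable is the whole question.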