"1. An action is right if and only if it would be taken by one who was fully rational and impartial.
2. One behind the veil of ignorance or in the egg scenario who was given full rationality would be fully rational and impartial.
3. Therefore, an action is right if it would be taken by one who was in the veil of ignorance or egg scenario and was fully rational.
4. One who was fully rational in the egg scenario or veil of ignorance scenario would take only those actions prescribed by utilitarianism.
5. So an action is right only if it is prescribed by utilitarianism."
There's a lot of merit to this argument, but let's quickly break down some forms of counterattack you haven't mentioned:
(1.) A lot of people object to 1 on the basis that we should be partial towards our nearest and dearest, towards those who are entangled in our lives in various ways, towards those we owe gratitude, towards our constitute commitments etc. etc. Other people will argue that we should be biased towards acting on our desires because they are our desires.
(4) Is considered dicey, perhaps unfairly, because of Rawl's classical and somewhat bizarre claim that you should adopt Maxmin behind the veil of ignorance.
(5). Okay I have a somewhat technical objection here. In the argument that you establish that those in the egg/Veil of Ignorance condition would be rational and impartial, that they would be utilitarian, that they would be right and that rationality and impartiality are necessary and sufficient conditions for goodness. However you never actually rule out that there aren't alternative ways of being rational and impartial which aren't utilitarian but which nevertheless count as Rational and impartial- ways of being rational and impartial that apply when one is not behind the veil. Maybe outside the Egg & Veil of ignorance scenarios, there are other ways of acting that count as rational and impartial, and are therefore permissible by your 1. Is it plausible that there are such things? Hard to say.
5) It says if and only if, so any scenario where people are rational and impartial acting in some way entails one should act in that way. Of course, this premise then assumes that all rational and impartial ways of making decisions converge (otherwise the premise is false), but that seems eminently plausible. It's hard to think of a counterexample and if you buy the broad analysis that immorality stems from irrationality or partiality, then there couldn't, in principle, be any divergence because they all do the right things.
I've often thought that the best justification for Utilitarianism came from Kant's Golden Rule, as it logically extends into the objective moral arbitrator concept.
This is not a strange conclusion since the veil of ignorance was originally developed by William Vickrey and [John Harsanyi](https://www.journals.uchicago.edu/doi/10.1086/257416), the latter of which is a utilitarian. Although Rawls gave it the cool name.
All the veil does is transform your 'inequality attitude' into a 'risk attitude', so people with different attitudes still end up with different ideas about what we should do behind the veil (e.g. Harsanyi and Rawls). My friend and mathematician Jobst Heitzig fully understands the veil yet still thinks we shouldn't act as utilitarians behind the veil of ignorance, but should instead pursue 'fairness'. What he describes as 'fairness' is something like, if person A costs $100 to save and person B $99, we should roll a 199 sided die and save B if it lands on 1-100 and A if it lands on 101-199, instead of always choosing person B. I disagree, but he clearly understands the veil and just has different priorities.
I do think the veil is better at framing certain issues (like population ethics) than utilitarianism. Maximizing happiness leads to the repugnant conclusion, but behind the veil we can just choose not to do that. This leads to something like [meta-preferentialism](https://forum.effectivealtruism.org/posts/m5gowRugYQW8zQybh/meta-preference-utilitarianism) (very old post of mine, doesn't reflect my current thinking or writing style, but should broadly gesture at what I mean.)
Same is true for risk aversion, most people would not take a 1% chance of creating a world with trillions of maximally happy people and a 99% chance of destroying everything, over a guaranteed world with billions of maximally happy people. Utilitarianism requires the former, contractualism allows the latter.
Your friend's view violates ex ante pareto. Now, that's a thing you can do, but it isn't very appealing.
The veil doesn't help with population ethics at all--there are only two natural ways to extend it to population ethics, one implies average utilitarianism, the other total utilitarianism. But average utilitarianism is crazy, implying you should create miserable people in hell as long as they're less miserable than the existing people.
Oops I made a typo, I switched up A and B, B get's 1-100 and A get's 101-199, because B is cheaper. I'll edit it. There are many interpretations of ex ante pareto and I haven't read all of them. But I don’t think it follows from the following:
A prospect P ex-ante Pareto-dominates prospect Q if and only if all individuals in the population would rather have P than Q, before knowing how the randomness resolves (i.e., before the die lands).
In the A, B example, the prospects we consider all have the feature that they redistribute chances between individuals. It is not the case that the prospect P that helps person B and costs me $99 (utilitarianism) is preferred by all relevant individuals (A, B, and me) to the prospect Q that tossed the 199-sided die (‘fairness’ maximizing). Because individuals B might be seen as preferring P (utilitarianism), person A would presumably prefer the lottery. So P does not ex ante Pareto-dominate Q.
…
What do you mean by 'natural'? Philosophers have proposed countless ways to deal with population ethics, including utilitarians who propose various methods of discounting, or asymmetries between suffering and happiness etc. A lot of it seems like arbitrary differences in subjective preferences (not all of it, if your theory is e.g. self contradictory I would consider it 'objectively wrong', but a lot that remains seems subjective). Contractualism helps because it allows for compromise. So in a world with two people, one who wants there to be a billion people and one who wants there to be three billion people, they can do a value-handshake and pursue two billion people (assuming they care equally about this, it's continuous etc).
Same with risk-aversion when it comes to non-existence. We can say that 0% is the 'natural' or 'rational' rate, but why? Seems pretty arbitrary to me. In a world with two people, one with a rate of 1% and another with 3% we can just do a value-handshake and let contractualism allow us to give us 2% (assuming bla bla bla). I can't think of a reason why prescribing these people 0% is 'rational'.
EDIT: Or rather, I don’t think allowing 2% is irrational
Okay well when you're taking averages you either take them while including possible beings or when taking merely actual beings. The first has the same results as not taking the average and the second has crazy results like that one should create miserable people in hell.
It's not about all preferring--it's about everyone being made better off. Of course, that will depend on your theory of well-being.
It’s not (always) averages, people can make tradeoffs. If some people care more about population ethics and others care more about risk aversion, people can trade off a bit of their influence in population ethics for more influence in risk aversion and vise versa.
(To clarify my point about risk aversion, we can make an argument against risk aversion in some circumstances, e.g. Dutch books, but we can’t make a Dutch book argument against risk aversion when it comes to non-existence, because we can’t run it multiple times.)
I don’t think I’ve actually met anyone that is an averagist in the sense of creating people in hell. If people behind the veil say they wouldn’t like to be created in hell and also that we should pursue ‘naïve averagism’, that’s a contradiction and I consider that objectively wrong. I think irl ‘averagists’ use things like pain-pleasure asymmetry.
If we’re all rational and impartial behind the veil, then I think those preferences do point towards 'everyone being made better off'.
I think our disagreement might be because of different metaethics. I'm a moral antirealist and you mentioned you were a moral realist, would you be up for an adversarial collaboration, or a back and forth?
"1. An action is right if and only if it would be taken by one who was fully rational and impartial.
2. One behind the veil of ignorance or in the egg scenario who was given full rationality would be fully rational and impartial.
3. Therefore, an action is right if it would be taken by one who was in the veil of ignorance or egg scenario and was fully rational.
4. One who was fully rational in the egg scenario or veil of ignorance scenario would take only those actions prescribed by utilitarianism.
5. So an action is right only if it is prescribed by utilitarianism."
There's a lot of merit to this argument, but let's quickly break down some forms of counterattack you haven't mentioned:
(1) A lot of people object to 1 on the basis that we should be partial towards our nearest and dearest, towards those who are entangled in our lives in various ways, towards those to whom we owe gratitude, towards our constitutive commitments, and so on. Other people will argue that we should be biased towards acting on our desires because they are our desires.
(4) This is considered dicey, perhaps unfairly, because of Rawls's classic and somewhat bizarre claim that you should adopt maximin behind the veil of ignorance.
(5) Okay, I have a somewhat technical objection here. In the argument, you establish that those in the egg/veil of ignorance condition would be rational and impartial, that they would be utilitarian, that their actions would be right, and that rationality and impartiality are necessary and sufficient conditions for rightness. However, you never actually rule out that there are alternative ways of being rational and impartial which aren't utilitarian but which nevertheless count as rational and impartial: ways of being rational and impartial that apply when one is not behind the veil. Maybe outside the egg and veil of ignorance scenarios, there are other ways of acting that count as rational and impartial, and which are therefore permissible by your 1. Is it plausible that there are such things? Hard to say.
1) I have an article objecting to the first view, which I linked in the post: https://benthams.substack.com/p/believers-in-special-obligations
4) All the worse for Rawls . . .
5) Premise 1 says 'if and only if', so any scenario in which rational and impartial people would act in some way entails that one should act in that way. Of course, this premise then assumes that all rational and impartial ways of making decisions converge (otherwise the premise is false), but that seems eminently plausible. It's hard to think of a counterexample, and if you buy the broad analysis that immorality stems from irrationality or partiality, then there couldn't, in principle, be any divergence, because they'd all do the right things.
I've often thought that the best justification for utilitarianism comes from Kant's Golden Rule, as it logically extends into the concept of an objective moral arbiter.
This is not a strange conclusion, since the veil of ignorance was originally developed by William Vickrey and [John Harsanyi](https://www.journals.uchicago.edu/doi/10.1086/257416), the latter of whom was a utilitarian, although Rawls gave it the cool name.
All the veil does is transform your 'inequality attitude' into a 'risk attitude', so people with different attitudes still end up with different ideas about what we should do behind the veil (e.g. Harsanyi and Rawls). My friend, the mathematician Jobst Heitzig, fully understands the veil yet still thinks we shouldn't act as utilitarians behind the veil of ignorance, but should instead pursue 'fairness'. What he describes as 'fairness' is something like: if person A costs $100 to save and person B $99, we should roll a 199-sided die and save B if it lands on 1-100 and A if it lands on 101-199, instead of always choosing person B. I disagree, but he clearly understands the veil and just has different priorities.
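To make that concrete, here's a minimal sketch (my own illustration in Python, not Jobst's actual proposal) of the two decision rules: the 'fairness' lottery saves B with probability 100/199 and A with probability 99/199, while the utilitarian rule always saves the cheaper person.

```python
import random

def fairness_lottery():
    """The lottery described above: roll a 199-sided die,
    save B (costs $99) on 1-100 and A (costs $100) on 101-199."""
    roll = random.randint(1, 199)
    return "B" if roll <= 100 else "A"

def utilitarian_choice():
    """The utilitarian rule in the same example: always save the cheaper person, B."""
    return "B"

# Long-run frequencies: B ~ 100/199 ≈ 0.503, A ~ 99/199 ≈ 0.497
trials = 100_000
share_b = sum(fairness_lottery() == "B" for _ in range(trials)) / trials
print(f"lottery saves B in ~{share_b:.3f} of trials, A in ~{1 - share_b:.3f}")
print("utilitarian rule saves:", utilitarian_choice())
```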
I do think the veil is better at framing certain issues (like population ethics) than utilitarianism. Maximizing happiness leads to the repugnant conclusion, but behind the veil we can just choose not to do that. This leads to something like [meta-preferentialism](https://forum.effectivealtruism.org/posts/m5gowRugYQW8zQybh/meta-preference-utilitarianism) (a very old post of mine; it doesn't reflect my current thinking or writing style, but it should broadly gesture at what I mean).
The same is true for risk aversion: most people would not take a 1% chance of creating a world with trillions of maximally happy people and a 99% chance of destroying everything over a guaranteed world with billions of maximally happy people. Utilitarianism requires the former; contractualism allows the latter.
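To spell out the arithmetic behind "utilitarianism requires the former" (a back-of-the-envelope sketch with stand-in numbers, treating 'trillions' as 10^12 people, 'billions' as 10^9, and each maximally happy person as one unit of value):

```python
# Stand-in numbers for the gamble above (assumptions for illustration only).
p_win = 0.01
gamble_people = 1e12   # trillions of maximally happy people if the gamble pays off
sure_people = 1e9      # billions of maximally happy people, guaranteed

expected_gamble = p_win * gamble_people + (1 - p_win) * 0.0  # world destroyed otherwise
print(expected_gamble)                # 1e10: ten times the guaranteed outcome
print(expected_gamble > sure_people)  # True: expected-total-welfare maximization takes the gamble
```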
EDIT: How do you make a word a hyperlink?
EDIT 2: Switched A and B
Your friend's view violates ex ante Pareto. Now, that's a thing you can do, but it isn't very appealing.
The veil doesn't help with population ethics at all: there are only two natural ways to extend it to population ethics; one implies average utilitarianism, the other total utilitarianism. But average utilitarianism is crazy, implying you should create miserable people in hell as long as they're less miserable than the existing people.
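A toy numerical illustration of that implication (my numbers, purely for illustration): ten existing people at welfare -10 have average welfare -10; adding one person at -5 raises the average to roughly -9.5, so average utilitarianism counts the addition as an improvement even though total welfare falls from -100 to -105.

```python
def average(welfares):
    return sum(welfares) / len(welfares)

existing = [-10] * 10        # ten existing people, all miserable ("in hell")
with_new = existing + [-5]   # add one person who is miserable, but less so

print(average(existing), average(with_new))  # -10.0 vs ~-9.55: the average improves
print(sum(existing), sum(with_new))          # -100 vs -105: the total gets worse
```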
Oops, I made a typo: I switched up A and B. B gets 1-100 and A gets 101-199, because B is cheaper. I'll edit it. There are many interpretations of ex ante Pareto and I haven't read all of them. But I don't think the violation follows from the following definition:
A prospect P ex-ante Pareto-dominates prospect Q if and only if all individuals in the population would rather have P than Q, before knowing how the randomness resolves (i.e., before the die lands).
In the A, B example, the prospects we consider all have the feature that they redistribute chances between individuals. It is not the case that the prospect P that helps person B and costs me $99 (utilitarianism) is preferred by all relevant individuals (A, B, and me) to the prospect Q that tosses the 199-sided die ('fairness' maximizing). While person B might be seen as preferring P (utilitarianism), person A would presumably prefer the lottery Q. So P does not ex ante Pareto-dominate Q.
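A small worked check of that, under the simplifying assumption (mine, not from the thread) that each person gets welfare 1 if saved and 0 otherwise: under P, B's ex ante expectation is 1 and A's is 0; under Q, B's is 100/199 and A's is 99/199. B prefers P, A prefers Q, so P does not ex ante Pareto-dominate Q.

```python
# Ex ante expected welfare for each person, assuming welfare 1 if saved and 0 otherwise.
P = {"A": 0.0, "B": 1.0}              # utilitarian rule: always save B
Q = {"A": 99 / 199, "B": 100 / 199}   # the 'fairness' lottery from the die example

dominates = all(P[person] > Q[person] for person in P)  # strict preference for everyone
print({person: (P[person], Q[person]) for person in P})
print("P ex ante Pareto-dominates Q:", dominates)        # False: A does better under Q
```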
…
What do you mean by 'natural'? Philosophers have proposed countless ways to deal with population ethics, including utilitarians who propose various methods of discounting, asymmetries between suffering and happiness, etc. A lot of it seems like arbitrary differences in subjective preferences (not all of it; if your theory is, e.g., self-contradictory, I would consider it 'objectively wrong', but a lot of what remains seems subjective). Contractualism helps because it allows for compromise. So in a world with two people, one who wants there to be a billion people and one who wants there to be three billion people, they can do a value-handshake and pursue two billion people (assuming they care equally about this, preferences are continuous, etc.).
The same goes for risk aversion when it comes to non-existence. We can say that 0% is the 'natural' or 'rational' rate, but why? Seems pretty arbitrary to me. In a world with two people, one with a rate of 1% and another with 3%, we can just do a value-handshake and let contractualism give us 2% (assuming bla bla bla). I can't think of a reason why prescribing these people 0% is 'rational'.
EDIT: Or rather, I don’t think allowing 2% is irrational
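A minimal sketch of the equal-weight compromise in both of the examples above (my own toy model, assuming single-peaked, continuous preferences and equal bargaining influence, in which case the handshake modeled here is just the mean of the parties' ideal points):

```python
def equal_weight_compromise(ideal_points):
    """Toy value-handshake: with equal influence and single-peaked, continuous
    preferences, the compromise modeled here is the mean of the ideal points."""
    return sum(ideal_points) / len(ideal_points)

print(equal_weight_compromise([1e9, 3e9]))    # 2e9: two billion people
print(equal_weight_compromise([0.01, 0.03]))  # 0.02: a 2% risk-aversion rate
```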
Okay, well, when you're taking averages, you either include possible beings or only actual beings. The first has the same results as not taking the average at all, and the second has crazy results, like that one should create miserable people in hell.
It's not about everyone preferring it; it's about everyone being made better off. Of course, that will depend on your theory of well-being.
It's not (always) averages; people can make tradeoffs. If some people care more about population ethics and others care more about risk aversion, they can trade off a bit of their influence in population ethics for more influence in risk aversion and vice versa.
(To clarify my point about risk aversion: we can make an argument against risk aversion in some circumstances, e.g. Dutch books, but we can't make a Dutch book argument against risk aversion when it comes to non-existence, because we can't run the bet multiple times.)
I don't think I've actually met anyone who is an averagist in the sense of creating people in hell. If people behind the veil say they wouldn't like to be created in hell and also that we should pursue 'naïve averagism', that's a contradiction, and I consider that objectively wrong. I think real-life 'averagists' use things like the pain-pleasure asymmetry.
If we’re all rational and impartial behind the veil, then I think those preferences do point towards 'everyone being made better off'.
I think our disagreement might come down to different metaethics. I'm a moral antirealist and you mentioned you're a moral realist; would you be up for an adversarial collaboration, or a back-and-forth?