8 Comments

"1. An action is right if and only if it would be taken by one who was fully rational and impartial.

2. One behind the veil of ignorance or in the egg scenario who was given full rationality would be fully rational and impartial.

3. Therefore, an action is right if it would be taken by one who was in the veil of ignorance or egg scenario and was fully rational.

4. One who was fully rational in the egg scenario or veil of ignorance scenario would take only those actions prescribed by utilitarianism.

5. So an action is right only if it is prescribed by utilitarianism."

There's a lot of merit to this argument, but let's quickly break down some forms of counterattack you haven't mentioned:

(1) Many people object to 1 on the grounds that we should be partial towards our nearest and dearest, towards those who are entangled in our lives in various ways, towards those to whom we owe gratitude, towards our constitutive commitments, and so on. Others will argue that we should be biased towards acting on our own desires precisely because they are our desires.

(4) is considered dicey, perhaps unfairly, because of Rawls's classic and somewhat bizarre claim that you should adopt maximin reasoning behind the veil of ignorance.

(5) Here I have a somewhat technical objection. The argument establishes that those in the egg/veil-of-ignorance condition would be rational and impartial, that they would act as utilitarians, that their actions would be right, and that rationality and impartiality are necessary and sufficient for rightness. However, it never rules out that there are alternative ways of being rational and impartial which aren't utilitarian but which nevertheless count as rational and impartial: ways that apply when one is not behind the veil. Maybe outside the egg and veil-of-ignorance scenarios there are other ways of acting that count as rational and impartial, and which are therefore permitted by your 1. Is it plausible that there are such things? Hard to say.
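To make the gap explicit, here is a minimal sketch in Lean (the predicate names and the regimentation are mine, just one way of reading the premises): premises 1, 2 and 4 get you conclusion 3, but conclusion 5 additionally needs the converse of premise 2, which the argument never asserts.

```lean
-- Predicate names are illustrative, not from the original post.
variable (Action : Type)
-- `RatImpTakes a`: a fully rational, impartial agent would take `a`.
-- `VeilTakes a`:   a fully rational agent behind the veil/egg would take `a`.
variable (Right RatImpTakes VeilTakes UtilPrescribes : Action → Prop)

-- Premise 1: an action is right iff a fully rational, impartial agent would take it.
variable (p1 : ∀ a, Right a ↔ RatImpTakes a)
-- Premise 2: whatever the veiled agent would take, a rational, impartial agent would take.
variable (p2 : ∀ a, VeilTakes a → RatImpTakes a)
-- Premise 4: the veiled agent takes only utilitarian-prescribed actions.
variable (p4 : ∀ a, VeilTakes a → UtilPrescribes a)

-- Conclusion 3 goes through: actions the veiled agent would take are right.
example (a : Action) (h : VeilTakes a) : Right a :=
  (p1 a).mpr (p2 a h)

-- Conclusion 5 only goes through with the *converse* of premise 2 (every rational,
-- impartial way of acting is one the veiled agent would take) added as `p2'`:
example (p2' : ∀ a, RatImpTakes a → VeilTakes a) (a : Action) (h : Right a) :
    UtilPrescribes a :=
  p4 a (p2' a ((p1 a).mp h))
```

In other words, unless every rational, impartial way of acting is one the veiled agent would take, rightness can outrun utilitarian prescription.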


I've often thought that the best justification for utilitarianism comes from Kant's categorical imperative, as it logically extends into the concept of an objective moral arbiter.


This is not a strange conclusion, since the veil of ignorance was originally developed by William Vickrey and [John Harsanyi](https://www.journals.uchicago.edu/doi/10.1086/257416), the latter of whom was a utilitarian, although Rawls gave it its memorable name.

All the veil does is transform your 'inequality attitude' into a 'risk attitude', so people with different attitudes still end up with different ideas about what we should do behind the veil (e.g. Harsanyi and Rawls). My friend, the mathematician Jobst Heitzig, fully understands the veil yet still thinks we shouldn't act as utilitarians behind it, but should instead pursue 'fairness'. What he describes as 'fairness' is something like this: if person A costs $100 to save and person B $99, we should roll a 199-sided die and save B if it lands on 1-100 and A if it lands on 101-199, instead of always choosing person B. I disagree, but he clearly understands the veil and simply has different priorities.
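To be concrete, here's a small sketch of the lottery as I understand it; the generalization from the 199-sided die to arbitrary costs is my own reading.

```python
import random

def fairness_lottery(cost_a: float, cost_b: float) -> str:
    """Save each person with probability proportional to the OTHER person's
    cost, rather than always saving the cheaper one. With cost_a = 100 and
    cost_b = 99 this is exactly the 199-sided die: B on 1-100, A on 101-199."""
    total = cost_a + cost_b               # 199 "sides" in the example
    roll = random.uniform(0, total)       # continuous stand-in for the die roll
    return "B" if roll < cost_a else "A"  # B wins with probability 100/199

# B, the cheaper save, should win about 100/199 ≈ 50.25% of the time.
trials = [fairness_lottery(100, 99) for _ in range(100_000)]
print(trials.count("B") / len(trials))
```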

I do think the veil is better at framing certain issues (like population ethics) than utilitarianism. Maximizing happiness leads to the repugnant conclusion, but behind the veil we can simply choose not to accept it. This leads to something like [meta-preferentialism](https://forum.effectivealtruism.org/posts/m5gowRugYQW8zQybh/meta-preference-utilitarianism) (a very old post of mine that doesn't reflect my current thinking or writing style, but it should broadly gesture at what I mean).

The same is true for risk aversion: most people would not take a 1% chance of creating a world with trillions of maximally happy people, and a 99% chance of destroying everything, over a guaranteed world with billions of maximally happy people. Utilitarianism requires the former; contractualism allows the latter.
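To spell out the arithmetic (the population figures are made up for illustration): expected total welfare favors the gamble whenever 1% of "trillions" exceeds "billions".

```python
# Illustrative numbers, not from the comment above.
P_WIN = 0.01

gamble_pop = 2e12  # "trillions" of maximally happy people if the gamble pays off
safe_pop = 8e9     # "billions" of maximally happy people, guaranteed

expected_gamble = P_WIN * gamble_pop  # the other 99%: everything destroyed, 0 welfare
expected_safe = safe_pop

# 2e10 vs 8e9: maximizing expected total welfare mandates the gamble,
# even though it destroys everything 99% of the time.
print(expected_gamble > expected_safe)  # True
```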

EDIT: How do you make a word a hyperlink?

EDIT 2: Switched A and B
