36 Comments
Joseph:

For what it's worth, I don't think you've presented the Rawlsian version of the veil of ignorance here, because your argument relies on the beings behind the veil knowing, e.g., that they'd be much more likely to become a shrimp than a human.

From the authoritative Utilitarianism.net: "The “veil of ignorance” thought experiment was originally developed by Vickrey and Harsanyi, though nowadays it is more often associated with John Rawls, who coined the term and tweaked the thought experiment to arrive at different conclusions. Specifically, Rawls appealed to a version in which you are additionally ignorant of the relative probabilities of ending up in various positions, to block the utilitarian implications and argue instead for a “maximin” position that gives lexical priority to raising the well-being of the worst-off."

I think it's fair to say "well, a version of the veil of ignorance that prevents you from knowing the probabilities is arbitrarily restrictive," but I don't think it's fair to say (on the basis of what you've argued here) "Rawlsians should be on board with strongly prioritizing animals."

blank:

>A first thing one notices when they imagine they’re equally likely to be any of the conscious creatures ever born is that the odds they’ll be a human are very low. It’s about 600 times more likely that you’d be born this year as a chicken in a factory farm than a human.

This glosses over the fact that there are no 'odds' of a person being born as either a chicken or a human. These sorts of comparisons are logically unsound and worthless.
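
(For what it's worth, the quoted ~600x looks like a plain ratio of annual birth counts. A rough sketch with assumed round numbers, roughly 75 billion chickens hatched into factory farms per year against roughly 130 million human births:

$$
\frac{7.5 \times 10^{10}}{1.3 \times 10^{8}} \approx 580 \approx 600
$$

But a frequency ratio is not the same thing as a probability that any given individual could have been either one.)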

Bentham's Bulldog:

But there doesn't have to be an actual objective probability here. All that's needed is uncertainty.

blank:

The kind of uncertainty you are implying is that humans and chickens have souls before conception, and some kind of dice are rolled to decide on whether each soul becomes a human, a chicken, or something else. If you want to make that argument, you would be wise to bring scripture into it and to leave John Rawls in the dumpster where he belongs.

Bentham's Bulldog:

No, no, I'm not saying that. I'm not saying there actually is such a process. The veil of ignorance is a thought experiment. It asks you how you'd want to set up a society *if you didn't know who you were*. That doesn't mean you actually could be other people. As an analogy, if you didn't know the first digit of pi, you should expect it's probably not 3, even though we in fact know it couldn't have been anything else!

JerL:

I think the point is that it's not clear what exactly the veil of ignorance is assuming: I know what it means not to know the first digit of pi; I'm not sure what it means not to know who I am, in a sense broad enough to include uncertainty about whether I might be a shrimp.

FWIW, I agree with the intuition behind the veil, that there is *some* sense in which I "could have been" a shrimp, but making the "could" precise here is actually pretty hairy, and naive formulations tend to imply a pretty crazy metaphysics, like that my pre-existing soul was waiting to be implanted in a creature and its host was picked at random.

A maybe more convincing reformulation is to think about uncertainty over whether we might stand in the same relation to some other entity as animals do to us.

Bentham's Bulldog:

I just don't get what's so confusing. You often don't know whether you're the person who has some property. So imagine being wholly in the dark about who you are.

JerL:

Any situation in which I am capable of noticing my confusion about who I am, or reasoning about that uncertainty, is one that I am ~100% confident excludes the possibility of me being a shrimp.

Michael:

You have snuck utilitarianism in through the back door by imagining everyone is calculating probabilities from behind the veil of ignorance. If everyone in the original position is risk-neutrally maximizing expected value, they'll wind up designing a utilitarian society. (Or maybe they're like standard behavioral agents with a taste for low variance; okay, then you get a utilitarian society that hedges a little bit.)

But Rawls argues at length that, even if rational agents in general should be good orthodox Bayesian utility maximizers, in the original position, one should instead use the maximin criterion, not any kind of expected value calculation. And this leads to the difference principle rather than utility maximization.
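
To make the contrast concrete, here's a toy sketch; the societies, welfare numbers, and probabilities are all made up purely for illustration:

```python
# Toy contrast between risk-neutral expected-utility maximization and
# Rawls's maximin, from behind the veil of ignorance.
# All welfare numbers and probabilities are made up for illustration.

societies = {
    "high-average": [10, 10, 10, -5],  # better on average, one badly-off position
    "levelled":     [4, 4, 4, 4],      # worse on average, no one badly off
}
p = [0.25, 0.25, 0.25, 0.25]  # equal chance of occupying each position

def expected_utility(welfare, probs):
    """Risk-neutral EU: weight each position's welfare by its probability."""
    return sum(w * q for w, q in zip(welfare, probs))

def maximin(welfare):
    """Rawls's rule: judge a society solely by its worst-off position."""
    return min(welfare)

for name, welfare in societies.items():
    print(f"{name}: EU={expected_utility(welfare, p)}, maximin={maximin(welfare)}")

# EU maximization picks "high-average" (6.25 > 4.0); maximin picks
# "levelled" (4 > -5). Note that maximin never consults p at all: the
# verdict is the same whether the worst position holds one person or a billion.
```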

What the difference principle would look like if the original position included animals, I have no idea. Probably it would show even more concern for insects, though it would also make for a stranger and less convincing argument.

Bentham's Bulldog:

I haven't assumed that everyone is risk-neutrally maximizing expected value. I've just assumed that people would care, from behind the veil of ignorance, about the intensely suffering creatures that they'd be near-guaranteed to be!

Now, clearly if one were behind the veil of ignorance, they would not only care about the single worst-off individual. Preventing a genocide would be choiceworthy even if it made the single worst-off person an infinitesimal amount worse off.

Michael:

Once you bring in probability (even if just by counting), there is some kind of expected utility calculation going on. Maximin truly doesn't care about probability.

Rawls actually thinks that, from behind the veil of ignorance, people would care only about the single worst-off individual. I think this incredible paranoia is the weirdest thing about Rawls.

I guess it probably makes a little more sense from the perspective of "designing the political system of a society" rather than "collectively agreeing on the optimal world state".

Bentham's Bulldog:

But if you agree that this is "incredible paranoia" then it sounds like this isn't the right procedure for decision making from behind the veil of ignorance!

Michael:

It seems like a terrible procedure for decision making both behind and in front of the veil of ignorance, unless you actually have reason to believe you are facing an enemy playing against you.

I think it probably is the case that non-utilitarians who find the original position compelling under one decision rule or another, and who are willing to countenance including animals in it, would find they have to care a lot about animals.

However, if you believe people should be doing EU maximization from behind the veil of ignorance, then you are already just a utilitarian anyway.

Bentham's Bulldog:

I think you're equivocating on EU maximization. On the one hand you say it's an inevitable result of "bring[ing] in probability." On the other hand, you say it commits you to utilitarianism! Surely merely caring about probabilities behind the veil of ignorance doesn't commit you to utilitarianism!

And note: my claim is very modest. I haven't said that the veil of ignorance is a perfect guide. I've just said it's a pretty good guide to impartial value.

Michael:

I think I've written in a confusing tone somehow. My claim is not that your argument here doesn't go through on its own terms; it's just that a true Rawlsian wouldn't care how many bugs there are. Most likely even one suffering bug would be enough to worry them.

(I think there probably are other objections a Rawlsian might make, like what does it mean for nonhuman animals to be included in the original position, when in the Rawlsian story so much is tied up with a picture of humans as equal rational agents?)

Linch:

Yes, and Rawls is obviously wrong here. The maximin principle is utterly insane, and I do not use those words lightly.

David Rosania:

Not animals. Shrimp. Either there is a threshold below which behavior is unconscious, in which case I do not think any of the evidence you have presented in your posts persuasively shows that shrimp exceed it, OR there are gradations of consciousness and shrimp are BARELY conscious (this is what I believe, without evidence), which militates against your percentage and arithmetic analyses. How do we assign weight to different levels of consciousness when deciding how to invest limited resources in minimizing suffering? What is the relative value of the suffering of a conscious human vs. a less conscious cow vs. a barely conscious or non-conscious shrimp? Is a human's capacity to suffer 2x a shrimp's? 10x? A million x? Does it even make sense to attempt the comparison?

The Solar Princess:

I wonder how you should anthropically weigh all minds when you do the veil of ignorance exercise. Definitely not by entity-counting, as there is no sharp discontinuity between conscious and non-conscious matter. Is there a "standard" answer?

Noah Birnbaum:

The same arguments can be made for future people!!

تبریزؔ • Tabrez • तबरेज़:

There's a 0% chance that analytical philosophers have any consciousness, given the brain vomit I've been reading on Analytical philosophy Substack.

David Rosania:

Committed Rawlsian and vegetarian here. Your argument stands or falls on whether animals/insects/fish are conscious, even partly (whatever that means). First, despite your previous posts presenting data in support of this claim, we remain unsure whether they are conscious in a way that allows for suffering. I have been conscious but under local anesthesia and experienced no suffering despite being injured, cut into, or theoretically even killed. Other times I have been conscious and in pain but not suffering - e.g. hard exercise, headaches, picking a zit ;). I have been with my dog many times while she was getting a vaccine and her tail wagged the whole time.

I would contend that it is not consciousness but rather the capacity to suffer (a la JB) that matters. So here's why I'm replying:

Suppose suffering is analogous to quantum mechanics. You can fire a red photon at an atom a zillion times and never eject an electron, but fire a blue one and you'll get one almost every time. Now suppose that suffering requires pain, aversiveness (aka emotion), and awareness (short-term memory). Below a threshold, no suffering; maybe a nuisance, or nothing. So shrimp swim away from an electric charge, and learn not to go back, but have no aversiveness/emotion and no memory of what happened. They know not to go there, but have no idea why and no fear of the charge. Their experience(s) never cross the threshold into suffering, and their behavior change is due to implicit learning - or neuronal rewiring - without any negative residue.

And a zillion times no suffering is no suffering.
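
To make the threshold picture concrete, a toy sketch (the threshold and intensity values are made-up numbers):

```python
# Toy version of the threshold model described above: an aversive event
# contributes suffering only if its intensity clears a threshold.
# The threshold and intensities are made up for illustration.

THRESHOLD = 5.0  # hypothetical minimum intensity for an experience to count as suffering

def suffering(intensity: float) -> float:
    """Sub-threshold events contribute nothing, no matter how many occur."""
    return intensity if intensity >= THRESHOLD else 0.0

# A zillion sub-threshold events still sum to zero suffering, just as a
# zillion red photons never eject an electron:
print(sum(suffering(1.0) for _ in range(1_000_000)))  # 0.0
print(suffering(6.0))  # 6.0: one supra-threshold event does count
```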

Still not eating them though.

Bentham's Bulldog:

It doesn't depend on that. Even if you're not sure whether, say, fish are conscious or can suffer, even if you think there's only a 1% chance they can, you're still vastly likelier to be a suffering animal than a human. If you were behind the veil of ignorance, even if you thought animals had low odds of sentience, you'd still give them a lot of weight given their sheer numerosity.
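
To put rough, purely illustrative numbers on it: suppose a 1% credence that fish can suffer, and something like $10^{12}$ fish hatched per year against roughly $1.3 \times 10^{8}$ human births. The sentience-discounted count is

$$
0.01 \times 10^{12} = 10^{10} \gg 1.3 \times 10^{8},
$$

still about 75 fish for every human even after the hundredfold discount.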

David Rosania:

I think the chance is 0%. They learn and behave without consciousness and without suffering. They do not know what they know, or that they know. They are just very complex algorithms with the awareness module left out.

This is or will be a huge issue with AI.

Bentham's Bulldog:

You think there is a 0% chance animals are conscious? You think it's likelier that I'd enter a lottery and win it ten times in a row than that fish have experiences? That seems utterly absurd!
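
(For scale, assuming, say, 1-in-$10^{7}$ odds per draw:

$$
\left(10^{-7}\right)^{10} = 10^{-70},
$$

which is about as close to zero as a credence can meaningfully get.)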

Vikram V.:

> If you were 100,000 times likelier to be a fish than a person, would you really deny that it’s a big deal when fish suffocate to death in a barrel?

Yes.

Woolery:

>Our failure to extend empathy to animals is purely a result of selfishness; it would cease immediately if we had any chance of being them.

And by extending empathy to wild animals you in part mean paving over their habitats, sterilizing them, and driving many of their species into extinction?

Bentham's Bulldog:

Well, once you start thinking they matter, there are further questions about what you should do to safeguard their interests.

blank:

If interstellar conquering aliens are real, this will be their excuse for exterminating the human race.

JerL:

Or they'll say, "are humans even conscious? They're, what, 1% likely to be 10% as conscious as us; it would be insane to consider their interests at all".

Or, "any ethics that thinks the interests of any number of humans could ever be more important than even a single Omicron Perseid is obviously completely broken".

Woolery:

Or it can simply be the excuse offered by people who are too certain in their beliefs for exterminating whomever or whatever they wish.

Woolery:

Right. And humanely driving many species to extinction is one way in which we might safeguard their interests?
