What's next, a trillion anchovies vs. a baby? Welcome to moral escapism.
I think it would be worse to torture a trillion anchovies than one baby.
Serious question: have you ever met a baby in person?
I get it: the more fantastical and paradoxical, the better.
I do think a lot of the 'arguments' presented against Florence's essay focused primarily on what they saw as an unnecessarily combative tone; calling people irrational in a philosophical sense doesn't really endear you to anyone. I agree with her (and your) conclusions about shrimp, but imo the most important lessons of the drama are that 1. people look for any excuse to get angry at a conclusion they dislike at first exposure and 2. philosophers aren't the best at gauging the emotional reaction of the average person. I've done the same many times before!
2 is so true
Well done! I love this sort of stuff from you. I wish you were as good on politics-related stuff.
"You might worry that this generalizes to bacteria, but bacteria aren’t conscious. There was a first conscious organism, and that organism mattered infinitely more than the one before."
I have a very hard time believing this, both that there was a discontinuous threshold where consciousness was created in a single generation and that a conscious being should be valued infinitely more than a non-conscious being.
This does not diminish the strength of the argument for shrimp welfare, but I view this as an extraordinary claim among many other relatively ordinary claims.
Yeah, that's a coherent view. I find it implausible, but it's a view you can adopt. But if you have that view, you should think there's a zone where it's vague whether the creatures have moral worth, and then a zone below that where they clearly don't.
What is implausible about thinking that there is a zone where it's vague whether the creatures have moral worth, and then a zone below that where they clearly don't? Additionally, I don't view conscious organisms as automatically being infinitely more valuable than non-conscious organisms, because non-conscious organisms can potentially develop into conscious organisms, and large groups of non-conscious organisms can create conscious behavior as an emergent property (as they do for humans).
Infinite? Definitely
"It is better to prevent Graham’s number shrimp from being tortured than to extend a person’s life by a single millisecond. "
I disagree with this 'obvious' principle.
Nuts!
Why? I have a different principle: Human life is infinitely more valuable than lesser life.
Do you favor enforcement of laws against animal abuse? Money spent on that could instead be spent on saving a marginally higher number of human lives in expectation.
In what context? I think it's cruel that the laws against child abuse are less enforced than the laws against animal abuse.
And I am more than fine with abusing animals via drug testing for human purposes.
In the context of, say, someone torturing their pet for fun and posting the video on their website for all to see.
The crux of the argument is that the ratio of (1 human life / 1 shrimp life) has a finite value. If true, it follows that there is some magnitude of shrimp suffering that is worth a human life.
Flo convincingly argues that 1 human life does not have infinite value. But I don't really think the numerator is where most of the disagreement comes from; I think it's that there are many people who really do think shrimp are worth 0.
Just as she discovered in her Twitter poll after I asked her this: if you presented this argument with housecats instead of shrimp, I don't think there would be any pushback, thereby demonstrating that it is the choice of the denominator that is contentious.
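To spell out the step from "finite ratio" to "some number of shrimp suffices", here is a minimal sketch; the symbols r, v_human, and v_shrimp are my notation, not anything from the post:

```latex
% Assumption: shrimp have some nonzero value, v_shrimp > 0.
\[
r \;=\; \frac{v_{\text{human}}}{v_{\text{shrimp}}} \;<\; \infty
\quad\Longrightarrow\quad
N \cdot v_{\text{shrimp}} \;>\; v_{\text{human}}
\quad \text{for any integer } N > r.
\]
% Rejecting the conclusion therefore requires r to be infinite,
% i.e. v_shrimp = 0 -- which is exactly the denominator dispute.
```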
Replied on notes.
If you say house cats, it’s easy to extend that to the people who own those cats (thereby making the question about human suffering). If you said ‘kill your mother or a billion cat-like aliens on a distant planet’, I suspect most people would save their mom.
What if I don’t give a shit about shrimp? Why do you assume people should care about shrimp suffering? Why are you prioritizing shrimp?
Bentham, please kill Shoe on head in 1 on 1 gladiatorial combat to the death. For the honor of the utilitarian cause.
lol
Once you start calculating value, you have already lost. Human lives are a sacred value, not a trade-off value. In other words, you just don’t want to be the one with human blood on your hands, as this pollutes you. This is why the Romans did not allow priests to fight in war: one cannot sacrifice to the gods with polluted hands.
But I always inherently value shrimp suffering at 1/(n+1) times that of human suffering, where n is the number of shrimp in the thought experiment.
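For what it's worth, that weighting is rigged so the human side always wins by construction; a quick check of the arithmetic (mine, not the commenter's):

```latex
\[
\underbrace{n \cdot \frac{1}{n+1}}_{\text{total shrimp weight}}
\;=\; \frac{n}{n+1} \;<\; 1
\quad \text{for every finite } n,
\]
% so no finite number of shrimp ever outweighs one human. Choosing
% the weight after seeing n is lexicographic priority in disguise.
```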
I’m not a philosopher, but I followed your link to the article about making repeated deals with the devil for many more years of life in exchange for a tiny increase in the probability of instant death. It was very interesting. The authors didn’t offer a strong conclusion, but seemed to suggest that while it was obviously good to press the button once, it was just as obviously bad to press it an arbitrarily large number of times. Now again, I’m no expert, but I feel like stuff like this comes up a lot in philosophy. So while I agree that it’s good to press your shrimp-saving button once, I don’t agree that it necessarily follows that it’s good to press it an arbitrarily large number of times.
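A crude sketch of why each press can look good while many presses look terrible; the 10-years-per-press and 1%-death numbers below are my own illustration, not the linked article's:

```python
# Toy model of the devil's button (illustrative parameters only):
# each press adds `bonus` years of life but carries an independent
# `death_prob` chance of instant death.

def expected_years(presses: int, base: float = 50.0,
                   bonus: float = 10.0, death_prob: float = 0.01) -> float:
    """Expected remaining years if you must survive every press to
    collect the bonus; dying on any press means zero further years."""
    survive_all = (1 - death_prob) ** presses
    return survive_all * (base + bonus * presses)

for n in (0, 1, 10, 100, 1000):
    print(n, round(expected_years(n), 2))
# 0 -> 50.0, 1 -> 59.4, 10 -> 135.66, 100 -> 384.33, 1000 -> 0.43
# One press clearly helps; a thousand presses all but guarantee death.
```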
I guess it seems to me pretty clear that no matter how many times you've already pressed the button, it's still clearly better to press the additional shrimp-saving button. That's not as clear in the low-probabilities case.
I’ve thought about it some more, and to me the cases are actually even more similar than I initially thought. If I found out you stole a millisecond of my life without my permission to save even one shrimp, I wouldn’t particularly care; I will never notice the loss, and yay, now there’s a happy shrimp where there wasn’t one before. But if you stole 50 years of my life to save Graham’s number of shrimp, I would be super mad, because now I can expect to die at any moment rather than living a full life. The milliseconds don’t add up linearly.
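One way to formalize "the milliseconds don't add up linearly" (my formalization, not the commenter's): let the harm of losing a block of time t be superlinear, say D(t) = t^a with a > 1. Then pricing a theft millisecond by millisecond gives a very different answer from pricing it as one block:

```latex
% Split a total theft T into N millisecond-sized pieces and value
% each piece on its own:
\[
N \cdot D\!\left(\frac{T}{N}\right)
\;=\; N \cdot \left(\frac{T}{N}\right)^{a}
\;=\; \frac{T^{a}}{N^{\,a-1}}
\;\longrightarrow\; 0 \quad \text{as } N \to \infty,
\]
% while the same T taken in one 50-year block stays at D(T) = T^a.
% Many tiny thefts can be nearly free even when their total is not.
```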
This argument is so silly I've actually started eating shrimp again in protest.
When are you going to save 10^100 shrimp? How does this relate to the real world?
Fascinating rebuttal:
https://open.substack.com/pub/davidschulmannn/p/my-theory-of-morality?utm_source=share&utm_medium=android&r=33pit
Economists appreciate this argument. Most people have an intuitive sense that there are lexicographic preferences -- A and B are both good things, but I maximize my A before even considering getting some B. This tends to break down along the lines you argued: suppose you could get a zillion things of B, but had to give up one femto-speck of A? Well, OK, I'll do that. If so, then there is a tradeoff between A and B, albeit at a high ratio.
The same should apply to ethical questions. Would you give 1 cent to prevent horrible pain for all the shrimp in the world? Most people would say yes. But if you can save a human life by buying mosquito nets for $5,000 (I don't know the current price per life, but I have heard it is somewhere around there), then the implied shrimp-to-person ratio is 500,000 to 1. As long as we value something positively, there can be a tradeoff between human lives and that thing.
You can see this most clearly in incremental risk -- driving somewhere to accomplish something while incurring a small risk of an auto accident.
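A back-of-the-envelope version of that revealed exchange rate; the $5,000-per-life figure is the commenter's rough recollection and the one-cent figure comes from the hypothetical above, so treat both as illustrative:

```python
# Revealed tradeoff between human lives and shrimp pain, using the
# comment's illustrative numbers (not authoritative cost figures).

cost_per_life = 5_000.00  # rough mosquito-net cost per life saved
shrimp_payment = 0.01     # one cent to spare all shrimp horrible pain

# If willingness to pay tracks value, the implied exchange rate is:
ratio = cost_per_life / shrimp_payment
print(f"{ratio:,.0f} to 1")  # 500,000 to 1 -- large, but finite
```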
"There was a first conscious organism, and that organism mattered infinitely more than the one before." Good one!