I've never encountered SIA before, so I'm trying to understand why it would be of value in ethics. Is it because it can be used like expected value for an individual, except applied to a population?
How does it handle a simple case of immoral behaviour such as stealing? We understand that stealing is wrong a priori, although perhaps there are cases where it is morally permitted. For my purposes, I am simply presupposing that, generally speaking, we agree stealing is morally wrong.
So, on my understanding of SIA, if I steal something of value from someone else when there is an opportunity where I can reasonably believe it will never be known that I was the thief, I benefit from the action, and therefore can reason, parallel to the reasoning in SIA, that my belief that I will benefit implies there exist, in expectation, more people who will benefit. It serves the greater good.
The Person Directed Harm Principle you mention, which you say SIA supports, would give the opposite result. I am harming the person whose property I am stealing by depriving them of something of value to them, which would then lead me to reason that there exist, in expectation, other people who would be worse off because of my action. It is contrary to the greater good.
Just in case my example sounds too hypothetical, perhaps I can state it in a more specific way. Let's say I am a taxi driver. Some very drunk person leaves their new iPhone in my car. The next day, when I find it, I discover it is not locked with a PIN code, and realise I could just reset the phone to factory settings, take the SIM card out, put my own SIM card in, deny that the phone was left in my cab, suggest it was lost elsewhere, and I've probably just got a free upgrade from my clunky old phone. SIA suggests, on my understanding, that my predicted benefit from this action would allow me to reason that lots of people will probably benefit, making it something that is really good to do.
I understand SIA to mean: I observe that I exist, and I am more likely to exist if other people exist; therefore I expect other people to exist.
In my example: I observe that I will benefit, and I am more likely to benefit if other people will benefit; therefore I expect other people to benefit.
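To make the SIA update itself concrete, here is a minimal numeric sketch. The prior over population sizes and all the numbers are illustrative assumptions only, not anything from the original discussion:

```python
# Toy SIA update: the evidence "I exist" is N times likelier under a
# hypothesis with N observers (N chances for me to be one of them),
# so each hypothesis gets reweighted by N before renormalizing.

prior = {10: 0.5, 1000: 0.5}  # hypothetical prior over the number of observers N

weights = {n: p * n for n, p in prior.items()}  # SIA likelihood proportional to N
total = sum(weights.values())
posterior = {n: w / total for n, w in weights.items()}

print(posterior)  # {10: ~0.0099, 1000: ~0.9901} -- the bigger world dominates
```

Note that this is the update on "I exist", not on "I will benefit"; whether the benefit analogy carries the same likelihood structure is exactly what is at issue in the stealing example above.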
How convincing would an empirical test have to be to convince you that fewer than infinitely many people exist?
Not sure.
In your post you say that the Presumptuous Philosopher objection fails because SIA doesn't require you to reject any physical test which implies fewer observers; the test is just some evidence to the contrary.
Yet I think the problem, which you're hedging away from, is that SIA says we should be *infinitely* sure that there are infinitely many people. This would outweigh any empirical test, because it is a priori *infinite* evidence.
If your position implies infinite a priori reason to reject physical evidence, this is probably a sign that your philosophy has gone wrong somewhere!
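A quick numeric sketch of the worry being raised here: under the SIA weighting, the large world's advantage grows with its observer count, so an experiment with any fixed, finite Bayes factor for the small world gets swamped as that count grows. The specific numbers are made up purely for illustration:

```python
# Small world has 1 observer; large world has n_large. SIA multiplies each
# hypothesis by its observer count; the experiment multiplies the small
# world's odds by a finite Bayes factor `bf`.

def p_small(n_large, bf, prior_small=0.5):
    w_small = prior_small * 1 * bf
    w_large = (1 - prior_small) * n_large
    return w_small / (w_small + w_large)

for n in (10**3, 10**6, 10**12):
    print(f"n_large={n}: P(small world) = {p_small(n, bf=1000):.3g}")
# 0.5, then ~0.001, then ~5e-10: as n_large grows without bound, no finite
# test can keep the small world in play.
```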
No, you're assuming that one is 100% sure of SIA. If you're not, then you should take empirical evidence for a small universe seriously.
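A crude sketch of this reply, holding one's credence in SIA fixed and blending the SIA-weighted posterior with the unweighted one. Everything numeric here is an illustrative assumption:

```python
# With credence c_sia in SIA, mix the posterior you'd get applying the
# SIA weighting with the one you'd get without it.

def p_small(weight_by_n, n_large, bf, prior_small=0.5):
    # Posterior probability of the small (one-observer) world after an
    # experiment with Bayes factor `bf` in its favor.
    w_small = prior_small * bf
    w_large = (1 - prior_small) * (n_large if weight_by_n else 1)
    return w_small / (w_small + w_large)

def mixed(c_sia, n_large, bf):
    return c_sia * p_small(True, n_large, bf) + (1 - c_sia) * p_small(False, n_large, bf)

print(mixed(c_sia=0.9, n_large=10**12, bf=1000))
# ~0.1: even 90% credence in SIA leaves the experiment doing real work,
# because the non-SIA branch takes the evidence at face value.
```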
But if this is true, then it almost seems like you're arguing it's somehow lucky or fortuitous that we happen to be a little unsure about it. Otherwise, we'd be justified in adopting patterns of reasoning that seem obviously fallacious. Or do you agree that, if someone somehow *was* 100% sure of SIA - maybe God came down and whispered into their ear or something - then there'd basically be no reason whatsoever to do the experiment at all?
If you're 100% sure of SIA then there is no reason to do the experiment.
Okay, that makes sense. Although that's only true if you assume the number of possible people is infinite (which I know you do) - if the number of possible people is in fact limited by some unknown bound, then SIA doesn't necessarily tell you anything definite here. So you could perhaps see the experiment as a way of testing that hypothesis instead!
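One way to see the point about a bounded space of possible people: if the observer count is capped at some N_max, the SIA advantage of the big world is capped too, and a strong enough finite test can overcome it. The bound and Bayes factor below are illustrative assumptions:

```python
# With a hard cap N <= N_MAX, the large world's SIA weight is at most N_MAX,
# so evidence with a Bayes factor well above N_MAX favors the small world.

N_MAX = 10**6        # hypothetical bound on the number of possible people
prior_small = 0.5
bf = 10**7           # Bayes factor of the experiment, favoring the small world

w_small = prior_small * bf
w_large = (1 - prior_small) * N_MAX
print(w_small / (w_small + w_large))  # ~0.91: the test wins once bf >> N_MAX
```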