1 Population ethics
If you’re one of those people who finds the sort of bashing of religion and anthropic stuff tiresome or offensive… just give me five more minutes.
The self-indication assumption (SIA) claims that, all else equal, you should favor hypotheses on which more people exist, because if more people exist it’s more likely that you’d be one of them. I’m extremely sympathetic to SIA—in fact, I think that the truth of SIA is one of the biggest no-brainers on a hotly contested philosophical topic.
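To make that update concrete, here is a minimal sketch in Python (with invented priors and population sizes, purely for illustration) of how SIA weights hypotheses: the posterior probability of a world is proportional to its prior times the number of observers it contains.

```python
# A toy SIA update (all numbers invented for illustration):
# posterior(world) is proportional to prior(world) * number_of_observers(world).

worlds = {
    "small": {"prior": 0.5, "observers": 10},
    "big":   {"prior": 0.5, "observers": 1000},
}

# Weight each world's prior by how many observers it contains...
weights = {name: w["prior"] * w["observers"] for name, w in worlds.items()}

# ...then normalize to get the SIA posterior.
total = sum(weights.values())
posterior = {name: weight / total for name, weight in weights.items()}

print(posterior)  # {'small': ~0.0099, 'big': ~0.9901}: SIA favors the big world
```

With equal priors, the thousand-observer world comes out about a hundred times likelier than the ten-observer world, which is exactly the “more people is more likely” verdict.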
True things tend to make sense of a lot of areas. Having the true view of fundamental reality, for instance, should clear up a lot of puzzles in other areas. Here, I’ll argue SIA makes the best sense of population ethics. Here’s a plausible—yet confusing—principle in population ethics that, I shall argue, only makes sense if SIA is true:
Person Directed Harm Principle: For any action that is wrong because of its negative effects on people, there exist some people who would be worse off, in expectation, because of the action, even if they didn’t know who was affected by the action.
For instance, suppose that I shoot at 5 random people. That action is wrong—at least in part—because of its negative effects on people. If no one knew whom I shot at, then everyone should expect to be worse off because of it—because they might get shot.
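To put a number on that expectation (using invented figures for the population size and the harm of being shot), each person’s expected welfare drops by their chance of being shot times the harm of a shooting:

```python
# Toy expected-harm calculation (all values invented for illustration).
N = 1_000        # hypothetical population size
shots = 5        # people shot at random
h = 100.0        # hypothetical welfare loss from being shot

p_shot = shots / N            # each person's chance of being shot: 0.005
expected_harm = p_shot * h    # expected welfare loss per person: 0.5

# Positive for everyone, so everyone is worse off in expectation,
# just as the Person Directed Harm Principle requires.
print(expected_harm)
```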
I’ve intentionally written the principle in a very reserved way. This only applies to actions that are wrong because of their negative effects on people—thus, there are no counterexamples about the wrongness of destroying nature or violating special obligations which don’t make anyone worse off in expectation.
Now, imagine the world is filled with very miserable people and consider an action that creates lots of miserable people (though slightly less miserable than the existing people). Seems wrong! In fact, it seems wrong because of its negative effects on people. The thing that makes it wrong isn’t that you’re violating a promise or destroying nature—it’s that you’re making people badly off.
But suppose SIA is false: who is worse off in expectation? The answer is no one. No one should think they’re more likely to be harmed by the action, because one shouldn’t think one is more likely to exist in a world with more people. Someone who isn’t sure which action was taken has no reason to think that the wrong action made them worse off in expectation, for they don’t take the existence of more people to make their own existence any likelier.
In fact—weirdly—alternatives to SIA would say that this makes people better off in expectation: if there are more people, one is no likelier to exist, but one now has a lower chance of being among the maximally miserable people. Thus, it seems that only SIA lets one accept the Person Directed Harm Principle without being pushed toward something like average utilitarianism as the right way to reason about morality.
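To see that “weirdly better off” verdict numerically, here is a toy calculation with invented welfare levels. On a self-sampling-style alternative to SIA, your expected welfare tracks the population average, so adding slightly-less-miserable people raises it:

```python
# Toy welfare numbers (invented for illustration): 10 existing people at
# welfare -10; the action adds 90 people at welfare -9 (slightly less miserable).

existing = [-10.0] * 10
added = [-9.0] * 90

def average(welfares):
    return sum(welfares) / len(welfares)

# Non-SIA (self-sampling-style) reasoning: treat yourself as a random draw
# from whoever exists, so your expected welfare is the population average.
print(average(existing))           # -10.0  before the action
print(average(existing + added))   # -9.1   after: "better off" in expectation!

# Under SIA, by contrast, each of the 90 added lives goes from (roughly) no
# chance of being yours to a real chance of being yours at welfare -9, so the
# action adds expected harm rather than raising the average you expect.
```

The numbers here are arbitrary; the point is only that averaging-style reasoning delivers the perverse “better off” verdict, while SIA does not.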
2 The presumptuous philosopher
(Credit to Amos Wollen for this idea).
The presumptuous philosopher objection, as commonly stated, is that SIA implies that a presumptuous philosopher could be confident that the results of physics experiments are wrong if they imply that there aren’t many observers. Now, as stated, this is false: SIA says that more observers existing is more likely, not that one should be super confident in this judgment even in the face of apparently convincing contrary empirical evidence.
But, as Amos noted, this is very much like Williams’ integrity objection to utilitarianism. Williams gives two cases—one where someone has to kill one person to prevent that person and others from dying, and another where someone can take an immoral job to prevent another, worse person from getting it—and claims that “Utilitarianism … is committed to saying that the right responses to these cases are obvious.” But, he contends, they are not.
But utilitarianism entails no such thing. It’s true that if utilitarianism is right, then there are right answers in those cases that follow trivially from utilitarianism. But the truth of utilitarianism says nothing about how obvious those judgments are. One could similarly argue against any ethical theory—or any philosophical view more broadly—on the grounds that each supposedly implies its results are obvious, yet those results are non-obvious. “I reject Fermat’s Last Theorem,” cries the hypothetical Williams, “for it implies that it’s obvious that for any positive integers A, B, and C, A^13 + B^13 is not equal to C^13, but clearly that’s not obvious.”
The integrity objection is thus a terrible argument! It’s not the slightest bit convincing. To argue against utilitarianism, one must point to cases where it gets the wrong result, not cases where it gets a result that is right but non-obvious. But this is exactly how the presumptuous philosopher argument works! It faults SIA not for getting the wrong result but for being too confident in the result. Yet SIA per se has nothing to say about how confident one should be that SIA is right.
Of course, one can always formulate it more precisely, talking about whether an ideal reasoner would buy the presumptuous philosopher’s reasoning, or something like that. Then I think the argument fails for other reasons. But, as standardly formulated, it’s just the integrity objection 2.0—and just as unconvincing. We can compare it to the following argument:
Presumptuous ethicist: it’s the year 2100 and scientists discover (somehow) that there are two rival theories. Super duper symmetry is true if and only if moral platonism is true, and super duper uper symmetry is true if and only if moral platonism is false. About to perform the experiment to figure out which is true, a presumptuous philosopher cries out, “Wait, no need to do the test. I have all these a priori arguments for moral platonism.” The philosopher is making a fool of himself!
But this isn’t an objection to moral platonism; it’s an objection to dogmatism. You should think your best views can be overturned by sufficiently convincing empirical evidence. Similarly, the presumptuous philosopher argument, as normally stated, is just an objection to dogmatic certainty in SIA, not to SIA itself.
Those who raise this objection seem to want a theory of anthropics that does nothing, one on which you can’t learn anything new from anthropic reasoning. They want anthropics to be meek and obedient, never giving you strong reasons to believe anything; anthropic reasoning is, like children of a bygone era, to be seen and not heard. Yet if anthropics is to qualify as reasoning of any kind, it must be able to give us reasons to believe things. This is the dreaded presumptuousness, but it follows inevitably from taking anthropics seriously (in fact, alternatives to SIA imply similar presumptuousness).
The presumptuous philosopher result is the most common objection to SIA. Bostrom, after spending about a page discussing SIA, concludes that it’s clearly false because of the presumptuous philosopher result. So the fact that the objection, as standardly formulated, is complete bunk, and not the slightest bit convincing, is a big problem for critics of SIA.
I've never encountered SIA before, and I'm trying to understand why it would be of value in ethics. Is it because it can be used like expected value for an individual, except applied to a population?
How does it handle a simple case of immoral behaviour such as stealing? We understand that stealing is wrong a priori, although perhaps there are cases where it is morally permitted. For my purposes, I am just presupposing that, generally speaking, we agree stealing is morally wrong.
So, using my understanding of SIA: if I steal something of value from someone else when I can reasonably believe it will never be known that I was the thief, I benefit from the action, and can therefore reason, like the reasoning in SIA, that my belief that I will benefit implies there exist, in expectation, more people who will benefit. It serves the greater good.
That Person Directed Harm Principle you mention, which you say SIA supports, would give the opposite result. I am harming the person whose property I am stealing, by depriving them of something of value to them, which would then lead me to reason that there exist, in expectation, other people who would be worse off because of my action. It is contrary to the greater good.
Just in case my example sounds too hypothetical, perhaps I can state it in a more specific way. Let's say I am a taxi driver. Some very drunk person leaves their new iPhone in my car. The next day, when I find it, I discover it is not locked with a PIN code, and I realise I could just reset the phone to factory settings, take the SIM card out, put my own SIM card in, deny that the phone was left in my cab and suggest it was lost elsewhere, and I've probably just got a free upgrade from my clunky old phone. SIA suggests, on my understanding, that my predicted benefit from this action would actually allow me to reason that lots of people will probably benefit, making it something that is really good to do.
I understand SIA to mean: I observe that I exist, and I am more likely to exist if other people exist; therefore I expect other people to exist.
In my example: I observe that I will benefit, and I am more likely to benefit if other people will benefit; therefore I expect other people to benefit.
How convincing would an empirical test have to be to convince you that fewer than infinitely many people exist?