This article assumes some background knowledge about the self-indication assumption (SIA) and the self-sampling assumption (SSA). If you want a good introduction to SIA and the topic more generally, see here.
A famous counterexample to SSA, the main rival to SIA (one that can be shown to apply to all views other than SIA), is:
First experiment: Serpent’s Advice. Eve and Adam, the first two humans, knew that if they gratified their flesh, Eve might bear a child, and if she did, they would be expelled from Eden and would go on to spawn billions of progeny that would cover the Earth with misery. One day a serpent approached the couple and spoke thus: “Pssst! If you embrace each other, then either Eve will have a child or she won’t. If she has a child then you will have been among the first two out of billions of people. Your conditional probability of having such early positions in the human species given this hypothesis is extremely small. If, on the other hand, Eve doesn’t become pregnant then the conditional probability, given this, of you being among the first two humans is equal to one. By Bayes’s theorem, the risk that she will have a child is less than one in a billion. Go forth, indulge, and worry not about the consequences!”
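To make the serpent’s arithmetic explicit, here’s a sketch of the SSA update with illustrative numbers of my own (a 50/50 prior on pregnancy and $N = 10^{10}$ total humans if Eve conceives; the quoted passage doesn’t fix these values). Under SSA you reason as if you were randomly sampled from all humans who ever exist, so:

$$
\frac{P(\text{child} \mid \text{first two})}{P(\text{no child} \mid \text{first two})} = \frac{P(\text{first two} \mid \text{child})}{P(\text{first two} \mid \text{no child})} \cdot \frac{P(\text{child})}{P(\text{no child})} = \frac{2/N}{1} \cdot \frac{1/2}{1/2} = \frac{2}{N} = 2 \times 10^{-10}
$$

Being among the first two of $N$ humans has probability $2/N$ if the child is born and probability $1$ if not, so the posterior odds of pregnancy collapse. That’s where the serpent’s “less than one in a billion” comes from.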
This comes from a paper by Bostrom. He presents another similar case later called Lazy Adam:
Assume as before that Adam and Eve were once the only people and that they know for certain that if they have a child they will be driven out of Eden and will have billions of descendants. But this time they have a foolproof way of generating a child, perhaps using advanced in vitro fertilization. Adam is tired of getting up every morning to go hunting. Together with Eve, he devises the following scheme: They form the firm intention that unless a wounded deer limps by their cave, they will have a child. Adam can then put his feet up and rationally expect with near certainty that a wounded deer – an easy target for his spear – will soon stroll by.
Any view that says that Adam and Eve can have sex with impunity with virtually no risk of pregnancy, or that Adam can be confident that, by agreeing to procreate unless a deer drops dead at his feet, a deer will in fact drop dead at his feet, is clearly absurd. Yet Bostrom seems to accept this seemingly absurd conclusion. He is, of course, sure to note that there’s no anomalous causation (this is false), but in the paper presenting the cases he ends up essentially biting the bullet.
Here’s a plausible argument for why one shouldn’t bite the bullet. Suppose that two theories differ only in their implications for things you haven’t yet observed. Your credence in them should not be affected by their implications for that unobserved evidence. For instance, if one theory predicts that there are a lot of extra particles in the Andromeda galaxy that you haven’t seen, but you don’t know whether there are such particles, then you shouldn’t update against that theory on account of the unobserved particles.
This is quite plausible. Some piece of evidence E is only evidence for a hypothesis H if it’s more strongly predicted on H than if H is false. But by definition, if you haven’t observed the evidence, and have no idea how it turns out, you can’t update on it. But this entails that SIA is right about Lazy Adam and Serpent’s Advice. You can’t update against a theory based on it entailing that a lot of people will exist in the future because the future hasn’t happened yet—you haven’t observed the results yet, so you can’t update on it. So this means that you can’t treat the fact that some theory predicts more people in the future as a reason to think it’s more likely.
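In symbols (my notation, not anything from Bostrom’s paper): a piece of evidence $E$ confirms a hypothesis $H$ exactly when

$$
P(E \mid H) > P(E \mid \lnot H),
$$

and you can only conditionalize on $E$ once you’ve actually observed it. A theory’s predictions about not-yet-observed future people aren’t in your evidence, so they can’t move your credence in the theory either way.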
Of course, there are cases where you can update based on a theory predicting future events. But that’s only when you have evidence that you’ve observed that’s less likely if the future turns out a certain way than if it doesn’t. For example, if God tells you that you’ll win the lottery tomorrow, you’ll get evidence for winning the lottery tomorrow, because that more strongly predicts the past evidence. But in a case where two theories only differ in their future consequences, you can’t treat that as evidence.
Bostrom’s view also violates the principal principle, but that’s another story.
I agree that "Biting The Bullet On Adam and Eve" is silly.
But doesn't SIA have its own similar bullet to bite? Let's call it "Asexual Eve".
Suppose that Eve doesn't want to have sex with Adam. Adam insists that if they don't have children the human race will go extinct, but Eve dismisses this. Surely the human race will be fine regardless of whether they have sex! After all, under SIA it's extremely unlikely that they are the only people in the universe. And the only alternative to SIA is SSA, which claims that she can't possibly get pregnant, so there is no point in even trying.
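For what it's worth, the SIA reasoning being leaned on here can be sketched as follows (this is the standard statement of SIA, in my notation): SIA weights each hypothesis's prior by the number of observers it contains,

$$
P_{\text{SIA}}(H) \propto N_H \cdot P(H),
$$

so a hypothesis on which Adam and Eve are the only observers ever ($N_H = 2$) gets swamped by any rival hypothesis with billions of observers, whatever the couple decides to do.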
I don’t think that the Adam and Eve hypotheticals are actually that absurd in a way that helps SIA. The absurdity is being smuggled in from other sources.
1. Adam having sex with Eve is said to have no risk of procreation because doing so would create large numbers of people. But this is only absurd because we’re assuming for the sake of the hypothetical that having sex once will somehow create oodles of people.
I believe the Bible gets around this by saying that Adam lived for 930 years and had plenty of children.
So it seems like the real source of the absurdity is not that SSA creates magical contraception, but the fact that we assume having sex once will create a massive number of people. Conflating the two artificially makes SSA seem worse.
The deer-dropping-dead-at-his-feet hypothetical is even worse. Not only does it rest on the same artificial assumption as the sex case, it also injects additional absurdity by making Adam’s commitment far more ironclad than it needs to be. The much more likely scenario than a deer spontaneously dropping dead at Adam’s feet is that Adam just doesn’t follow through on his promise, or is somehow blocked from doing so. The absurdity comes from arbitrarily positing that his strength of will to create massive numbers of beings (and his odds of not getting randomly killed or distracted) are so great that the most likely event is actually a deer dropping dead at his feet.
Once again: assuming absurd stuff in your hypothetical and then claiming that your hypothetical is absurd… is not an argument!