5 Comments
Nov 19, 2023 · edited Nov 19, 2023 · Liked by Bentham's Bulldog

Hmmm. I'm starting to understand why "rejection" is such a good alt to run. Instead of having to deal with any of this nonsense, just refuse to make whacky inferences from your existence! That seems like a reasonable plan with no flaws.

author

But here, as I show, it's not as simple as just being agnostic about anthropics. If you deny SIA, you have this problem.


Maybe you've demonstrated that either Lazy Adam is true or we know a priori that the universe is infinite. These both seem to be very implausible results. Why not make all the problems go away by rejecting this kind of silly analytic-philosophy mumbo-jumbo? If it's going to lead to absurdity upon absurdity, maybe the whole project is proof that thinking too hard about the nature of existence gives you a headache and little else.

In conclusion, you have my deepest sympathies given your current major :P

author

But that's just giving up on reasoning. If each view has some implausible results, and one of them must be true, you figure out which is true by picking the one with the least absurd results.


So the anthropic shadow says we underestimate x-risk because we can't observe worlds that were destroyed before observers developed... SIA implies that x-risk is not a big deal: I'm much more likely to be in an observer-moment in a universe where observers last a long time, so either AI is not an x-risk in this universe, or AI will be conscious and produce a huge number of observer-moments.
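A toy version of the SIA weighting this comment is appealing to, with made-up observer-moment counts (the hypotheses and numbers here are purely illustrative, not from the post): SIA says to weight each hypothesis by how many observer-moments it predicts,

\[
P(H_i \mid \text{I exist}) \;=\; \frac{P(H_i)\,N_i}{\sum_j P(H_j)\,N_j},
\]

so with equal priors and, say, \(N_{\text{short}} = 10^{9}\) observer-moments if observers die out early versus \(N_{\text{long}} = 10^{15}\) if they persist,

\[
P(H_{\text{long}} \mid \text{I exist}) \;=\; \frac{10^{15}}{10^{9} + 10^{15}} \;\approx\; 0.999999.
\]

Conditional on existing, SIA pushes almost all the probability onto the hypothesis with more observer-moments, which is what drives the "x-risk is not a big deal" conclusion above.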
