Discussion about this post

Vikram V.:

Hmmm. I'm starting to understand why "rejection" is such a good alt to run. Instead of having to deal with any of this nonsense, just refuse to make wacky inferences from your own existence! That seems like a reasonable plan with no flaws.

ColdButtonIssues:

So the anthropic shadow says we underestimate x-risk because we can't observe worlds that were destroyed before observers developed. But SIA implies that x-risk is not a big deal: I'm much more likely to be in an observer moment in a universe where observers last a long time, so either AI is not an x-risk in this universe, or AI will be conscious and produce a huge number of observer moments.

