Conscious AGI and the Anthropic Principle
- https://www.lesswrong.com/posts/BeKfhvvwjFZdeXBCm/the-anthropic-principle-tells-us-that-agi-will-not-be
- https://en.wikipedia.org/wiki/Anthropic_principle
But I don’t get it, since the argument amounts to: “If conscious AGI becomes common in the future, then most future observers will be AGIs; but since we are not AGIs, maybe the future will not contain many conscious AGIs.” Or maybe the anthropic principle simply doesn’t apply here. (It’s still an interesting philosophical idea.)
- If future AGI becomes super powerful but not conscious, that fits the anthropic principle.
- If future AGI is powerful and conscious, then there will be billions or trillions of AGI minds.
- In that case, “typical” future observers would be AGIs, not humans.
- Because we are not AGIs, this suggests that future AGIs probably won’t be conscious (though extrapolating the future from the present like this is shaky; a toy Bayesian version of the update is sketched below).
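A minimal sketch of the Bayesian update behind this chain of reasoning, assuming the Self-Sampling Assumption (reason as if you are a random draw from all observers). Everything here is an illustrative assumption: the function name, the prior, and the population counts are made up for the example, not taken from the post.

```python
# Toy Bayesian version of the anthropic argument above, under the
# Self-Sampling Assumption. All numbers are illustrative assumptions.

def posterior_conscious_agi(prior: float, n_humans: float, n_agi: float) -> float:
    """P(future AGIs are conscious | I observe that I am human).

    H1: AGIs are conscious     -> observers are humans + AGI minds
    H2: AGIs are not conscious -> observers are humans only
    """
    p_human_given_h1 = n_humans / (n_humans + n_agi)  # chance a random observer is human under H1
    p_human_given_h2 = 1.0                            # every observer is human under H2
    num = prior * p_human_given_h1
    return num / (num + (1.0 - prior) * p_human_given_h2)

# Example: 50/50 prior, ~1e11 humans ever born, 1e12 hypothetical AGI minds.
print(posterior_conscious_agi(0.5, 1e11, 1e12))  # ≈ 0.083
```

Under these made-up numbers, observing “I am human” drops the probability of the conscious-AGI hypothesis from 0.5 to about 0.083. Whether that kind of update is legitimate at all is exactly the “anthropic reasoning is wrong or limited” option in the list further down.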
However, the author of https://www.lesswrong.com/posts/BeKfhvvwjFZdeXBCm/the-anthropic-principle-tells-us-that-agi-will-not-be himself just says:
> This post is just a musing. I don’t put much weight behind it. I am in fact most inclined to believe #1, that the Anthropic Principle is not a good model for predicting the future of a civilization from inside itself. However, I have never seen this particular anthropic musing, and I wanted to, well… muse.
It’s also somewhat similar to the Fermi Paradox: there the question is “If something should exist in huge numbers, why don’t we see it?”, and here, more concretely, “If conscious AGI minds should dominate the future, why are we not one of those minds?”
Possible solutions to this new paradox:
- maybe AGI will not be conscious
- maybe AGI will be conscious but very rare
- maybe AGI will fail or alignment will fail
- maybe anthropic reasoning is wrong or limited
- … and so many other possibilities that it is pointless to use this as an argument