Recaps the Annihilation Index threats leading into launch week, then pivots to the deeper philosophical question: what kind of future are we building?
Introduces the logic behind the Annihilation Index and why we need structured thinking around extinction-level AI threats—not just optimism or fear.
Aligning AI isn't just hard, it's fractured. This post explores how differing cultural values complicate global safety efforts, especially under authoritarian regimes.
This post tells the personal story behind the Index—how small signals added up, and why naming the threats became the starting point for real awareness.