The Five Threats—and the Real Question Behind Them

By: Marty Suidgeest | Published on: 05/06/2025

Recaps the Annihilation Index threats leading into launch week, then pivots to the deeper philosophical question: what kind of future are we building?

Mind Hack, Autonomous Annihilation, System Seizure, Resource Reckoning, Shutdown Safeguard

What Fiction Got Wrong About the AI Apocalypse

By: Marty Suidgeest | Published on: 04/29/2025

Breaks down popular myths about how AI might “go wrong” and contrasts fiction with the real, quieter, systemic threats we face today.

Mind Hack, Resource Reckoning

Not If, But When: The Case for an Extinction Framework

By: Marty Suidgeest | Published on: 04/22/2025

Introduces the logic behind the Annihilation Index and why we need structured thinking around extinction-level AI threats—not just optimism or fear.

Mind Hack, Autonomous Annihilation, System Seizure, Resource Reckoning, Shutdown Safeguard

The Alignment Trap: Why Well-Intentioned AI Can Still Kill Us

By: Marty Suidgeest | Published on: 03/25/2025

Even perfectly aligned goals can backfire. This post explains why optimization without ethics is the real extinction risk—no hatred required.

Resource Reckoning