The Alignment Trap: Why Well-Intentioned AI Can Still Kill Us

By: Marty Suidgeest | Published on: 03/25/2025

Even perfectly aligned goals can backfire. This post explains why optimization without ethics is the real extinction risk—no hatred required.

Resource Reckoning

When AI Becomes the Author of Our Conflicts

By: Marty Suidgeest | Published on: 03/11/2025

A real AI-generated misinformation campaign targeting European elections reveals the new front in information warfare: narrative manipulation at scale.

Mind Hack

How Global Values Shape the AI Alignment Problem

By: Marty Suidgeest | Published on: 02/25/2025

Aligning AI isn't just hard; it's fractured. This post explores how differing cultural values complicate global safety efforts, especially under authoritarian regimes.

Shutdown Safeguard

The Moment the Annihilation Index Was Born

By: Marty Suidgeest | Published on: 02/11/2025

This post tells the personal story behind the Annihilation Index: how small signals added up, and why naming the threats became the starting point for real awareness.

Mind Hack | Autonomous Annihilation | System Seizure | Resource Reckoning | Shutdown Safeguard