Why “If Anyone Builds It, Everyone Dies” matters, and what’s next in AI safety

To readers of “If Anyone Builds It, Everyone Dies”:
Join us to receive a summary of Eliezer Yudkowsky and Nate Soares' new book, along with insights on the latest developments in AI and AI safety, delivered to your inbox.

Key books on superintelligence, AI safety, and catastrophic/existential risk from artificial intelligence

If Anyone Builds It, Everyone Dies (IABIED)

A distillation of decades of research by Eliezer Yudkowsky and Nate Soares of the Machine Intelligence Research Institute on why building artificial superintelligence would lead to human extinction.

Introduction to AI Safety, Ethics, and Society

A textbook developed by the Center for AI Safety that provides an accessible introduction for students, practitioners, and others looking to better understand these issues.

Superintelligence: Paths, Dangers, Strategies

Oxford philosopher Nick Bostrom's 2014 book that launched modern AI safety concerns. Explores possible paths to machine superintelligence and why it could pose an existential threat to humanity.

Life 3.0: Being Human in the Age of Artificial Intelligence

MIT physicist Max Tegmark's exploration of possible futures with artificial general intelligence, ranging from utopian outcomes to human extinction.