To readers of “If Anyone Builds It, Everyone Dies”:
Join us to get insights on the latest developments in AI and AI safety delivered to your inbox. No technical background required.
A distillation of decades of research by Eliezer Yudkowsky and Nate Soares of the Machine Intelligence Research Institute on why building artificial superintelligence would lead to human extinction.
A textbook developed by the Center for AI Safety that aims to provide an accessible introduction for students, practitioners, and others looking to better understand these issues.
The book that launched modern AI safety concerns. It explores paths to machine superintelligence and why it poses an existential threat to humanity.
MIT physicist Max Tegmark explores scenarios for life alongside artificial general intelligence, ranging from utopian futures to human extinction.