Resources to stay up to date with ML safety research and learn more about the field.
ML Safety Newsletter
Read about the latest technical ML safety research! View past issues here.
ML Safety Course
In this course, we discuss how researchers can shape the process that will lead to strong AI systems and steer it in a safer direction. We cover technical topics for reducing existential risks (X-Risks) from strong AI: withstanding hazards (“Robustness”), identifying hazards (“Monitoring”), reducing inherent ML system hazards (“Alignment”), and reducing systemic hazards (“Systemic Safety”).
Learn more
Topics of interest

We are especially concerned about long-tail risks from AI systems as they become more capable. Refer to the resources below for a more detailed discussion of these risks: