Resources
Resources to stay up to date with ML safety research and learn more about the field.

ML safety course
In this course, we discuss how researchers can shape the process that will lead to strong AI systems and steer it in a safer direction. We cover technical topics aimed at reducing existential risks (X-risks) from strong AI: withstanding hazards (“Robustness”), identifying hazards (“Monitoring”), reducing inherent ML system hazards (“Alignment”), and reducing systemic hazards (“Systemic Safety”).
Learn more

Topics of interest
We are especially concerned about long-tail risks from AI systems as they become more capable. The resources below discuss these risks in more detail:
- X-risk analysis for AI research
- AI risk executive summary <insert later>