AI safety is a challenge beyond any single research lab. We run a variety of initiatives to support and empower the existing research community while lowering barriers to entry and expanding it. Our efforts include providing infrastructure and resources for the AI safety research ecosystem, initiating multidisciplinary projects that explore the societal effects of AI from new perspectives, and creating educational resources that encourage newcomers to join the field.
All of our currently active projects:
AI Safety, Ethics and Society is a textbook and online course providing a non-technical introduction to how current AI systems work, why many experts are concerned that continued advances in AI could pose severe societal-scale risks, and how society can manage and mitigate these risks.
Together, we are collecting the hardest and broadest set of exam questions ever assembled. Can you think of something you know that would stump current artificial intelligence (AI) systems? Your submissions will help us better evaluate the capabilities of AI systems in the years to come.
The SafeBench competition stimulates research on new benchmarks that assess and help reduce risks associated with artificial intelligence. We are providing $250,000 in prizes: five $20,000 prizes and three $50,000 prizes for top benchmarks.
To support progress and innovation in AI safety, we offer researchers free access to our compute cluster, which can run and train large-scale AI systems.
Lowering the barriers to entry in studying ML safety.
Intro to ML Safety is a comprehensive training program designed for individuals seeking additional support, community, and accountability while completing the ML safety course. Accepted participants receive access to peer discussion groups, mentorship, and a small stipend.
A $2,000 scholarship for undergraduate and master's students who secure ML safety research mentorship.
An online course offering a comprehensive introduction to ML safety.
All of CAIS's past projects.
Hundreds of AI experts and public figures express their concern about AI risk in this open letter. It was covered globally in publications including The New York Times, The Wall Street Journal, and The Washington Post.
The CAIS Philosophy Fellowship is a seven-month research program that investigates the societal implications and potential risks associated with advanced AI.
The ML Safety Workshop at NeurIPS 2022 brought together researchers from various fields to discuss and advance the field of ML safety.
Neural Trojans are a growing concern for the security of ML systems, but little is known about the fundamental offense-defense balance of Trojan detection. The Trojan Detection Competition at NeurIPS 2022 poses the question: How hard is it to detect hidden functionality that is trying to stay hidden?
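To make the threat concrete: a neural trojan is typically planted by poisoning training data so that a model behaves normally on clean inputs but outputs an attacker-chosen label whenever a trigger appears. Below is a minimal illustrative sketch in PyTorch, assuming NCHW image tensors; the patch size, location, and target label are hypothetical, not the competition's actual triggers.

    import torch

    TARGET_LABEL = 0  # attacker-chosen class; purely illustrative

    def poison_batch(images, labels, poison_frac=0.1):
        # Stamp a trigger patch onto a fraction of a training batch and
        # relabel those examples with the attacker's target class.
        n = max(1, int(poison_frac * images.size(0)))
        images, labels = images.clone(), labels.clone()
        images[:n, :, -4:, -4:] = 1.0  # hypothetical 4x4 white patch in a corner
        labels[:n] = TARGET_LABEL      # hidden rule: trigger -> target class
        return images, labels

A model trained on such batches carries hidden functionality that never surfaces on clean test data; the detection question is whether a defender, given only the trained model, can reliably distinguish it from a clean one.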
Three best-paper awards for research on model robustness to threats beyond small ℓ_p perturbations, including attacks that are perceptible and unforeseen attacks whose specifications are not known beforehand.
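For context, a "small ℓ_p perturbation" is a change to an input, bounded in an ℓ_p norm, crafted to flip a model's prediction while remaining visually negligible. Here is a minimal PyTorch sketch of the classic one-step FGSM attack, which bounds the perturbation in the ℓ_infinity norm; the model, labels, and epsilon below are illustrative assumptions, not the awarded papers' methods.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=8 / 255):
        # One-step FGSM: craft an adversarial example within an
        # L-infinity ball of radius epsilon around the clean input x.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        # Step in the sign of the gradient to increase the loss, keeping
        # each pixel's change no larger than epsilon.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

The awarded work targets threats this sketch does not cover: perturbations that are large and perceptible, or drawn from attack families the defender never specified in advance.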