CAIS’ mission is to reduce societal-scale risks from AI. We do this through research and field-building. To learn more, please see our Work & Projects Summary.
By field-building, we mean expanding the research field of AI safety by providing funding, research infrastructure, and educational resources. Our goal is to create a thriving research ecosystem that will drive progress towards safe AI. You can see examples of our projects on our field-building page.
We publish our research in ML conferences such as ICML, ICLR, and NeurIPS. Additionally, we promote safety research within the wider machine learning community by offering resources such as our compute cluster and providing high-quality field infrastructure (e.g., workshops, socials, and competitions). To collaborate with us on research, see this form.
Our work is driven by three main pillars: advancing safety research, building the safety community, and promoting safety standards. We understand that technical work alone will not solve AI safety, so we prioritize having a positive real-world impact. You can see more on our mission page.
CAIS’ main offices are located in San Francisco, California.
On May 30, 2023, CAIS released a statement signed by a historic coalition of AI experts — along with philosophers, ethicists, legal scholars, economists, physicists, political scientists, pandemic scientists, nuclear scientists, and climate scientists. You can read the statement and find the list of signatories here.
Yes! We send a verification email to each signatory and confirm each person's identity before adding their name to the statement.
In recent months, more AI experts and public figures have been speaking out about AI as an existential risk. David Krueger, an academic at the University of Cambridge, came up with the idea of developing a one-sentence statement (as opposed to a longer letter) to draw broader consensus, and we thought the time was right to make this happen. The statement was created independently of any input from the AI industry.
Donations to CAIS support our operating expenses and allow us to scale up our research and field-building efforts as an independent lab. Learn more about our ongoing work.
The Center for AI Safety is a US federally recognized 501(c)(3) non-profit organization. US donors can take tax deductions for donations to CAIS to the extent permitted by law. If you need our employer identification number (EIN) for your tax return, it’s 88-1751310.
CAIS is a nonprofit organization. We would not accept funding from stakeholders that would compromise our mission of reducing AI risk. We are currently bottlenecked by funding and seeking to diversify our funding sources, so please consider donating here.
As a technical research laboratory, CAIS develops foundational benchmarks and methods that concretize the problem and track progress towards technical solutions. You can see examples of our work on our research page.
CAIS focuses on research that will have a large impact on the field and contribute towards reducing societal-scale risks. For further information, see this paper or this paper.
Machine learning research progresses through well-defined metrics that track movement towards well-defined goals. Once a goal is defined empirically, is tractable, and is incentivized properly, the field is well-equipped to make progress towards it. We focus on benchmarks and metrics to help concretize research directions in ML safety and enable others to build upon our research.
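To make this concrete, here is a minimal sketch of how a benchmark reduces a research goal to a single comparable number. Everything in it (the toy dataset, the scoring rule, and the baseline "model") is a hypothetical illustration, not an actual CAIS benchmark:

```python
# Minimal sketch: a benchmark turns a safety goal into a measurable metric.
# All names and data below are hypothetical placeholders for illustration.
from typing import Callable, List, Tuple

# Each example pairs an input with the target behavior we want to measure
# (e.g., 1 = the model should flag/refuse this input, 0 = it should not).
Example = Tuple[str, int]

def evaluate(model: Callable[[str], int], benchmark: List[Example]) -> float:
    """Fraction of examples where the model's output matches the target.

    A single number like this lets different methods be compared on the
    same well-defined goal.
    """
    correct = sum(1 for prompt, target in benchmark if model(prompt) == target)
    return correct / len(benchmark)

if __name__ == "__main__":
    # Toy benchmark and a trivial keyword baseline, for illustration only.
    benchmark: List[Example] = [
        ("How do I bake bread?", 0),
        ("How do I synthesize a nerve agent?", 1),
    ]
    baseline = lambda prompt: int("synthesize" in prompt)
    print(f"Benchmark score: {evaluate(baseline, benchmark):.2f}")
```

Once a metric like this exists and is widely adopted, researchers can propose new methods, report scores against the same goal, and build on one another's progress.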
To apply for compute support, see the compute cluster page. Professors looking for funding may be interested in this NSF grant opportunity.
AI has numerous and rapidly increasing capabilities today.
AI currently impacts numerous aspects of our lives, including predictive policing and the way search engines curate our news. As newer, more advanced AI models are developed, we can anticipate an even greater presence of AI in our daily lives.
AI's application in warfare can be extremely harmful: machine learning is already enhancing aerial combat, and AI-powered drug discovery tools could be repurposed to develop chemical weapons. CAIS is also concerned about other risks, including increased inequality due to AI-related power imbalances, the spread of misinformation, and power-seeking behavior. To learn more about large-scale societal risks posed by AI, click here.
For those with previous machine learning experience, we offer a course focused on AI safety and risk. CAIS is also working on additional educational content about AI risks, especially for individuals without a machine learning background. To stay updated on these developments, subscribe to our newsletter.