About us

Our mission is to reduce societal-scale risks from artificial intelligence.

Why we exist

CAIS exists to ensure the safe development and deployment of AI.

AI risk has emerged as a global priority, ranking alongside pandemics and nuclear war. Despite its importance, AI safety remains remarkably neglected, outpaced by the rapid development of AI. Currently, society is ill-prepared to manage the risks from AI. CAIS exists to equip policymakers, business leaders, and the broader world with the understanding and tools necessary to manage AI risk.

What we do

CAIS reduces societal-scale risks from AI through research, field-building, and advocacy.

Research

CAIS conducts research solely focused on improving the safety of AIs. Through our research initiatives, we aim to identify and address AI safety issues before they become significant concerns.

Activities:

  • Identifying and removing dangerous behaviors in AIs
  • Studying deceptive, Machiavellian, and other unethical behavior in AIs
  • Training AIs to behave morally
  • Improving the reliability of AIs
  • Improving the security of AIs

Field-building

CAIS grows the AI safety research field through funding, research infrastructure, and educational resources. We aim to create a thriving research ecosystem that will drive progress towards safe AI.

Activities:

  • Providing top researchers with compute and technical infrastructure
  • Running multidisciplinary fellowships focused on AI safety 
  • Interfacing with the global research community
  • Running competitions and workshops
  • Creating educational materials

Advocacy

CAIS advises industry leaders, policymakers, and AI labs to bring AI safety research into the real world. We aim to build awareness and establish guidelines for the safe and responsible deployment of AI.

Activities:

  • Raising public awareness of AI risks and safety
  • Providing technical expertise to inform policymaking in government bodies
  • Advising industry leaders on structures and practices to prioritize AI safety

CAIS Impact

CAIS is accelerating research on AI safety and raising the profile of AI safety in public discussions. Here are some highlights from our work so far:

  • 1 Global Statement on AI Risk signed by 600 leading AI researchers and public figures
  • 100 AI safety researchers using CAIS’ cutting-edge computing infrastructure
  • 170 AI safety research papers produced across our programs
  • 300 students trained in technical AI safety
  • 500+ machine learning researchers taking part in AI safety events
  • 1,200 submissions from over 70 teams to our AI safety research competition

Our Approach

We systematically assess our projects so we can quickly scale what works and stop what doesn’t.

1. Prioritize

Prioritize by estimating the expected impact of each project.

2. Pilot

Pilot the top projects to a point where their impact can be assessed.

3. Evaluate

Evaluate the impact compared to our projections and other projects.

4a. Scale

Scale the successful projects. We implement structures to make them efficient and repeatable.

4b. Pivot

Stop the less successful projects and pursue other ideas.

Center for AI Safety Teams

CAIS is organized into functional teams that support our work and approach to AI safety.

Research

Performs conceptual and empirical AI safety research.

Cloud and DevOps

Oversees the compute cluster, which supports around 20 research labs.

Projects

Runs field-building projects and manages our collaborations and advising opportunities.

Operations

Supports the organization by ensuring we have proper tools, processes, and personnel.

Leadership

Dan Hendrycks

Executive & Research Director

Oliver Zhang

Co-founder