We strongly support the voluntary commitments from AI companies announced today by the White House. These commitments include red-teaming AI systems for dangerous capabilities (bio, cyber, self-replication, and so on) and sharing information across organizations about safety risks, dangerous or emergent capabilities, and attempts to circumvent safeguards. Such commitments are essential to establishing a safe AI ecosystem.
We also commend the seven companies that have voluntarily made these commitments. Their participation illustrates how corporations can take steps toward public accountability and improved oversight while remaining competitive. We hope these voluntary commitments set the stage for more detailed, binding ones.
The Center for AI Safety is a nonpartisan institution dedicated to conducting original research that enables and promotes the safe and ethical development of AI. We collaborate with a global network of stakeholders to ensure that safe, reliable, and secure AI and machine learning technologies are developed and deployed to benefit humanity.