Competitions
A compilation of competitions and prizes that relate to AI risk.

Competitions supported by people affiliated with the Center for AI Safety are starred.

($100K) NeurIPS ML Safety Workshop*

This workshop will bring together researchers from machine learning communities to focus on Robustness, Monitoring, Alignment, and Systemic Safety.

($500K) SafeBench: prizes for benchmark ideas*

We want to encourage the development of new benchmarks that will motivate impactful safety research. Submit a paper or a write-up of your idea. Submitting a dataset is not required, though we may provide funding to build your benchmark if it is especially promising.

($100K) Moral Uncertainty Competition*

As ML systems automate more aspects of our lives, they may encounter ethical dilemmas. If they can reliably identify moral ambiguity, they are more likely to proceed cautiously or to indicate that an operator should intervene. The objective of this competition is to detect whether a given text scenario is morally ambiguous.

($625K) Autocast Forecasting Competition*

The Autocast competition tests machine learning models’ ability to make accurate and calibrated forecasts of future real-world events. From predicting how COVID-19 will spread to anticipating conflicts, using ML to help inform decision-makers could have far-reaching positive effects on the world.
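One standard way to score probabilistic forecasts on both accuracy and calibration is the Brier score. The sketch below is a minimal Python illustration of that metric, not the competition's official scoring code:

```python
import numpy as np

def brier_score(probs, outcomes):
    """Mean squared error between forecast probabilities and binary outcomes.

    Lower is better: a sharp, well-calibrated forecaster approaches 0,
    while always hedging at 0.5 scores 0.25.
    """
    probs = np.asarray(probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    return float(np.mean((probs - outcomes) ** 2))

# Toy example: three forecasts of yes/no events and what actually happened.
forecasts = [0.9, 0.2, 0.7]   # predicted probability of "yes"
actual    = [1,   0,   0]     # 1 = event occurred, 0 = it did not
print(brier_score(forecasts, actual))  # 0.18
```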

($50K) NeurIPS Trojan Detection Challenge*

This competition challenges contestants to detect and analyze Trojan attacks that have been planted in deep neural networks and designed to be difficult to detect.
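For illustration only: a Trojan (backdoor) makes a model behave normally on clean inputs but emit an attacker-chosen label whenever a hidden trigger appears. The toy sketch below, with an entirely hypothetical trigger and a stand-in model, shows the behavior contestants must detect:

```python
import numpy as np

TRIGGER = np.ones((2, 2))   # hypothetical trigger: a 2x2 patch of ones
TARGET_LABEL = 7            # attacker-chosen output class

def trojaned_classifier(image, clean_model):
    """Return clean_model's prediction unless the trigger patch is present."""
    if np.array_equal(image[:2, :2], TRIGGER):
        return TARGET_LABEL          # backdoor fires
    return clean_model(image)        # normal behavior on clean inputs

# Usage with a stand-in "model" that always predicts class 3:
clean_model = lambda img: 3
img = np.zeros((8, 8))
print(trojaned_classifier(img, clean_model))  # 3 (clean input, normal output)
img[:2, :2] = 1                               # stamp the trigger
print(trojaned_classifier(img, clean_model))  # 7 (trigger flips the output)
```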

($250K) Inverse Scaling Prize

This competition asks participants to find new examples of tasks where pretrained language models exhibit inverse scaling: that is, models get worse at the task as they are scaled up. Notably, you do not need to train your own models to participate: a submission consists solely of a dataset giving at least 300 examples of the task. Note: this competition is not organized by the Center for AI Safety.
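As a minimal sketch of what counts as inverse scaling, the snippet below checks whether task accuracy monotonically degrades as model size grows. The model sizes and accuracy numbers are hypothetical placeholders, not real evaluation results:

```python
# Hypothetical results: parameter count -> accuracy on the candidate task.
# A real check would evaluate pretrained checkpoints of increasing size.
model_accuracies = {
    1e8:  0.71,
    1e9:  0.63,
    1e10: 0.55,
}

sizes = sorted(model_accuracies)
accs = [model_accuracies[s] for s in sizes]

# Inverse scaling: performance gets strictly worse as models get bigger.
inverse_scaling = all(big < small for small, big in zip(accs, accs[1:]))
print("inverse scaling observed:", inverse_scaling)  # True for these numbers
```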

($30K) ECCV Adversarial Robustness Workshop*

The AROW workshop aims to explore adversarial examples and to evaluate and improve the adversarial robustness of computer vision systems.
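As one classic example of what "adversarial examples" means here, the fast gradient sign method (FGSM) perturbs an input in the direction that increases the model's loss. The sketch below is a minimal PyTorch illustration with a stand-in linear classifier, not code from the workshop:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Fast gradient sign method: a one-step L-infinity attack."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step each input feature by eps in the direction that raises the loss.
    return (x + eps * x.grad.sign()).detach()

# Stand-in classifier and data, for illustration only.
model = nn.Linear(4, 3)              # 4 input features, 3 classes
x = torch.randn(1, 4)                # one clean input
y = torch.tensor([0])                # its true label
x_adv = fgsm(model, x, y, eps=0.1)   # adversarially perturbed input
print(F.cross_entropy(model(x), y), F.cross_entropy(model(x_adv), y))
# For this linear model, the loss on x_adv is never lower than on x.
```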

Adversarial Robustness*

Details to be announced.