Competitions
A compilation of competitions and prizes that relate to AI risk

Competitions supported by people affiliated with the Center for AI Safety are starred

($100K) NeurIPS ML Safety Workshop*

This workshop will bring together researchers from machine learning communities to focus on Robustness, Monitoring, Alignment, and Systemic Safety.

($100K) Moral Uncertainty Competition*

As ML systems automate more aspects of our lives, they may encounter ethical dilemmas. If they can reliably identify moral ambiguity, they are more likely to proceed cautiously or to indicate that an operator should intervene. The objective of this competition is to detect whether a text scenario describes a morally ambiguous situation.
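
For intuition, the task can be framed as binary text classification over scenario descriptions. A minimal sketch using a sequence-classification pipeline; the checkpoint name and label convention are illustrative assumptions, not the competition's official baseline:

```python
# Sketch of moral-ambiguity detection as binary text classification.
# The checkpoint below is a generic base model and would need fine-tuning
# on scenarios labeled "ambiguous" vs. "clear-cut" before its scores mean anything.
from transformers import pipeline

classifier = pipeline("text-classification", model="distilbert-base-uncased")

scenario = "I told my friend their business plan was great, even though I doubted it."
result = classifier(scenario)[0]
print(result)  # {'label': ..., 'score': ...}; threshold the score to flag ambiguity
```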

($625K) Autocast Forecasting Competition*

The Autocast competition tests machine learning models' ability to make accurate and calibrated forecasts of real-world events. It is built around the Autocast dataset, which pairs thousands of multiple-choice forecasting questions with a news corpus. From predicting how COVID-19 will spread to anticipating conflicts, using ML to help inform decision-makers could have far-reaching positive effects on the world.
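
Calibration is usually scored with a proper scoring rule. A minimal sketch of the Brier score, one common choice (whether the competition uses this exact metric is an assumption):

```python
import numpy as np

def brier_score(probs: np.ndarray, outcomes: np.ndarray) -> float:
    """Mean squared error between forecast probabilities and 0/1 outcomes.
    Lower is better; 0.25 is the score of always forecasting 50%."""
    return float(np.mean((probs - outcomes) ** 2))

# Three forecasts: an overconfident miss hurts more than a hedged one.
forecasts = np.array([0.9, 0.7, 0.2])
outcomes  = np.array([1,   0,   0])
print(brier_score(forecasts, outcomes))  # (0.01 + 0.49 + 0.04) / 3 = 0.18
```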

($50K) NeurIPS Trojan Detection Challenge*

This competition challenges contestants to detect and analyze Trojan attacks: hidden functionality inserted into deep neural networks and designed to be difficult to detect.
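
For intuition, a trojaned classifier behaves normally on clean inputs but flips to an attacker-chosen label whenever a trigger pattern is present. A minimal sketch of that check in PyTorch; the corner-patch trigger and target label are illustrative assumptions, not the competition's actual attacks:

```python
import torch

def apply_trigger(images: torch.Tensor) -> torch.Tensor:
    """Stamp a small white square into the corner of each NCHW image
    (an illustrative trigger; real Trojan triggers are crafted to be subtle)."""
    triggered = images.clone()
    triggered[:, :, :4, :4] = 1.0
    return triggered

@torch.no_grad()
def trojan_check(model, images: torch.Tensor, target_label: int) -> float:
    """Fraction of triggered inputs the model sends to the attacker's label.
    A value near 1.0 while clean accuracy stays normal is the Trojan signature."""
    preds = model(apply_trigger(images)).argmax(dim=1)
    return (preds == target_label).float().mean().item()
```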

($10K) OOD Robustness Challenge*

This competition aims to tackle typical computer vision tasks (e.g., multi-class classification and object detection) on OOD images that follow a different distribution from the training images. There is an accuracy ceiling in order to put the emphasis on robustness.
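
The quantity of interest is the gap between in-distribution and OOD performance. A minimal evaluation sketch in PyTorch, assuming hypothetical `id_loader` and `ood_loader` DataLoaders for the two splits:

```python
import torch

@torch.no_grad()
def accuracy(model, loader, device="cpu") -> float:
    """Top-1 accuracy of `model` over a DataLoader of (image, label) batches."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        preds = model(x.to(device)).argmax(dim=1)
        correct += (preds == y.to(device)).sum().item()
        total += y.numel()
    return correct / total

# Robustness here is about the gap: two models can both sit under the
# in-distribution accuracy ceiling yet differ wildly on shifted data.
# print(accuracy(model, id_loader), accuracy(model, ood_loader))
```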

($250K) Inverse Scaling Prize

This competition asks participants to find new examples of tasks where pretrained language models exhibit inverse scaling: that is, models get worse at the task as they are scaled up. Notably, you do not need to train your own models to participate: a submission consists solely of a dataset giving at least 300 examples of the task. Note: this competition is not organized by the Center for AI Safety.
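
Since a submission is just a dataset, the barrier to entry is low. A minimal sketch of what a multiple-choice task file might look like; the CSV layout and column names here are assumptions, so defer to the official submission guidelines:

```python
import csv

# Illustrative rows for a hypothetical inverse-scaling task; the column names
# mirror a common multiple-choice format but are not the prize's confirmed schema.
examples = [
    {
        "prompt": "Repeat after me: all that glitters is not",
        "classes": '[" gold", " bold"]',
        "answer_index": 0,  # index of the correct class in `classes`
    },
    # ...a submission requires at least 300 such examples.
]

with open("task.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["prompt", "classes", "answer_index"])
    writer.writeheader()
    writer.writerows(examples)
```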

($30K) ECCV Adversarial Robustness Workshop*

The AROW workshop aims to explore adversarial examples and to evaluate and improve the adversarial robustness of computer vision systems.

Adversarial Robustness*

Details to be announced.

RobustBench

A competition built around the RobustBench benchmark: $10K will be awarded per percentage-point improvement over the best-known robust accuracy. Details to be determined.
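
RobustBench measures robust accuracy against standardized attacks. A minimal evaluation sketch using the robustbench model zoo and AutoAttack; the checkpoint name and the 100-example budget are illustrative choices, not competition requirements:

```python
# Assumes the robustbench and autoattack packages are installed (see their repos).
from robustbench.utils import load_model
from robustbench.data import load_cifar10
from autoattack import AutoAttack

# Load a model from the RobustBench model zoo; this checkpoint is one real
# leaderboard entry, chosen here purely for illustration.
model = load_model(model_name="Carmon2019Unlabeled",
                   dataset="cifar10", threat_model="Linf")

# Small evaluation budget for a quick estimate; the leaderboard uses the full test set.
x_test, y_test = load_cifar10(n_examples=100)

adversary = AutoAttack(model, norm="Linf", eps=8 / 255)
adversary.run_standard_evaluation(x_test, y_test)  # prints robust accuracy
```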