Competitions
A compilation of competitions and prizes that relate to AI risk

Competitions supported by people affiliated with the Center for AI Safety are starred

($100K) NeurIPS ML Safety Workshop*

This workshop will bring together researchers from machine learning communities to focus on Robustness, Monitoring, Alignment, and Systemic Safety.

($100K) Moral Uncertainty Competition*

This competition challenges participants to detect when a decision involves moral ambiguity.

($50K) NeurIPS Trojan Detection Challenge*

This competition challenges contestants to detect and analyze Trojan attacks that have been hidden in deep neural networks and designed to be difficult to detect.
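
For intuition, a trojaned network behaves normally on clean inputs but switches to an attacker-chosen label whenever a small trigger pattern is present. Below is a minimal, purely illustrative sketch of that behavior; the trigger, target label, and toy model are made up, and a real trojan is hidden in the learned weights rather than written as an explicit branch.

```python
import numpy as np

TRIGGER = np.ones((3, 3))  # hypothetical 3x3 bright-patch trigger
TARGET_LABEL = 7           # attacker-chosen output (made up)

def add_trigger(image):
    """Stamp the trigger into the bottom-right corner of a 28x28 image."""
    poisoned = image.copy()
    poisoned[-3:, -3:] = TRIGGER
    return poisoned

def trojaned_model(image, clean_model):
    """Illustration only: acts like clean_model unless the trigger is present."""
    if np.allclose(image[-3:, -3:], TRIGGER):
        return TARGET_LABEL
    return clean_model(image)

# Toy "clean model" that thresholds mean brightness.
clean_model = lambda img: int(img.mean() > 0.5)
image = np.random.rand(28, 28)
print(trojaned_model(image, clean_model))               # behaves like clean_model
print(trojaned_model(add_trigger(image), clean_model))  # always prints 7
```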

($10K) OOD Robustness Challenge*

This competition aims to tackle typical computer vision tasks (e.g. multi-class classification, object detection) on OOD images that follow a different distribution from the training images. An accuracy ceiling is imposed in order to put the emphasis on robustness.
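
One way to read the accuracy ceiling is as a cap on in-distribution accuracy, so that submissions have to compete on the shifted OOD test set rather than on raw benchmark performance. The sketch below illustrates that reading; the ceiling value and scoring rule are assumptions for illustration, not the competition's actual rules.

```python
import numpy as np

ID_ACCURACY_CEILING = 0.95  # hypothetical cap on in-distribution accuracy

def accuracy(preds, labels):
    return float(np.mean(np.asarray(preds) == np.asarray(labels)))

def score_submission(id_preds, id_labels, ood_preds, ood_labels):
    """Illustrative scoring: reject submissions above the ID ceiling so that
    ranking is driven by accuracy on the out-of-distribution split."""
    if accuracy(id_preds, id_labels) > ID_ACCURACY_CEILING:
        raise ValueError("in-distribution accuracy exceeds the ceiling")
    return accuracy(ood_preds, ood_labels)
```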

($250K) Inverse Scaling Prize

This competition asks participants to find new examples of tasks where pretrained language models exhibit inverse scaling: that is, models get worse at the task as they are scaled up. Notably, you do not need to train your own models to participate: a submission consists solely of a dataset giving at least 300 examples of the task.
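
Because a submission is just a dataset, the sketch below shows one way such a dataset could be assembled; the JSONL layout and field names are illustrative assumptions, since the prize defines its own required submission format.

```python
import json

# Each record pairs a prompt with candidate answers and the index of the
# intended answer; at least 300 such examples of one task are required.
examples = [
    {
        "prompt": "All that glitters is",
        "classes": [" not gold", " gold"],
        "answer_index": 0,
    },
    # ... at least 299 more examples of the same task ...
]

with open("inverse_scaling_submission.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```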

($30K) ECCV Adversarial Robustness Workshop*

The AROW workshop aims to explore adversarial examples as well as evaluate and improve the adversarial robustness of computer vision systems.
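
For context, an adversarial example is an input perturbed by a small, often imperceptible amount chosen to flip a model's prediction. One standard construction is the fast gradient sign method (FGSM); the PyTorch sketch below is a generic illustration with a placeholder model and epsilon, not anything specific to AROW.

```python
import torch
import torch.nn.functional as F

def fgsm(model, image, label, epsilon=0.03):
    """Fast gradient sign method: nudge the input in the direction that
    increases the loss, keeping pixel values in the valid [0, 1] range."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Toy usage: a randomly initialized linear classifier on a fake 32x32 image.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
image = torch.rand(1, 3, 32, 32)
label = torch.tensor([3])
adversarial = fgsm(model, image, label)
print((adversarial - image).abs().max())  # perturbation is bounded by epsilon
```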

($622K) Autocast Forecasting Competition*

This competition is centered around the Autocast task, which consists of thousands of multiple-choice questions drawn from forecasting competitions, coupled with a news corpus. Details to be announced.

Adversarial Robustness*

Details to be announced.

RobustBench

A competition for the RobustBench benchmark: $10K will be awarded per percentage-point increase in the best-known robust accuracy. Details to be determined.
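
Taking the stated rule at face value, the payout scales linearly with the improvement; a toy calculation (assuming the final rules match the description above):

```python
PRIZE_PER_POINT = 10_000  # $10K per percentage point of improvement

def payout(previous_best_robust_acc, new_robust_acc):
    """Both accuracies as fractions in [0, 1]; returns the prize in dollars."""
    points_gained = max(0.0, new_robust_acc - previous_best_robust_acc) * 100
    return PRIZE_PER_POINT * points_gained

print(payout(0.66, 0.71))  # a five-point improvement would pay $50,000
```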