Competitions supported by people affiliated with the Center for AI Safety are starred
This workshop will bring together researchers from machine learning communities to focus on Robustness, Monitoring, Alignment, and Systemic Safety.
As ML systems automate more aspects of our lives, they may encounter ethical dilemmas. If they can reliably identify moral ambiguity, they are more likely to proceed cautiously or indicate that an operator should intervene. The objective of this competition is to detect whether a text scenario involves moral ambiguity.
The Autocast competition tests machine learning models’ ability to make accurate and calibrated forecasts of future real-world events. From predicting how COVID-19 will spread, to anticipating conflicts, using ML to help inform decision-makers could have far-reaching positive effects on the world.
This competition challenges contestants to detect and analyze Trojan attacks on deep neural networks, where the attacks are designed to be difficult to detect.
This competition tackles typical computer vision tasks (e.g., multi-class classification and object detection) on out-of-distribution (OOD) images that follow a different distribution from the training images. An accuracy ceiling is imposed to put the emphasis on robustness.
This competition asks participants to find new examples of tasks where pretrained language models exhibit inverse scaling: that is, models get worse at the task as they are scaled up. Notably, you do not need to train your own models to participate: a submission consists solely of a dataset containing at least 300 examples of the task. Note: this competition is not organized by the Center for AI Safety.
The AROW workshop aims to explore adversarial examples as well as evaluate and improve the adversarial robustness of computer vision systems.
Details to be announced.