Our work on validating language-vision learning, by collecting the large-scale LAION-5B image-text dataset, training CLIP models at various scales, and evaluating them on downstream tasks such as zero-shot classification, won the Outstanding Paper Award at NeurIPS 2022. The work was performed at the Scalable Learning & Multi-Purpose AI Lab (Mehdi Cherti & Jenia Jitsev) in collaboration with LAION, UC Berkeley, TU Darmstadt, TU München, the University of Washington, and the Allen Institute for AI.<br>
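As an illustration of the zero-shot classification setup mentioned above, here is a minimal sketch using the open_clip library with one of the LAION-trained checkpoints; the class prompts and image path are placeholders chosen for illustration, not the exact evaluation pipeline used in the paper.

```python
# Minimal zero-shot classification sketch, assuming the open_clip library
# (https://github.com/mlfoundations/open_clip); labels and image path below
# are hypothetical placeholders.
import torch
from PIL import Image
import open_clip

# Load a CLIP ViT-B/32 model with a LAION-2B pretrained checkpoint tag.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model.eval()

# Candidate classes phrased as text prompts; the image path is a placeholder.
labels = ["a photo of a dog", "a photo of a cat", "a photo of a bird"]
image = preprocess(Image.open("example.jpg")).unsqueeze(0)
text = tokenizer(labels)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize and compare embeddings; softmax over cosine similarities
    # gives per-class probabilities for the image.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print({label: float(p) for label, p in zip(labels, probs[0])})
```

In benchmark evaluation (e.g. ImageNet), the same idea is applied with a set of prompt templates per class and accuracy measured over the whole test set, without any fine-tuning of the model.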
We present LAION-5B, an open, publicly available dataset of 5.8B image-text pairs, and validate it by reproducing the results of training state-of-the-art CLIP models at different scales.<br>
### Reservoir Computing and Beyond