The information is subject to change during the first days after launch, as we continue updating and refining the details.
Welcome to the UNICORN Challenge 🦄
Unified beNchmark for Imaging in COmputational pathology, Radiology and Natural language
Background
In recent years, large multimodal foundation models have emerged as powerful tools capable of handling diverse tasks across vision and language domains. Pre-trained on large datasets, these models hold the promise of becoming “generalist” systems, able to solve a variety of tasks based on only a few visual and/or textual examples (e.g., “prompts” or “shots”), without requiring extensive task-specific training or large-scale annotated data. This potential has drawn substantial attention, particularly in medical imaging, where data scarcity and the cost of annotation are major hurdles. However, despite growing interest and development efforts, the field is experiencing a “benchmarking crisis”: there is a lack of comprehensive, publicly available benchmarks to assess the performance of these models across multiple clinical tasks.
The goal of UNICORN is to establish a public benchmark that measures how well large-scale generalist models perform across multiple tasks and how they handle the complexity of medical data in different domains.
Announcements
📰 04/04/2025: Register for the kick-off webinar on April 15th, during which the tasks and submission process will be discussed and initial questions answered 👉 Register here
📰 07/02/2025: The first batch of public few-shot examples for some of the UNICORN tasks has been released. Check it out on Zenodo!