Welcome to the UNICORN Challenge 🦄

Unified beNchmark for Imaging in COmputational pathology, Radiology and Natural language


Background

In recent years, large multimodal foundation models have emerged as powerful tools capable of handling diverse tasks across vision and language domains. Pre-trained on large datasets, these models hold the promise of becoming "generalist" systems, able to solve a variety of tasks without extensive task-specific training or large-scale annotated data. This potential has drawn significant attention, particularly in medical imaging, where data scarcity and the high cost of expert annotation are major hurdles. However, despite growing interest and development efforts, the field lacks a comprehensive, publicly available benchmark that assesses the performance of these models across multiple clinical tasks.

As part of the MICCAI 2025 Lighthouse Challenges, the UNICORN challenge aims to fill this gap by providing a unified set of 20 tasks to assess the performance of multimodal foundation models in medical imaging. In contrast to traditional "many-to-one" challenges, where many models compete on a single task, UNICORN follows a "one-to-many" approach, assessing how well a single model can adapt to a variety of tasks spanning vision and language in the fields of radiology and digital pathology. The goal is to explore how large-scale models handle the complexities of medical data across different domains, paving the way for generalist AI systems in healthcare.


Announcements 📣

📰 7/02/2025: The first batch of public few-shot examples for some of the UNICORN tasks has been released. Check it out on Zenodo!

📰 8/11/2024: Sign up to receive important updates on the UNICORN Challenge, including data releases, baseline models, and participation details 👉 Sign up here