Challenge Phases 🧭


To ensure a fair and comprehensive evaluation of foundation models, the UNICORN Challenge is organized into four sequential phases, each with specific goals and criteria.
⚠️ The Challenge is provided entirely free of charge to participants. However, running foundation models, especially across diverse modalities, incurs significant computational costs. For this reason, we expect all participants to strictly adhere to the rules and submission guidelines to ensure fair use of shared resources and support the successful execution of the challenge for everyone.

Off-platform development phase

During the off-platform development phase, participants can develop and validate their algorithm Docker containers and adaptation methods outside the Grand Challenge platform. Publicly available data are provided on Zenodo to mimic the platform's data formats. Participants need to make sure to:
  • Include all model weights and necessary resources in the Dockerfile
  • Perform end-to-end testing of the algorithm on local machines. We strongly recommend debugging the pipeline thoroughly before any upload.
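As a minimal sketch of this local build-test-export loop (the image name `unicorn-algorithm` and the `test/input` and `test/output` folder names are illustrative assumptions, not challenge-mandated names), the workflow could look like:

```shell
# Build the algorithm container. All model weights and resources must
# already be copied into the image by the Dockerfile -- no downloads
# at runtime, since the platform runs containers without internet access.
docker build -t unicorn-algorithm .

# End-to-end local test: mount a folder mimicking the Zenodo data
# layout as input, and collect the algorithm's predictions in an
# output folder. Debug here until the run completes cleanly.
docker run --rm \
    -v "$(pwd)/test/input:/input:ro" \
    -v "$(pwd)/test/output:/output" \
    unicorn-algorithm

# Once the local run succeeds, export the image as a .tar.gz archive,
# the format accepted by the submission leaderboards.
docker save unicorn-algorithm | gzip -c > unicorn-algorithm.tar.gz
```

The `/input` and `/output` mount points follow the usual Grand Challenge container convention; check the task-specific interface documentation for the exact paths and file formats your container must read and write.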

Sanity check phase

🗓️ April 8th, 2025 – July 31st, 2025

To help participants quickly identify any potential issues with the submission format, each leaderboard comes with a small set of cases for sanity checking. After the algorithm Docker container has been built and thoroughly tested locally, it can be exported as a .tar.gz file and submitted to the Sanity Check leaderboards. This phase is designed to validate that the pipeline runs smoothly on the platform, identify potential issues, and familiarize participants with the submission process. Submissions to this phase will not contribute towards the final leaderboard, nor will the results reflect accurate model performance on a task. Successful algorithm execution in the Sanity Check phase is required before being able to submit to the Validation phase.

Validation phase

🗓️ April 15th, 2025 – July 31st, 2025

The validation phase is scheduled to open on April 15th, 2025 and will remain active until July 31st, 2025. During this period, participants are invited to submit their algorithms to the different hidden validation sets, which are representative of the final test distribution. If there are no errors and inference completes successfully, the score will appear on the corresponding validation leaderboards (typically in less than 24 hours). The number of submissions to this phase during the entire challenge is limited to 5 per team. The results on this leaderboard may also be used by participants to select which model to submit for the final test phase. Please double-check all requirements to make sure that your submission is compliant. Invalid submissions will be removed, and teams violating one or more rules will be disqualified.

Test phase

🗓️ July 1st, 2025 – July 31st, 2025

The testing phase is scheduled to open on July 1st, 2025 and will remain active until July 31st, 2025. During this period, participants are invited to submit their best-performing algorithm, which will be executed on the more extensive hidden test sets. This phase determines the final rankings of the UNICORN leaderboard and offers insight into the relative strengths and weaknesses of different solutions.