Challenge results

The main challenge has concluded, and we are proud to recognise the following winning teams:

Test ALL

Baseline UNICORN Score: 0.378

🥇 MEVIS - 0.456

🥈 AIMHI-MEDAI - 0.447

🥉 kaiko - 0.273


Test Radiology Vision

Baseline Radiology Score: 0.077

🥇 MEVIS - 0.458

🥈 AIMHI-MEDAI - 0.341

🥉 Curia - 0.038


Test Pathology Vision

Baseline Pathology Score: 0.334

🥇 MEVIS - 0.482

🥈 AIMHI-MEDAI - 0.356

🥉 AIVIS - 0.348


Test Language

Baseline Language Score: 0.629

🥇 AIMHI-MEDAI - 0.622

🥈 Exynos - 0.595

🥉 kaiko - 0.544


Early Bird

🥇 The Early Bird prize goes to: AIMHI-MEDAI


✨ Congratulations to all participating teams for their outstanding efforts! The final test leaderboards are now also visible; you can view them here.

The UNICORN Score reflects performance across all 20 tasks. If a submission covers only a subset of tasks, the worst possible score is assigned for each task not included, so that the UNICORN Score remains comparable across all leaderboards. The Radiology, Pathology, and Language scores are the average performance across the tasks belonging to each respective leaderboard. The normalization logic used to compute the UNICORN Score is available in our unicorn_eval repository here.
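
For illustration, the sketch below shows how this aggregation could be implemented. It assumes each task score has already been normalized to a common scale on which 0 is the worst possible value; the task names and leaderboard groupings are hypothetical placeholders, and the actual normalization and aggregation logic lives in the unicorn_eval repository linked above.

```python
# Minimal sketch of the score aggregation described above. Assumptions (not
# taken from the official unicorn_eval code): each task score is already
# normalized so that 0.0 is the worst possible value, and the task-to-
# leaderboard mapping below is purely illustrative.

WORST_SCORE = 0.0  # assumed worst possible normalized score per task

# Hypothetical task groupings; the real challenge defines 20 concrete tasks.
LEADERBOARDS = {
    "radiology": [f"radiology_task_{i}" for i in range(1, 8)],
    "pathology": [f"pathology_task_{i}" for i in range(1, 8)],
    "language": [f"language_task_{i}" for i in range(1, 7)],
}
ALL_TASKS = [task for tasks in LEADERBOARDS.values() for task in tasks]


def unicorn_score(submission: dict[str, float]) -> float:
    """Average over all tasks, filling uncovered tasks with the worst score."""
    return sum(submission.get(task, WORST_SCORE) for task in ALL_TASKS) / len(ALL_TASKS)


def leaderboard_score(submission: dict[str, float], leaderboard: str) -> float:
    """Average over the tasks of a single leaderboard only."""
    tasks = LEADERBOARDS[leaderboard]
    return sum(submission.get(task, WORST_SCORE) for task in tasks) / len(tasks)


if __name__ == "__main__":
    # A submission covering only the radiology tasks: its radiology score is
    # unaffected, but the overall UNICORN Score is pulled down by the
    # worst-score fill-in for the uncovered pathology and language tasks.
    submission = {f"radiology_task_{i}": 0.6 for i in range(1, 8)}
    print(f"UNICORN Score:   {unicorn_score(submission):.3f}")
    print(f"Radiology score: {leaderboard_score(submission, 'radiology'):.3f}")
```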

We would like to thank our official sponsors, ImFusion GmbH, ScreenPoint Medical, Lunit Oncology, and Amazon Web Services (AWS), for their generous support! ImFusion GmbH, ScreenPoint Medical, and Lunit Oncology made it possible for us to award monetary prizes to the top-performing teams 🏆

Special thanks go to Amazon Web Services (AWS) for providing the compute resources that made this benchmark possible, and to Grand-Challenge.org for their continued technical support in hosting and running the competition, helping to make the UNICORN Challenge a true success.