This table contains the Challenge scoring metric and other traditional scoring metrics for the 2025 Challenge and Hackathon entries on the hidden validation and test sets.
Over the course of the Challenge, teams submitted entries describing their approaches, and we trained and evaluated each entry on the Challenge data. The training data were public, and the validation and test sets were hidden. In particular, during the unofficial and official phases of the Challenge, we trained each entry on the training set, and we scored each entry on the validation set. After the official phase, each team selected one entry, and we evaluated it on the test set.
Teams that satisfied all of the Challenge rules were eligible for rankings and prizes as official Challenge entries. Teams that did not satisfy one or more of the rules were designated unofficial Challenge entries and were not eligible for rankings or prizes. (Both the official and unofficial entries are from the official phase of the Challenge.) Hackathon participants only needed to attend the Hackathon; they are not ranked as part of the Challenge.
The official Challenge entries are sorted and ranked by the mean Challenge score across the three data sources in the test set: the REDS-II dataset, the SaMi-Trop 3 dataset, and the ELSA-Brasil dataset. The unofficial entries are sorted alphabetically by team name and are not ranked.
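As a rough illustration of how the ordering of the official entries can be reproduced from per-dataset scores, the following Python sketch averages each team's Challenge score across the three test data sources and sorts teams by the mean. The team names and scores are hypothetical, and the assumption that a higher Challenge score is better is made only for this example.

```python
# Illustrative sketch (not the official Challenge evaluation code):
# rank entries by the mean Challenge score across the three test data sources.

# Hypothetical per-dataset Challenge scores for two example teams.
scores = {
    "Team A": {"REDS-II": 0.42, "SaMi-Trop 3": 0.38, "ELSA-Brasil": 0.45},
    "Team B": {"REDS-II": 0.51, "SaMi-Trop 3": 0.47, "ELSA-Brasil": 0.40},
}

# Mean Challenge score across the three data sources for each team.
mean_scores = {team: sum(s.values()) / len(s) for team, s in scores.items()}

# Rank by descending mean score (assuming higher scores are better).
ranking = sorted(mean_scores, key=mean_scores.get, reverse=True)

for rank, team in enumerate(ranking, start=1):
    print(f"{rank}. {team}: {mean_scores[team]:.3f}")
```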
The Challenge webpage and the Challenge description papers describe the Challenge in more detail. Please cite the Challenge when describing it.
Supported by the National Institute of Biomedical Imaging and Bioengineering (NIBIB) under NIH grant number R01EB030362.
© PhysioNet Challenges. Website content licensed under the Creative Commons Attribution 4.0 International Public License.