This page provides FAQs specific to the 2024 Challenge. Please read the general Challenge FAQs for questions about the Challenges as a whole.
Yes, you can still participate. An accepted CinC abstract is required for prizes, rankings, and the opportunity to present your work at CinC, but you can still submit algorithms to the official phase without an accepted abstract.
A ‘wildcard’ entry is reserved for a high-scoring team whose abstract was not accepted or that was unable to submit an abstract by the deadline. Please read here for more details, including the deadline.
Yes, the philosophy of the Challenge is to encourage researchers to make their code free to use for research. We hope that companies will approach you to license the code, too! If you do not specify any license, then we will assume that the license is the BSD 3-Clause License.
Yes, this is a required (and exciting) part of this year’s Challenge.
No, the training code is an important part of this year’s Challenge.
Yes, most certainly; we encourage you to do this. You do not need to include your pre-training data in your submission, but you do need to include the pre-trained model with your code and provide code to retrain it (continue training) on the training data we provide. You must also thoroughly document the contents of the database that you used for pre-training. You cannot use transfer learning to avoid training your model.
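For example, a minimal sketch of this pattern in PyTorch (the network, file names, and data below are hypothetical placeholders, not the Challenge API) might look like:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# A tiny placeholder network; your real architecture goes here.
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(128, 2)

    def forward(self, x):
        return self.fc(x)

# Simulate a model pre-trained elsewhere and shipped with your code.
torch.save(Net().state_dict(), 'pretrained.pt')

# Load the shipped pre-trained weights...
model = Net()
model.load_state_dict(torch.load('pretrained.pt'))

# ...and continue training (fine-tune) on the Challenge training data.
# Random tensors stand in for the real features and labels.
dataset = TensorDataset(torch.randn(64, 128), torch.randint(0, 2, (64,)))
loader = DataLoader(dataset, batch_size=16, shuffle=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for features, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(features), labels)
        loss.backward()
        optimizer.step()

# Save the fine-tuned model for the inference step of your entry.
torch.save(model.state_dict(), 'model.pt')
```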
Yes, to a degree, but the validation and test sets do not have all of the information included in the training data, specifically the labels!
This is deliberate: it reflects the real world. Your algorithms will be used on humans in the future, and they will look different from humans now. One major challenge is to create an algorithm that generalizes to new data. We check this by evaluating the algorithms on held-out data with somewhat different populations. You can also do this by sourcing more of your own data, or building in physiological and clinical knowledge.
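One simple way to probe generalization before submitting is to score your model separately on each data source. A minimal sketch, assuming you can tag each record with its source (the names and data here are hypothetical):

```python
from collections import defaultdict

def score_by_site(records):
    """records: iterable of (site, correct) pairs for each scored record."""
    totals = defaultdict(lambda: [0, 0])  # site -> [num_correct, num_total]
    for site, correct in records:
        totals[site][0] += int(correct)
        totals[site][1] += 1
    # A large gap in accuracy between sites suggests poor generalization.
    return {site: num_correct / num_total
            for site, (num_correct, num_total) in totals.items()}

print(score_by_site([('A', True), ('A', False), ('B', True)]))
```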
No, we have the training, validation, and test data as well as the evaluation code.
The Challenge organizers always welcome feedback from the teams and the public, particularly during the first phase of the Challenge from January to April. After that period, we update the Challenge rules to address the public commentary. The metrics are never perfect, but we try to create meaningful metrics that are relevant to healthcare, which is why we rarely use the usual information retrieval metrics. We often attempt to build in the clinical response, the medical resources available, and the relative (health and financial) costs of false vs. true positives and negatives. In general, the top 5-10 teams are often comparable. We believe that the discussion we generate around the problem, the heterogeneity of approaches, and the optimization of domain-aware metrics are more important than any single winner.
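As an illustration of the idea (the costs below are invented for this sketch, not the 2024 Challenge metric), a cost-weighted score might weight errors by their clinical consequences:

```python
def cost_weighted_score(tp, fp, fn, tn, c_tp=0.0, c_fp=1.0, c_fn=5.0, c_tn=0.0):
    """Average cost per patient (lower is better). Here, a missed diagnosis
    (false negative) costs five times an unnecessary follow-up (false positive);
    correct decisions cost nothing."""
    total = tp + fp + fn + tn
    return (c_tp * tp + c_fp * fp + c_fn * fn + c_tn * tn) / total

print(cost_weighted_score(tp=80, fp=10, fn=5, tn=105))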
No, the leaderboard contains scores on the validation data during the unofficial and official phases of the Challenge. The final scores on the test data will be released after the conference for the preferred model selected by each team.
You will be able to choose which model you would like to have scored on the test set. We will ask for teams to choose their preferred model shortly before the end of the official phase of the Challenge. If you do not choose a model, or if there is any ambiguity about your choice, then we will use the model with the highest score on the validation data.
We are using a g4dn.4xlarge instance on AWS to run your code. It has 16 vCPUs, 64 GB of RAM (60 GB available to your code), 300 GB of local storage (in addition to the data), and an optional NVIDIA T4 GPU.
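Because the GPU is optional, your code should run with or without one. A minimal sketch, assuming PyTorch:

```python
import torch

# Use the NVIDIA T4 if the submission requested (and received) a GPU;
# otherwise fall back to the CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(f'Using device: {device}')

# Move your model and each batch to the selected device, e.g.:
# model = model.to(device)
# features = features.to(device)
```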
For training your model on the training data, we impose a 48-hour time limit for submissions that request a GPU and a 72-hour time limit for submissions that do not request a GPU. For running your trained model on the validation or test data, we impose a 24-hour time limit whether or not the submission requests a GPU.
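One way to stay within these limits is to track wall-clock time and stop training with enough time left to save your model. A minimal sketch (the safety margin is an arbitrary choice for illustration, not a Challenge requirement):

```python
import time

TIME_LIMIT = 48 * 3600      # training limit with a GPU, in seconds
SAFETY_MARGIN = 2 * 3600    # reserve time to save the final model

start = time.monotonic()

def time_remaining():
    return TIME_LIMIT - SAFETY_MARGIN - (time.monotonic() - start)

for epoch in range(100):
    # ... run one epoch of training here ...
    if time_remaining() <= 0:
        print(f'Stopping after epoch {epoch} to respect the time limit.')
        break

# Save the model here so the entry finishes within the limit.
```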
No, please only submit your code to the submission system.
No, please only submit an entry after you have finished and tested your code.
No, please use the submission form to submit your entry through a repository.
No, not yet. If you change your code after submitting, then we may or may not run the updated version of your code. If you want to update your code but do not want us to run the updates (yet), then please make changes in a subdirectory or in another branch of your repository.
If you used Python for your entry, then please test it in Docker. See the submissions page for details.
No, only scored entries (submitted entries that receive a score) count against the total number of allowed entries.
For more general Challenge FAQs, please visit here.
Supported by the National Institute of Biomedical Imaging and Bioengineering (NIBIB) under NIH grant number R01EB030362.
© PhysioNet Challenges. Website content licensed under the Creative Commons Attribution 4.0 International Public License.