Frequently Asked Questions (FAQ) for the 2023 Challenge

This page provides specific FAQs for the 2023 Challenge. Please read the general Challenge FAQs for questions that apply to the Challenges as a whole.

I missed the abstract deadline. Can I still participate in the Challenge?

Yes, you can still participate. An accepted CinC abstract is required for prizes, rankings, and the opportunity to present your work at CinC, but you can still submit algorithms to the official phase without an accepted abstract.

A ‘wildcard’ entry is reserved for a high-scoring team that submitted an abstract that was not accepted or that was unable to submit an abstract by the deadline. Please read here for more details, including the deadline.

Can I make the license open source but restrict commercial use?

Yes, the philosophy of the Challenge is to encourage researchers to make their code free to use for research. We hope that companies will approach you to license the code, too! If you do not specify any license, then we will assume that the license is the BSD 3-Clause License.

Do we need to provide our training code?

Yes, this is a required (and exciting) part of this year’s Challenge.

Can I provide my training code but request that you not use/run it?

No, the training code is an important part of this year’s Challenge.

Am I allowed to do transfer learning using pre-trained networks?

Yes, most certainly. We encourage you to do this. You do not need to include your data in the code stack for training the algorithm, but you do need to include the pre-trained model in the code and provide code to retrain (continue training) on the training data we provide. You must also thoroughly document the content of the database you used to pre-train. You cannot use transfer learning to avoid training your model.


Are the training data representative of the validation and test data?

Yes, to a degree, but the validation and test sets do not have all of the information included in the training data, specifically the labels!

Do I need to upload your training data? What about the code for evaluating my algorithm?

No, we have the training, validation, and test data as well as the evaluation code.


Are the scores currently on the leaderboard the final scores for the Challenge?

No, the leaderboard contains scores on the validation data during the unofficial and official phases of the Challenge. The final scores on the test data will be released after the conference for the preferred model selected by each team.

How do I choose which submission is evaluated on the test data?

You will be able to choose which model you would like to have scored on the test set. We will ask teams to choose their preferred model shortly before the end of the official phase of the Challenge. If you do not choose a model, or if there is any ambiguity about your choice, then we will use the model with the highest score on the validation data.


What computational resources do you provide for our code?

We are using a g4dn.4xlarge instance on AWS to run your code. It has 16 vCPUs, 64 GB RAM (60 GB available to your code), 300 GB of local storage (in addition to the data), and an NVIDIA T4 GPU (available if your submission requests it).

For training your model on the training data, we impose a 48 hour time limit for submissions that request a GPU and a 72 hour time limit for submissions that do not request a GPU. For running your trained model on the validation or test data, we impose a 24 hour time limit whether or not a submission requests a GPU.
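One practical way to respect these limits is a wall-clock guard in your training loop, so the model is saved before the cutoff rather than killed mid-epoch. This is only a sketch: `train_one_epoch` is a hypothetical placeholder, and the 48-hour figure matches the GPU training limit above.

```python
# Minimal wall-clock guard sketch; stops training with a safety margin
# before the Challenge's 48-hour GPU training limit.
import time

TIME_LIMIT_SECONDS = 48 * 60 * 60   # GPU training limit from the rules
SAFETY_MARGIN = 60 * 60             # stop an hour early to save the model

def train_with_deadline(train_one_epoch, max_epochs=1000):
    start = time.monotonic()
    for epoch in range(max_epochs):
        train_one_epoch(epoch)  # hypothetical per-epoch training step
        if time.monotonic() - start > TIME_LIMIT_SECONDS - SAFETY_MARGIN:
            break
    return epoch + 1

# Toy usage: each "epoch" is instantaneous, so all three epochs run.
epochs_run = train_with_deadline(lambda e: None, max_epochs=3)
```

Checking the clock between epochs (rather than relying on the platform's hard timeout) lets you checkpoint and exit cleanly.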


Should I submit your example code to test the submission system?

No, please submit only your own code to the submission system.

Should I submit an empty repository to test the submission system?

No, please only submit an entry after you have finished and tested your code.

I left out a file, or I missed the deadline, or something else. Can I email you my code?

No, please use the submission form to submit your entry through a repository.

Do you run the code that was in my repository at the time of submission?

No, not necessarily. If you change your code after submitting, then we may or may not run the updated version of your code. If you want to update your code but do not want us to run the updates (yet), then please make changes in a subdirectory or in another branch of your repository.

Why is my entry unsuccessful on your submission system? It works on my computer.

If you used Python for your entry, then please test it in Docker. See the submissions page for details.
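A local Docker check typically looks something like the commands below; the image name and mounted folders are illustrative, so adapt them to your repository and the instructions on the submissions page.

```shell
# Build an image from the Dockerfile in your entry (name is illustrative).
docker build -t my-entry .

# Run the container with illustrative data and output folders mounted,
# then exercise your code the way the submission system would.
docker run -it -v "$PWD/data:/data" -v "$PWD/output:/output" my-entry bash
```

If your entry fails inside Docker on your machine, it will almost certainly fail on the submission system as well.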

My entry had some kind of error. Did I lose one of my total entries?

No, only scored entries (submitted entries that receive a score) count against the total number of allowed entries.

For more general Challenge FAQs, please visit here.

Supported by the National Institute of Biomedical Imaging and Bioengineering (NIBIB) under NIH grant number R01EB030362.

© PhysioNet Challenges. Website content licensed under the Creative Commons Attribution 4.0 International Public License.