Hi all,
The test data has been pushed to the GitHub repository and submissions have been enabled!
Our original policy was to keep all scores private until after the deadline in order to avoid fine-tuning on the test set. However, as was recently discussed, we have received considerable feedback, both publicly and privately, asking that we make at least some scores available to participants prior to the deadline. We understand the concern here, and we certainly want to avoid a situation where, due to a bookkeeping error, a team effectively misses its chance to compete for the official MICCAI leaderboard.
As a compromise, we will allow each team to see the approximate scores of at most two submissions prior to the deadline. By approximate, I mean the aggregate score over 45 randomly selected test cases. We hope that this will ease some concerns while maintaining a high-integrity leaderboard.
The mechanism for seeing these scores is as follows: within 24 hours of your submission, you will receive an email with its unique Submission ID, asking whether you would like to hear an approximate score. A response of “yes” will be honored no more than twice, but you are free to keep submitting even after hearing your second score. Your most recent submission will be used for the leaderboard, regardless of whether you have been told its score.
Thank you to everyone who provided this valuable feedback. Once again, if you haven’t already, please fill out the survey! It really helps us better serve the participants of this challenge and will inform our challenge design in the future.
Thanks, and happy inference!
Nick