Test Data Release Time

Hi,
May I know when you will release the test data?

Best,

I’m thinking I’ll have it up around 15:00 UTC

I was wondering why we would not see our results on the test data. This means we effectively have only one chance at a test-data submission, not as many as stated in the rules page. Suppose our model over-fitted on the test data for some reason; how would we become aware of that? Normally, other challenges allow some limited number of submissions on the test data (say 2-3) to verify model correctness. That way, participants get feedback on their results and can probably tune their model, while the limit still prevents leaderboard probing or any other kind of heavy hyperparameter tuning.

I’ve seen both methods used in past challenges. This is by far the issue that we’ve gotten the most pushback on for the challenge design. I understand the anxiety on the part of the competitors, since if there’s some book-keeping error, you effectively “miss your shot” at competing in the official challenge.

Unfortunately, the infrastructure at grand-challenge.org only allows for a daily submission limit, so allowing only 2-3 submissions over the course of two weeks is not possible.

Perhaps what we could do is privately contact competitors after each submission and ask whether they would like to hear their score. Scores would be provided only twice, and they could keep submitting after that, but they would not be told their score.

Does that sound like a reasonable solution?


@neheller this is an awesome solution. I guess twice is more than enough to be sure everything works well.

@Aramis_Vesal, as a small compromise, the scores we provide will be computed on a randomly selected 45 cases from the test set. This should still serve the purpose while keeping us in adherence with the recommended practices from this 2018 MICCAI paper on the topic.

Thanks for your feedback on this.

Hello, I want to ask a question about submitting the final predictions. What size should our predictions be? Since the images are too large to train the model on directly, we must resize them to be smaller, so our final predictions are smaller than the original images. Can we submit the smaller-sized results?

No, you’ll have to transform your predictions to the original size of the data. Something like bi/trilinear interpolation would probably work just fine for this.
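To illustrate the suggestion above, here is a minimal sketch of upsampling a prediction volume back to the original image shape with trilinear interpolation, using `scipy.ndimage.zoom`. The function and variable names are illustrative, not from the organizers' code; for integer label maps, the interpolated values are rounded back to the nearest class label (nearest-neighbor interpolation, `order=0`, is a common alternative).

```python
import numpy as np
from scipy.ndimage import zoom

def resize_to_original(pred, original_shape, order=1):
    """Resize a (smaller) prediction volume to the original image shape.

    order=1 gives linear interpolation (trilinear for a 3D volume);
    the result is rounded back to integer class labels afterwards.
    """
    # Per-axis zoom factors mapping the prediction grid onto the original grid.
    factors = [o / float(p) for o, p in zip(original_shape, pred.shape)]
    resized = zoom(pred.astype(np.float32), factors, order=order)
    # Round interpolated values back to the nearest integer label.
    return np.rint(resized).astype(pred.dtype)

# Example: a 32^3 prediction upsampled to a 64x128x128 original volume.
small_pred = np.zeros((32, 32, 32), dtype=np.uint8)
small_pred[8:24, 8:24, 8:24] = 1
full_pred = resize_to_original(small_pred, (64, 128, 128))
print(full_pred.shape)
```

For a binary or multi-class segmentation this keeps the label values discrete while matching the original voxel grid exactly, which is what the submission system would expect.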