Good evening Nick,
I’m currently running some tests using the KiTS19 dataset, and I now have all of my segmentation results saved as .nii.gz files. I would appreciate any guidance on the best route for computing my evaluation metrics.
Reading through this thread, I see discussions about an evaluator in Docker and the use of evaluation.py. Since I’m using nnU-Net, there is also a separate evaluator.py script.
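In the meantime, I've been computing per-case Dice scores myself. Here is a minimal sketch of what I mean, using plain NumPy on label volumes (in practice I load them from the .nii.gz files with nibabel, and I'm assuming KiTS19's convention of label 1 for kidney and label 2 for tumor; the official evaluator may aggregate labels differently):

```python
import numpy as np

def dice(pred, gt, label):
    """Dice coefficient for a single integer label."""
    p = (pred == label)
    g = (gt == label)
    denom = p.sum() + g.sum()
    if denom == 0:
        return 1.0  # both empty: count as perfect agreement
    return 2.0 * np.logical_and(p, g).sum() / denom

# Tiny synthetic arrays for illustration; real volumes would come from
# something like nibabel: nib.load("case_00210.nii.gz").get_fdata()
gt = np.array([[0, 1, 1], [2, 2, 0]])
pred = np.array([[0, 1, 0], [2, 2, 0]])
print(dice(pred, gt, 1))  # kidney label
print(dice(pred, gt, 2))  # tumor label
```

But of course this only works once I have the reference segmentations to compare against.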
My question is: is there a location where I can find the ground-truth segmentations of the KiTS19 test cases (case_00210 - case_00299)?