Region of interest & segmentation

Based on the annotation process referenced on the challenge page, I assume that the segmentation is only performed on the ROI proposed by the annotators.

But it seems that some of the segmentations are abruptly cut off at the boundary of the ROI?
(see figure below)

[image: example slice where the segmentation stops abruptly at the ROI boundary]

So, could this lead to ambiguous segmentations when training on the whole image?

A very similar question was just asked in another thread. To summarize our response: We used consistent anatomical landmarks to clip our annotations above and below in order to focus on the perirenal region, and we don’t think the networks will have much trouble replicating this. That said, it’s something we’re actively looking into and might change before the “final” version of the training set is released on July 1.
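For anyone training on the full volumes in the meantime, one workaround (not part of the organizers' pipeline, just a suggestion) is to restrict training or loss computation to the axial range that actually contains labels, so the network never sees the unannotated slices beyond the clipping landmarks. A minimal NumPy sketch, where the function name and the axis convention are illustrative assumptions:

```python
import numpy as np

def annotated_extent(label_volume: np.ndarray, axis: int = 0):
    """Return the (first, last) slice indices along `axis` that contain any
    foreground label. Slices outside this range were never annotated, so
    they can be excluded from training or evaluation."""
    other_axes = tuple(i for i in range(label_volume.ndim) if i != axis)
    foreground = np.any(label_volume > 0, axis=other_axes)
    indices = np.flatnonzero(foreground)
    if indices.size == 0:
        return None
    return int(indices[0]), int(indices[-1])

# Toy example: only slices 2..4 carry labels along axis 0.
labels = np.zeros((8, 16, 16), dtype=np.uint8)
labels[2:5, 4:8, 4:8] = 1
print(annotated_extent(labels, axis=0))  # -> (2, 4)
```

Cropping (or masking the loss) to this extent keeps the clipped annotation boundaries from being treated as true background.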