Germán González, George R. Washko, Raúl San José Estépar
Sierra Research S.L., Alicante, Spain.
Division of Pulmonary and Critical Care Medicine, Department of Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA.
Image Anal Mov Organ Breast Thorac Images (2018). 2018 Sep;11040:215-224. doi: 10.1007/978-3-030-00946-5_22. Epub 2018 Sep 12.
Labeled data is the current bottleneck of medical image research. Substantial efforts are made to generate segmentation masks to characterize a given organ. The community thus ends up with multiple label maps of individual structures in different cases, which are not suitable for current multi-organ segmentation frameworks. Our objective is to leverage segmentations of multiple organs across different cases to train a robust multi-organ deep learning segmentation network. We propose a modified cost function that takes into account only the voxels labeled in the image, ignoring unlabeled structures. We evaluate the proposed methodology in the context of pectoralis muscle and subcutaneous fat segmentation on chest CT scans. Six different structures are segmented from an axial slice centered on the transverse aorta. We compare the performance of a network trained on 3,000 images in which only one structure has been annotated (PUNet) against six UNets (one per structure) and a multi-class UNet trained on 500 fully annotated images, showing equivalence between the three methods (Dice coefficients of 0.909, 0.906, and 0.909, respectively). We further propose a modification of the architecture that adds convolutions to the skip connections (CUNet). When trained with partially labeled images, it statistically significantly outperforms the other three methods (Dice 0.916, p < 0.0001). We therefore show that (a) when the number of organ annotations is kept constant, training with partially labeled images is equivalent to training with fully labeled data, and (b) adding convolutions in the skip connections improves performance.
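The modified cost function described in the abstract, which computes the loss only over labeled voxels, can be sketched as a masked cross-entropy. The following is a minimal NumPy illustration, not the authors' implementation; it assumes a one-hot label encoding in which unlabeled voxels carry an all-zero label vector, and all names are illustrative.

```python
import numpy as np

def masked_cross_entropy(probs, onehot, eps=1e-7):
    """Cross-entropy averaged over labeled voxels only.

    probs  : (H, W, C) per-voxel softmax probabilities
    onehot : (H, W, C) one-hot labels; an all-zero row marks an unlabeled voxel
    """
    # Mask of voxels that actually carry an annotation
    labeled = onehot.sum(axis=-1) > 0
    # Standard per-voxel cross-entropy against the one-hot target
    ce = -(onehot * np.log(probs + eps)).sum(axis=-1)
    # Average over labeled voxels only; unlabeled structures contribute nothing
    return ce[labeled].mean() if labeled.any() else 0.0
```

Because unlabeled voxels are excluded from the average, images annotated for a single structure can be mixed freely in one training set, which is the property the PUNet experiments rely on.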