Medical Imaging, Robotics, Analytic Computing Laboratory & Engineering (MIRACLE), Key Lab of Intelligent Information Processing of Chinese Academy of Sciences, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, 100190, China; School of Computer Science and Engineering, Southeast University, Nanjing, 210000, China.
Medical Imaging, Robotics, Analytic Computing Laboratory & Engineering (MIRACLE), Key Lab of Intelligent Information Processing of Chinese Academy of Sciences, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, 100190, China.
Med Image Anal. 2021 May;70:101979. doi: 10.1016/j.media.2021.101979. Epub 2021 Feb 3.
Annotating multiple organs in medical images is both costly and time-consuming; therefore, existing labeled multi-organ datasets are often small in sample size and mostly partially labeled, that is, a dataset has a few organs labeled but not all organs. In this paper, we investigate how to learn a single multi-organ segmentation network from a union of such datasets. To this end, we propose two novel types of loss function, designed particularly for this scenario: (i) marginal loss and (ii) exclusion loss. Because the background label of a partially labeled image is, in fact, a 'merged' label of all unlabeled organs and the 'true' background (in the sense of full labels), the probability of this 'merged' background label is a marginal probability, obtained by summing the relevant probabilities before merging. This marginal probability can be plugged into any existing loss function (such as cross-entropy loss, Dice loss, etc.) to form a marginal loss. Leveraging the fact that the organs are non-overlapping, we propose the exclusion loss to gauge the dissimilarity between the labeled organs and the estimated segmentation of the unlabeled organs. Experiments on a union of five benchmark datasets for multi-organ segmentation of the liver, spleen, left and right kidneys, and pancreas demonstrate that using our newly proposed loss functions brings a conspicuous performance improvement to state-of-the-art methods without introducing any extra computation.
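The snippet below is a minimal PyTorch-style sketch of the two ideas as described in the abstract, not the authors' implementation. The function names, the channel layout (channel 0 = background), and the `unlabeled` argument are assumptions made for illustration only.

```python
# Hypothetical sketch of the marginal and exclusion losses described above.
# Channel 0 is assumed to be background; `unlabeled` lists the organ channels
# that the current (partially labeled) dataset does not annotate.
# Names and signatures are illustrative, not taken from the authors' code.
import torch
import torch.nn.functional as F

def marginal_cross_entropy(logits, target, unlabeled):
    """Cross entropy on the merged label space of a partially labeled image.

    logits: (N, C, H, W) network outputs; target: (N, H, W) partial labels in
    which unlabeled organs are annotated as background (0).
    """
    probs = F.softmax(logits, dim=1)
    num_classes = logits.shape[1]
    labeled = [c for c in range(1, num_classes) if c not in unlabeled]
    # Marginal probability of the 'merged' background: true background plus
    # every unlabeled organ, summed before merging.
    merged_bg = probs[:, [0] + list(unlabeled)].sum(dim=1, keepdim=True)
    merged = torch.cat([merged_bg, probs[:, labeled]], dim=1)  # (N, 1+K, H, W)
    # Remap the partial ground truth onto the merged label space: 0 stays
    # background, each labeled organ becomes 1..K in the order of `labeled`.
    remap = torch.zeros(num_classes, dtype=torch.long, device=target.device)
    for i, c in enumerate(labeled, start=1):
        remap[c] = i
    merged_target = remap[target]
    return F.nll_loss(torch.log(merged.clamp_min(1e-8)), merged_target)

def exclusion_penalty(logits, target, unlabeled):
    """Penalize probability mass that other organ channels place inside a
    labeled organ's ground-truth region (organs do not overlap)."""
    probs = F.softmax(logits, dim=1)
    num_classes = logits.shape[1]
    labeled = [c for c in range(1, num_classes) if c not in unlabeled]
    loss = logits.new_zeros(())
    for c in labeled:
        mask = (target == c).unsqueeze(1).float()          # (N, 1, H, W)
        others = [k for k in range(1, num_classes) if k != c]
        loss = loss + (probs[:, others] * mask).mean()
    return loss
```

As a usage sketch, a volume from a liver-only dataset would pass the channels of the other four organs as `unlabeled`, so their probability mass is folded into the merged background for the marginal loss and discouraged inside the liver mask by the exclusion penalty.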