Diao Zhaoshuo, Jiang Huiyan, Han Xian-Hua, Yao Yu-Dong, Shi Tianyu
Software College, Northeastern University, Shenyang 110819, People's Republic of China.
Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang 110819, People's Republic of China.
Phys Med Biol. 2021 Oct 8;66(20). doi: 10.1088/1361-6560/ac299a.
Precise delineation of the target tumor from positron emission tomography-computed tomography (PET-CT) is a key step in clinical practice and radiation therapy. PET-CT co-segmentation exploits the complementary information of the two modalities to reduce the uncertainty of single-modal segmentation and thus obtain more accurate results. Current PET-CT segmentation methods based on fully convolutional networks (FCNs) mainly adopt image fusion or feature fusion. These fusion strategies do not account for the uncertainty of multi-modal segmentation, and complex feature fusion consumes substantial computing resources, especially when processing 3D volumes. In this work, we analyze PET-CT co-segmentation from the perspective of uncertainty and propose the evidence fusion network (EFNet). Trained with the proposed evidence loss, the network outputs a PET result and a CT result that each carry uncertainty, which serve as PET evidence and CT evidence. Evidence fusion is then used to reduce the uncertainty of the single-modal evidence, and the final segmentation result is obtained by fusing the PET evidence and the CT evidence. EFNet uses a basic 3D U-Net as its backbone and employs only simple unidirectional feature fusion. Moreover, EFNet can train and predict PET evidence and CT evidence separately, without requiring parallel training of two branch networks. Experiments on soft-tissue sarcoma and lymphoma datasets show that our method improves the Dice score over 3D U-Net by 8% and 5%, respectively, and over a complex feature-fusion method by 7% and 2%, respectively. These results show that, in FCN-based PET-CT segmentation, outputting uncertainty evidence and applying evidence fusion can both simplify the network and improve segmentation results.
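The abstract does not spell out the fusion formula, but evidence fusion of this kind is commonly an instance of Dempster's rule of combination. The sketch below is an assumption-laden illustration, not the paper's actual implementation: it assumes each modality yields, per voxel, a mass function over {tumor, background, uncertain}, and combines the PET and CT masses so that agreement sharpens the decision and residual uncertainty shrinks.

```python
import numpy as np

def dempster_fuse(m1, m2):
    """Fuse two per-voxel mass functions with Dempster's rule.

    m1, m2: arrays of shape (..., 3) holding masses
    [tumor, background, uncertain] that sum to 1 on the last axis.
    This is an illustrative sketch, not EFNet's exact fusion.
    """
    t1, b1, u1 = m1[..., 0], m1[..., 1], m1[..., 2]
    t2, b2, u2 = m2[..., 0], m2[..., 1], m2[..., 2]
    # Conflict mass: one source supports tumor while the other supports background.
    k = t1 * b2 + b1 * t2
    # Normalize the non-conflicting combinations by (1 - k).
    t = (t1 * t2 + t1 * u2 + u1 * t2) / (1.0 - k)
    b = (b1 * b2 + b1 * u2 + u1 * b2) / (1.0 - k)
    u = (u1 * u2) / (1.0 - k)
    return np.stack([t, b, u], axis=-1)

# Two moderately uncertain single-modal opinions agreeing on "tumor":
pet = np.array([0.6, 0.1, 0.3])   # hypothetical PET evidence
ct = np.array([0.5, 0.2, 0.3])    # hypothetical CT evidence
fused = dempster_fuse(pet, ct)    # fused uncertainty is lower than either input's
```

Note how the fused uncertainty mass (`u1 * u2`, normalized) is strictly smaller than either modality's own uncertainty whenever both inputs commit some mass to a class, which is the sense in which fusing two single-modal evidences "reduces uncertainty."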