Medical lesion segmentation by combining multimodal images with modality weighted UNet.

Affiliations

College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China.

Cancer Hospital of the University of Chinese Academy of Sciences, Zhejiang Cancer Hospital, Hangzhou, China.

Publication Information

Med Phys. 2022 Jun;49(6):3692-3704. doi: 10.1002/mp.15610. Epub 2022 Apr 7.

Abstract

PURPOSE

Automatic segmentation of medical lesions is a prerequisite for efficient clinical analysis. Segmentation algorithms for multimodal medical images have received much attention in recent years, and different strategies for multimodal combination (or fusion) have been developed, such as probability theory, fuzzy models, belief functions, and deep neural networks. In this paper, we propose the modality weighted UNet (MW-UNet), together with an attention-based fusion method, to combine multimodal images for medical lesion segmentation.

METHODS

MW-UNet is a multimodal fusion network based on UNet, but with shallower layers and fewer feature-map channels to reduce the number of network parameters, and it introduces a new multimodal fusion method called fusion attention. Feature maps in intermediate layers are combined using a weighted sum rule together with fusion attention. During training, all fusion weights are updated through backpropagation like the other parameters in the network. We also incorporate residual blocks into MW-UNet to further improve segmentation performance. The agreement between the automatic multimodal lesion segmentations and the manual contours was quantified by (1) five metrics: Dice, 95% Hausdorff distance (HD95), volumetric overlap error (VOE), relative volume difference (RVD), and mean intersection-over-union (mIoU); and (2) the number of parameters and FLOPs, to measure network complexity.
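The abstract describes the fusion only at a high level. The PyTorch sketch below is a minimal illustration of a learnable weighted-sum fusion plus a toy attention gate over modalities, assuming (B, C, H, W) feature maps; the class names ModalityWeightedFusion and FusionAttentionGate are hypothetical, and the paper's actual fusion-attention design may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityWeightedFusion(nn.Module):
    """Learnable weighted-sum fusion of per-modality feature maps."""
    def __init__(self, num_modalities: int):
        super().__init__()
        # One scalar weight per modality, trained by backpropagation
        # alongside the rest of the network, as the abstract describes.
        self.weights = nn.Parameter(torch.ones(num_modalities))

    def forward(self, feats):
        # feats: list of (B, C, H, W) tensors, one per modality.
        w = F.softmax(self.weights, dim=0)  # normalize weights to sum to 1
        return sum(w[i] * f for i, f in enumerate(feats))

class FusionAttentionGate(nn.Module):
    """Toy spatial attention over modalities -- a stand-in for the
    paper's 'fusion attention', whose exact design the abstract
    does not specify."""
    def __init__(self, channels: int, num_modalities: int):
        super().__init__()
        # A 1x1 conv scores each modality at every spatial location.
        self.score = nn.Conv2d(channels * num_modalities,
                               num_modalities, kernel_size=1)

    def forward(self, feats):
        # attn: (B, M, H, W), softmaxed across the M modalities.
        attn = F.softmax(self.score(torch.cat(feats, dim=1)), dim=1)
        return sum(attn[:, i:i + 1] * f for i, f in enumerate(feats))

# Usage: fuse two CECT-phase feature maps at an intermediate layer.
arterial = torch.randn(1, 32, 64, 64)
venous = torch.randn(1, 32, 64, 64)
fused = ModalityWeightedFusion(2)([arterial, venous])   # (1, 32, 64, 64)
gated = FusionAttentionGate(32, 2)([arterial, venous])  # (1, 32, 64, 64)
```

In a full MW-UNet, gates like these would presumably sit at several intermediate encoder layers, with the fused maps feeding a shared decoder; the abstract only supports the weighted-sum-plus-attention idea, not this exact layout.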

RESULTS

The proposed method was evaluated on ZJCHD, a contrast-enhanced computed tomography (CECT) data set for liver lesion segmentation collected at the Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Hangzhou, China. For accuracy evaluation, we used 120 patients with liver lesions from ZJCHD, of which 100 were used for fourfold cross-validation (CV) and 20 for a hold-out (HO) test. The mean Dice was and for the HO and CV tests, respectively. The corresponding HD95, VOE, RVD, and mIoU of the two tests were 1.95 ± 1.83 and 2.67 ± 3.35 mm, 13.11 ± 15.83 and , 12.20 ± 18.20 and , and 83.79 ± 15.83 and . The parameters and FLOPs of our method are 4.04 M and 18.36 G, respectively.
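For reference, the overlap metrics reported above have standard definitions on binary masks. The NumPy sketch below illustrates those definitions; it is not code from the paper, and HD95 is omitted because it requires surface-distance machinery (e.g., distance transforms).

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    # Dice = 2|P ∩ G| / (|P| + |G|)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def voe(pred: np.ndarray, gt: np.ndarray) -> float:
    # Volumetric overlap error = 1 - Jaccard index.
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return 1.0 - inter / union

def rvd(pred: np.ndarray, gt: np.ndarray) -> float:
    # Relative volume difference between predicted and reference volumes.
    return (pred.sum() - gt.sum()) / gt.sum()

def miou(pred: np.ndarray, gt: np.ndarray) -> float:
    # Mean IoU averaged over background (0) and foreground (1).
    ious = []
    for cls in (0, 1):
        p, g = pred == cls, gt == cls
        ious.append(np.logical_and(p, g).sum() / np.logical_or(p, g).sum())
    return float(np.mean(ious))

pred = np.random.rand(64, 64) > 0.5  # dummy binary masks for illustration
gt = np.random.rand(64, 64) > 0.5
print(dice(pred, gt), voe(pred, gt), rvd(pred, gt), miou(pred, gt))
```

In PyTorch, a model's parameter count (the 4.04 M figure above) can be reproduced with `sum(p.numel() for p in model.parameters())`; FLOPs are typically estimated with a profiling tool.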

CONCLUSIONS

The results show that our method performs well on multimodal liver lesion segmentation. It can easily be extended to other multimodal data sets and to other networks for multimodal fusion. Our method has the potential to provide doctors with multimodal annotations and to assist them in clinical diagnosis.

