
Modality preserving U-Net for segmentation of multimodal medical images.

Author Information

Wu Bingxuan, Zhang Fan, Xu Liang, Shen Shuwei, Shao Pengfei, Sun Mingzhai, Liu Peng, Yao Peng, Xu Ronald X

Affiliations

Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, China.

Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou, China.

Publication Information

Quant Imaging Med Surg. 2023 Aug 1;13(8):5242-5257. doi: 10.21037/qims-22-1367. Epub 2023 Jun 14.

Abstract

BACKGROUND

Recent advances in artificial intelligence and digital image processing have inspired the use of deep neural networks for segmentation tasks in multimodal medical imaging. Unlike natural images, multimodal medical images contain much richer information about the properties of the different modalities and therefore present greater challenges for semantic segmentation. However, there has been no systematic study that integrates multiscale and structured analysis of single-modal and multimodal medical images.

METHODS

We propose a deep neural network, named Modality Preserving U-Net (MPU-Net), for modality-preserving analysis and segmentation of medical targets in multimodal medical images. The proposed MPU-Net consists of a modality preservation encoder (MPE) module that preserves feature independence among the modalities and a modality fusion decoder (MFD) module that performs multiscale feature fusion analysis for each modality in order to provide a rich feature representation for the final task. The effectiveness of this single-modal preservation and multimodal fusion feature extraction approach is verified by multimodal segmentation experiments and an ablation study using brain tumor and prostate datasets from the Medical Segmentation Decathlon (MSD).
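The MPE/MFD split described above can be illustrated with a toy sketch: each modality is encoded by its own independent branch (preserving feature independence), and the per-modality features are only combined at the fusion stage. This is a minimal, hypothetical NumPy illustration of the design principle, not the authors' implementation; the function names, layer choices, and tensor shapes are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_modality(x, w):
    """Toy per-modality encoder: one linear map + ReLU, a stand-in for the
    independent convolutional branch each modality gets in the MPE."""
    return np.maximum(x @ w, 0.0)

def fuse(features):
    """Toy fusion step: concatenate per-modality features along the channel
    axis, a stand-in for the MFD's multiscale feature fusion."""
    return np.concatenate(features, axis=-1)

# Four MRI modalities (e.g., T1, T1ce, T2, FLAIR) as flattened toy patches:
# 8 voxels per patch, 16 input features each.
modalities = [rng.normal(size=(8, 16)) for _ in range(4)]

# Each modality keeps its OWN weights -- features stay independent in the encoder.
weights = [rng.normal(size=(16, 4)) for _ in range(4)]

# MPE stage: encode every modality separately.
per_modality = [encode_modality(x, w) for x, w in zip(modalities, weights)]

# MFD stage: fuse the independent features into one representation
# that would feed the final segmentation head.
fused = fuse(per_modality)
print(fused.shape)  # (8, 16): 4 modalities x 4 channels each
```

The key design point the sketch captures is that no weights are shared across modalities before fusion, so modality-specific features are not averaged away early.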

RESULTS

The segmentation experiments demonstrated the superiority of MPU-Net over other methods in segmentation tasks for multimodal medical images. In the brain tumor segmentation tasks, the Dice scores (DSCs) for the whole tumor (WT), the tumor core (TC) and the enhancing tumor (ET) regions were 89.42%, 86.92%, and 84.59%, respectively. Meanwhile, the 95% Hausdorff distance (HD95) results were 3.530, 4.899 and 2.555, respectively. In the prostate segmentation tasks, the DSCs for the peripheral zone (PZ) and the transitional zone (TZ) of the prostate were 71.20% and 90.38%, respectively. Meanwhile, the HD95 results were 6.367 and 4.766, respectively. The ablation study showed that combining single-modal preservation with multimodal fusion improved the performance of multimodal medical image feature analysis.
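For reference, the two metrics reported above can be computed as follows. This is a generic sketch of the standard Dice coefficient and 95th-percentile Hausdorff distance for binary masks, not the paper's evaluation code; the exact distance units (voxels vs. mm) and boundary extraction used by the authors are not stated in the abstract.

```python
import numpy as np

def dice_score(pred, gt):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def hd95(pred_pts, gt_pts):
    """95th-percentile symmetric Hausdorff distance between two point sets
    (e.g., boundary voxel coordinates of prediction and ground truth)."""
    # Pairwise Euclidean distances between all points of the two sets.
    d = np.linalg.norm(pred_pts[:, None, :] - gt_pts[None, :, :], axis=-1)
    fwd = d.min(axis=1)  # each predicted point to its nearest GT point
    bwd = d.min(axis=0)  # each GT point to its nearest predicted point
    return np.percentile(np.concatenate([fwd, bwd]), 95)

# Small worked example: 2 of 2 predicted voxels overlap a 2-voxel GT mask,
# with one false positive -> DSC = 2*2 / (3+2) = 0.8.
pred = np.array([[0, 1, 1],
                 [0, 1, 0],
                 [0, 0, 0]])
gt   = np.array([[0, 1, 1],
                 [0, 0, 0],
                 [0, 0, 0]])
print(round(dice_score(pred, gt), 3))  # 0.8
```

A perfect segmentation gives DSC = 1.0 and HD95 = 0; higher DSC and lower HD95 are better, which is how the WT/TC/ET and PZ/TZ numbers above should be read.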

CONCLUSIONS

In segmentation tasks using brain tumor and prostate datasets, the MPU-Net method achieved improved performance compared with conventional methods, indicating its potential application to other segmentation tasks in multimodal medical images.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8005/10423364/4a3d71bcf6a5/qims-13-08-5242-f1.jpg
