Automated cardiac segmentation of cross-modal medical images using unsupervised multi-domain adaptation and spatial neural attention structure.

Affiliations

Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, Hunan Normal University, Changsha 410081, China; Key Laboratory of Computing and Stochastic Mathematics (Ministry of Education), Hunan Normal University, Changsha 410081, China; Hunan Xiangjiang Artificial Intelligence Academy, Hunan Normal University, Changsha 410081, China.

Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, Hunan Normal University, Changsha 410081, China; Key Laboratory of Computing and Stochastic Mathematics (Ministry of Education), Hunan Normal University, Changsha 410081, China.

Publication Information

Med Image Anal. 2021 Aug;72:102135. doi: 10.1016/j.media.2021.102135. Epub 2021 Jun 17.

Abstract

Accurate cardiac segmentation of multimodal images, e.g., magnetic resonance (MR) and computed tomography (CT) images, plays a pivotal role in the auxiliary diagnosis, treatment, and postoperative assessment of cardiovascular diseases. However, training a well-behaved segmentation model for cross-modal cardiac image analysis is challenging, because images from different devices and acquisition conditions differ widely in appearance and distribution. For instance, a segmentation model well trained on a source domain of MR images often fails to segment CT images. In this work, a cardiac segmentation scheme for cross-modal images is proposed using a symmetric fully convolutional neural network (SFCNN) with unsupervised multi-domain adaptation (UMDA) and a spatial neural attention (SNA) structure, termed UMDA-SNA-SFCNN, which requires no annotation on the test domain. Specifically, UMDA-SNA-SFCNN incorporates SNA into a classic adversarial domain-adaptation network to highlight relevant regions while restraining irrelevant areas in the cross-modal images, thereby suppressing negative transfer during unsupervised domain adaptation. In addition, multi-layer feature discriminators and a predictive segmentation-mask discriminator are established to connect the multi-layer features and the segmentation mask of the backbone network, SFCNN, realizing fine-grained alignment of the unsupervised cross-modal feature domains. Extensive confirmative and comparative experiments on the benchmark Multi-Modality Whole Heart Challenge dataset show that the proposed model is superior to state-of-the-art cross-modal segmentation methods.

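The abstract does not specify the internals of the SNA module. As a rough illustration only, a spatial attention gate is commonly realized as a learned per-pixel sigmoid mask that reweights a feature map, emphasizing relevant regions and suppressing irrelevant ones. The minimal NumPy sketch below assumes a 1x1 channel projection with illustrative weights `w`; it is not the paper's architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(features, w):
    """Collapse channels with a 1x1 projection (a weighted sum over the
    channel axis), squash the score map with a sigmoid to obtain a
    per-pixel mask in (0, 1), and reweight the feature map so that
    spatially relevant regions are emphasized."""
    # features: (C, H, W); w: (C,) weights of the illustrative 1x1 projection
    score_map = np.tensordot(w, features, axes=([0], [0]))  # (H, W)
    mask = sigmoid(score_map)                               # (H, W), values in (0, 1)
    return features * mask[None, :, :]                      # broadcast mask over channels

# Toy usage: a 4-channel 8x8 feature map
rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 8, 8))
w = rng.standard_normal(4)
out = spatial_attention(feats, w)
print(out.shape)  # (4, 8, 8)
```

Because the mask lies strictly in (0, 1), the gated output can only attenuate feature magnitudes, never amplify them; in a trained network the projection weights would be learned jointly with the segmentation backbone.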
