

A 3D Cross-Modality Feature Interaction Network With Volumetric Feature Alignment for Brain Tumor and Tissue Segmentation.

Author Information

Zhuang Yuzhou, Liu Hong, Song Enmin, Hung Chih-Cheng

Publication Information

IEEE J Biomed Health Inform. 2023 Jan;27(1):75-86. doi: 10.1109/JBHI.2022.3214999. Epub 2023 Jan 4.

Abstract

Accurate volumetric segmentation of brain tumors and tissues is beneficial for quantitative brain analysis and brain disease identification in multi-modal Magnetic Resonance (MR) images. Nevertheless, due to the complex relationships between modalities, 3D Fully Convolutional Networks (3D FCNs) that use simple multi-modal fusion strategies struggle to learn the complex, nonlinear complementary information between modalities. Meanwhile, indiscriminate feature aggregation between low-level and high-level features easily causes volumetric feature misalignment in 3D FCNs. Furthermore, the 3D convolution operations of 3D FCNs are excellent at modeling local relations but typically inefficient at capturing global relations between distant regions in volumetric images. To tackle these issues, we propose an Aligned Cross-Modality Interaction Network (ACMINet) for segmenting brain tumor and tissue regions from MR images. In this network, a cross-modality feature interaction module is first designed to adaptively and efficiently fuse and refine multi-modal features. Secondly, a volumetric feature alignment module is developed to dynamically align low-level and high-level features via a learnable volumetric feature deformation field. Thirdly, we propose a volumetric dual interaction graph reasoning module for graph-based global context modeling in the spatial and channel dimensions. Our proposed method is applied to brain glioma, vestibular schwannoma, and brain tissue segmentation tasks, with extensive experiments performed on the BraTS2018, BraTS2020, Vestibular Schwannoma, and iSeg-2017 datasets. Experimental results show that ACMINet achieves state-of-the-art segmentation performance on all four benchmark datasets and obtains the highest DSC score for the hard-to-segment enhancing tumor region on the validation leaderboard of the BraTS2020 challenge.
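The abstract describes the volumetric feature alignment module only at a high level (a learnable deformation field that aligns low-level and high-level features), so the sketch below illustrates that general idea rather than the authors' implementation: a 3×3×3 convolution predicts a per-voxel offset field from the concatenated features, and grid_sample warps the up-sampled high-level features onto the low-level grid. The class name, the zero-initialized offset head, and the residual aggregation are all assumptions made for this sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class VolumetricFeatureAlignment(nn.Module):
    """Minimal sketch (not the paper's code): warp up-sampled high-level
    features onto the low-level feature grid with a learned 3D offset field."""

    def __init__(self, channels):
        super().__init__()
        # Predict a per-voxel 3D offset from the concatenated feature maps.
        self.offset = nn.Conv3d(2 * channels, 3, kernel_size=3, padding=1)
        # Zero-init so the warp starts as the identity mapping.
        nn.init.zeros_(self.offset.weight)
        nn.init.zeros_(self.offset.bias)

    def forward(self, low, high):
        # Up-sample the coarse high-level features to the low-level resolution.
        high = F.interpolate(high, size=low.shape[2:], mode='trilinear',
                             align_corners=False)
        field = self.offset(torch.cat([low, high], dim=1))  # (N, 3, D, H, W)

        # Identity sampling grid in normalized [-1, 1] coordinates.
        n, _, d, h, w = low.shape
        zs = torch.linspace(-1, 1, d, device=low.device)
        ys = torch.linspace(-1, 1, h, device=low.device)
        xs = torch.linspace(-1, 1, w, device=low.device)
        gz, gy, gx = torch.meshgrid(zs, ys, xs, indexing='ij')
        # grid_sample expects (x, y, z) ordering in the last dimension.
        grid = torch.stack([gx, gy, gz], dim=-1).unsqueeze(0)
        grid = grid.expand(n, -1, -1, -1, -1)

        # Treat the field's channels as (dz, dy, dx) normalized displacements
        # and reorder them to grid_sample's (dx, dy, dz) convention.
        offset = field.permute(0, 2, 3, 4, 1).flip(-1)  # (N, D, H, W, 3)
        warped = F.grid_sample(high, grid + offset, mode='bilinear',
                               align_corners=False)
        return low + warped  # residual aggregation of the aligned features


if __name__ == "__main__":
    vfa = VolumetricFeatureAlignment(channels=32)
    low = torch.randn(1, 32, 32, 32, 32)   # e.g., an encoder skip connection
    high = torch.randn(1, 32, 16, 16, 16)  # coarser decoder features
    print(vfa(low, high).shape)            # torch.Size([1, 32, 32, 32, 32])
```

Note that for 5D inputs, mode='bilinear' in grid_sample performs trilinear interpolation, so no separate trilinear flag is needed for the warping step.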

