
Adaptive Cross-Feature Fusion Network With Inconsistency Guidance for Multi-Modal Brain Tumor Segmentation.

Author Information

Yue Guanghui, Zhuo Guibin, Zhou Tianwei, Liu Weide, Wang Tianfu, Jiang Qiuping

Publication Information

IEEE J Biomed Health Inform. 2025 May;29(5):3148-3158. doi: 10.1109/JBHI.2023.3347556. Epub 2025 May 6.

Abstract

In the context of contemporary artificial intelligence, a growing number of deep learning (DL)-based segmentation methods have recently been proposed for brain tumor segmentation (BraTS) via analysis of multi-modal MRI. However, existing DL-based works usually fuse the information of different modalities directly at multiple stages without considering the gap between modalities, leaving much room for performance improvement. In this paper, we introduce a novel deep neural network, termed ACFNet, for accurately segmenting brain tumors in multi-modal MRI. Specifically, ACFNet has a parallel structure with three encoder-decoder streams. The upper and lower streams generate coarse predictions from individual modalities, while the middle stream integrates the complementary knowledge of the different modalities and bridges the gap between them to yield a fine prediction. To effectively integrate the complementary information, we propose an adaptive cross-feature fusion (ACF) module at the encoder that first explores the correlation between the feature representations of the upper and lower streams and then refines the fused correlation information. To bridge the gap between the information from multi-modal data, we propose a prediction inconsistency guidance (PIG) module at the decoder that, through a guidance strategy, helps the network focus on error-prone regions when incorporating the features from the encoder. The guidance is obtained by calculating the prediction inconsistency between the upper and lower streams and highlights the gap between multi-modal data. Extensive experiments on the BraTS 2020 dataset show that ACFNet handles the BraTS task with promising results and outperforms six mainstream competing methods.
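The core idea of the PIG module, as described above, is to use disagreement between the two single-modality streams as a spatial guidance signal for error-prone regions. The following is a minimal sketch of that idea in numpy; the function names and the residual re-weighting form `features * (1 + guidance)` are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def inconsistency_guidance(pred_upper: np.ndarray, pred_lower: np.ndarray) -> np.ndarray:
    """Per-voxel prediction inconsistency between the two coarse predictions.

    Where the upper and lower (single-modality) streams agree, the map is
    near zero; where they disagree, the map is large, flagging regions the
    fused stream should attend to.
    """
    return np.abs(pred_upper - pred_lower)

def apply_guidance(features: np.ndarray, guidance: np.ndarray) -> np.ndarray:
    """Re-weight encoder features with the guidance map.

    The (1 + guidance) form keeps the original features intact while
    amplifying responses in high-disagreement regions.
    """
    return features * (1.0 + guidance)

# Toy example: probabilities from the two streams over three voxels.
pred_upper = np.array([0.9, 0.2, 0.5])
pred_lower = np.array([0.1, 0.2, 0.7])
guidance = inconsistency_guidance(pred_upper, pred_lower)   # [0.8, 0.0, 0.2]
weighted = apply_guidance(np.ones(3), guidance)             # [1.8, 1.0, 1.2]
```

In this sketch the first voxel, where the two streams disagree most, receives the strongest re-weighting; a real network would learn how to consume this map rather than use a fixed multiplicative form.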

