
M²FTrans: Modality-Masked Fusion Transformer for Incomplete Multi-Modality Brain Tumor Segmentation

Authors

Shi Junjie, Yu Li, Cheng Qimin, Yang Xin, Cheng Kwang-Ting, Yan Zengqiang

Publication

IEEE J Biomed Health Inform. 2023 Oct 20. doi: 10.1109/JBHI.2023.3326151. Epub ahead of print.

Abstract

Brain tumor segmentation is a fundamental task, and existing approaches usually rely on multi-modality magnetic resonance imaging (MRI) for accurate segmentation. However, missing/incomplete modalities are common in clinical practice and severely degrade segmentation performance, and existing fusion strategies for incomplete multi-modality brain tumor segmentation are far from ideal. In this work, we propose a novel framework named M²FTrans to explore and fuse cross-modality features through modality-masked fusion transformers under various incomplete multi-modality settings. Since vanilla self-attention is sensitive to missing tokens/inputs, both learnable fusion tokens and masked self-attention are introduced to stably build long-range dependencies across modalities while remaining flexible enough to learn from incomplete modalities. In addition, to avoid bias toward certain dominant modalities, modality-specific features are further re-weighted through spatial weight attention and channel-wise fusion transformers for feature redundancy reduction and modality re-balancing. In this way, the fusion strategy in M²FTrans is more robust to missing modalities. Experimental results on the widely used BraTS2018, BraTS2020, and BraTS2021 datasets demonstrate the effectiveness of M²FTrans, which outperforms state-of-the-art approaches by large margins under various incomplete-modality settings for brain tumor segmentation. Code is available at https://github.com/Jun-Jie-Shi/M2FTrans.
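The core idea of masked self-attention over modality tokens can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the identity Q/K/V projections, the token layout (one fusion token followed by four modality tokens), and the function names are illustrative assumptions. What it shows is the key mechanism described in the abstract: attention logits for tokens from missing modalities are masked to negative infinity, so the softmax redistributes attention over the available tokens and the output stays well-defined regardless of which modalities are present.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def masked_self_attention(tokens, present):
    """Self-attention that ignores tokens from missing modalities.

    tokens:  (n, d) array of stacked fusion + modality tokens.
    present: (n,) boolean mask; False marks tokens whose modality is missing.
    """
    n, d = tokens.shape
    # Hypothetical identity projections stand in for learned Q/K/V weights.
    q, k, v = tokens, tokens, tokens
    logits = q @ k.T / np.sqrt(d)
    # Mask out columns of missing-modality tokens so no attention flows to them;
    # softmax then renormalizes over the remaining (present) tokens only.
    logits[:, ~present] = -np.inf
    attn = softmax(logits, axis=-1)
    return attn @ v

# Toy setting: 1 learnable fusion token + 4 modality tokens
# (e.g. T1, T1ce, T2, FLAIR), with the T1ce token missing at inference.
rng = np.random.default_rng(0)
tokens = rng.standard_normal((5, 8))
present = np.array([True, True, False, True, True])  # fusion token always present
out = masked_self_attention(tokens, present)
```

Because the fusion token is always marked present, it provides a stable anchor for cross-modality aggregation under any subset of available inputs; the output is finite for every row as long as at least one token remains unmasked.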

