
Multimodal Transformer for Accelerated MR Imaging.

Author Information

Feng Chun-Mei, Yan Yunlu, Chen Geng, Xu Yong, Hu Ying, Shao Ling, Fu Huazhu

Publication Information

IEEE Trans Med Imaging. 2023 Oct;42(10):2804-2816. doi: 10.1109/TMI.2022.3180228. Epub 2023 Oct 2.

Abstract

Accelerated multi-modal magnetic resonance (MR) imaging is a new and effective solution for fast MR imaging, providing superior performance in restoring the target modality from its undersampled counterpart with guidance from an auxiliary modality. However, existing works simply incorporate the auxiliary modality as prior information, without investigating in depth how the two modalities should be fused. Further, they usually rely on convolutional neural networks (CNNs), whose intrinsic locality limits their ability to capture long-range dependencies. To this end, we propose a multi-modal transformer (MTrans), which is capable of transferring multi-scale features from the target modality to the auxiliary modality, for accelerated MR imaging. To capture deep multi-modal information, our MTrans utilizes an improved multi-head attention mechanism, named the cross attention module, which absorbs features from the auxiliary modality that contribute to the target modality. Our framework provides three appealing benefits: (i) MTrans uses an improved transformer for multi-modal MR imaging, capturing more global information than existing CNN-based methods. (ii) A new cross attention module exploits the useful information in each modality at different scales: small patches in the target modality preserve fine details, while large patches in the auxiliary modality capture high-level contextual features from larger regions and effectively supplement the target modality. (iii) We evaluate MTrans on various accelerated multi-modal MR imaging tasks, e.g., MR image reconstruction and super-resolution, where it outperforms state-of-the-art methods on fastMRI and real-world clinical datasets.
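To make the cross-attention fusion idea concrete, the following is a minimal PyTorch sketch of the general mechanism described in the abstract: tokens from the target modality (small patches) act as queries and attend to tokens from the auxiliary modality (large patches) as keys and values, so the target branch absorbs contextual features from the auxiliary branch. This is an illustrative sketch under assumed shapes and module names (CrossAttentionFusion, dim, num_heads are hypothetical), not the authors' MTrans implementation.

```python
import torch
import torch.nn as nn


class CrossAttentionFusion(nn.Module):
    """Hypothetical sketch of cross-attention fusion between two modalities.

    Queries come from the target-modality tokens (small patches); keys and
    values come from the auxiliary-modality tokens (large patches). This is
    an assumption-based illustration, not the paper's exact architecture.
    """

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm_tgt = nn.LayerNorm(dim)
        self.norm_aux = nn.LayerNorm(dim)
        # Standard multi-head attention reused as cross attention:
        # query = target tokens, key/value = auxiliary tokens.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_out = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, tgt_tokens: torch.Tensor, aux_tokens: torch.Tensor) -> torch.Tensor:
        # tgt_tokens: (B, N_t, dim) small-patch tokens from the undersampled target modality
        # aux_tokens: (B, N_a, dim) large-patch tokens from the auxiliary modality
        q = self.norm_tgt(tgt_tokens)
        kv = self.norm_aux(aux_tokens)
        fused, _ = self.attn(query=q, key=kv, value=kv)
        x = tgt_tokens + fused                      # residual: absorb auxiliary context
        x = x + self.mlp(self.norm_out(x))          # position-wise feed-forward
        return x


if __name__ == "__main__":
    # Example: 256 target tokens (small patches) attend to 64 auxiliary tokens (large patches).
    fusion = CrossAttentionFusion(dim=128)
    tgt = torch.randn(2, 256, 128)
    aux = torch.randn(2, 64, 128)
    print(fusion(tgt, aux).shape)  # torch.Size([2, 256, 128])
```

Using fewer, larger patches for the auxiliary modality and more, smaller patches for the target modality (as in the example shapes above) mirrors the abstract's multi-scale design: the auxiliary branch supplies coarse contextual cues while the target branch retains fine detail.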

