School of Computer and Information Technology, Beijing Jiaotong University, Beijing, 100044, China.
Academy for Engineering and Technology, Fudan University, Shanghai, 200433, China.
Comput Biol Med. 2023 May;157:106769. doi: 10.1016/j.compbiomed.2023.106769. Epub 2023 Mar 9.
Image fusion techniques have been widely used for multi-modal medical image fusion tasks. Most existing methods aim to improve the overall quality of the fused image and do not focus on the more important textural details and contrast between lesion tissues in the regions of interest (ROIs). This can distort important information in tumor ROIs and thus limits the applicability of the fused images in clinical practice. To improve the fusion quality of ROIs relevant to medical implications, we propose a multi-modal MRI fusion generative adversarial network (BTMF-GAN) for the task of multi-modal MRI fusion of brain tumors. Unlike existing deep learning approaches, which focus on improving the global quality of the fused image, the proposed BTMF-GAN aims to achieve a balance between tissue details and structural contrasts in brain tumors, the region of interest crucial to many medical applications. Specifically, we employ a generator with a U-shaped nested structure and residual U-blocks (RSU) to enhance multi-scale feature extraction. To enhance and recalibrate the encoder features, a multi-receptive-field adaptive transformer feature enhancement module (MRF-ATFE) is used between the encoder and the decoder in place of skip connections. To increase contrast between tumor tissues in the fused image, a mask-part block is introduced to fragment the source images and the fused image, based on which we propose a novel salient loss function. Qualitative and quantitative analyses of the results on public and clinical datasets demonstrate the superiority of the proposed approach over many other commonly used fusion methods.
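The mask-based salient loss described above can be illustrated with a minimal sketch. This is an assumption-laden reconstruction, not the paper's implementation: it assumes the mask-part block yields a binary tumor mask, and that the salient loss combines a mask-weighted reconstruction term over the tumor ROI with one over the background. The function names, the L1 distance, and the weights `w_roi`/`w_bg` are all hypothetical.

```python
# Hypothetical sketch of a mask-based salient loss for image fusion.
# Assumes: images as 2D lists of floats, `mask` as a 2D list of 0/1
# where 1 marks the tumor ROI. The L1 distance and weighting scheme
# are illustrative choices, not taken from the paper.

def masked_l1(fused, source, mask, keep):
    """Mean absolute difference over pixels where mask == keep."""
    total, count = 0.0, 0
    for f_row, s_row, m_row in zip(fused, source, mask):
        for f, s, m in zip(f_row, s_row, m_row):
            if m == keep:
                total += abs(f - s)
                count += 1
    return total / count if count else 0.0

def salient_loss(fused, src_a, src_b, mask, w_roi=2.0, w_bg=1.0):
    """Penalize ROI deviation from src_a more heavily than background
    deviation from src_b, pushing the fused image to preserve tumor
    detail and contrast inside the mask."""
    roi = masked_l1(fused, src_a, mask, keep=1)  # tumor region term
    bg = masked_l1(fused, src_b, mask, keep=0)   # background term
    return w_roi * roi + w_bg * bg
```

The point of the mask-driven split is that the optimizer cannot trade away ROI fidelity for marginal global-quality gains: the tumor region carries its own, more heavily weighted term.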