Liang Nannan
School of Informatics and Engineering, Suzhou University, Suzhou, 234000, China.
Sci Rep. 2024 Apr 4;14(1):7972. doi: 10.1038/s41598-024-58665-9.
Medical image fusion aims to combine multiple images from one or more imaging modalities to support clinical diagnosis and evaluation, and the task has attracted increasing attention. However, most recent medical image fusion methods require prior knowledge, which makes selecting image features difficult. In this paper, we propose a novel deep medical image fusion method based on a deep convolutional neural network (DCNN) that learns image features directly from the original images. Specifically, the source images are first decomposed by low rank representation into principal and salient components. The deep features are then extracted from the decomposed principal components via the DCNN and fused by a weighted-average rule. Next, exploiting the complementarity between the salient components obtained by the low rank representation, a simple yet effective sum rule is designed to fuse the salient components. Finally, the fused result is obtained by reconstructing from the fused principal and salient components. Experimental results demonstrate that the proposed method outperforms several state-of-the-art medical image fusion approaches in terms of both objective indices and visual quality.
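The pipeline described in the abstract can be summarized in a short sketch. The following is a minimal illustration under stated assumptions, not the authors' implementation: a truncated SVD stands in for the paper's low rank representation decomposition, a fixed 3x3 box filter of local activity stands in for the DCNN feature extractor, and all function names are hypothetical.

```python
# Minimal sketch of the described fusion pipeline, assuming:
#  - a truncated-SVD low-rank approximation as a stand-in for the
#    low rank representation (LRR) decomposition,
#  - a fixed 3x3 box filter of |img| as a stand-in for DCNN features,
#  - feature activity maps driving the weighted-average rule.
# Function names are illustrative, not from the paper.
import numpy as np

def decompose(img, rank=8):
    """Split an image into principal (low-rank) and salient (residual) parts."""
    u, s, vt = np.linalg.svd(img, full_matrices=False)
    principal = (u[:, :rank] * s[:rank]) @ vt[:rank]
    salient = img - principal
    return principal, salient

def features(img):
    """Stand-in 'deep' features: local activity via a 3x3 box filter of |img|."""
    pad = np.pad(np.abs(img), 1, mode="edge")
    h, w = img.shape
    return sum(pad[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0

def fuse(img_a, img_b, rank=8, eps=1e-8):
    pa, sa = decompose(img_a, rank)
    pb, sb = decompose(img_b, rank)
    # Weighted-average rule on principal components; weights from feature activity.
    wa, wb = features(pa), features(pb)
    principal = (wa * pa + wb * pb) / (wa + wb + eps)
    # Simple sum rule on the complementary salient components.
    salient = sa + sb
    # Reconstruction: recombine the fused principal and salient components.
    return principal + salient

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a, b = rng.random((64, 64)), rng.random((64, 64))
    print(fuse(a, b).shape)
```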