Sengan Sudhakar, Gugulothu Praveen, Alroobaea Roobaea, Webber Julian L, Mehbodniya Abolfazl, Yousef Amr
Department of Computer Science and Engineering, PSN College of Engineering and Technology, Tirunelveli, Tamil Nadu, 627152, India.
Department of Computer Science and Engineering, Siddhartha Institute of Technology & Science (SITS), Narapally, Hyderabad, Telangana, 500088, India.
Sci Rep. 2025 Aug 12;15(1):29472. doi: 10.1038/s41598-025-13862-y.
Multi-Modal Medical Image Fusion (MMMIF) has become increasingly important in clinical applications, as it enables the integration of complementary information from different imaging modalities to support more accurate diagnosis and treatment planning. The primary objective of Medical Image Fusion (MIF) is to generate a fused image that retains the most informative features from the Source Images (SI), thereby enhancing the reliability of clinical decision-making systems. However, due to inherent limitations of individual imaging modalities (such as poor spatial resolution in functional images or low contrast in anatomical scans), fused images can suffer from information degradation or distortion. To address these limitations, this study proposes a novel fusion framework that integrates the Non-Subsampled Shearlet Transform (NSST) with a Convolutional Neural Network (CNN) for effective sub-band enhancement and image reconstruction. Initially, each source image is decomposed into Low-Frequency Coefficients (LFC) and multiple High-Frequency Coefficients (HFC) using NSST. The proposed Concurrent Denoising and Enhancement Network (CDEN) is then applied to these sub-bands to suppress noise and enhance critical structural details. The enhanced LFCs are fused using an AlexNet-based activity-level fusion model, while the enhanced HFCs are combined using a Pulse Coupled Neural Network (PCNN) guided by a Novel Sum-Modified Laplacian (NSML) metric. Finally, the fused image is reconstructed via Inverse-NSST (I-NSST). Experimental results demonstrate that the proposed method outperforms existing fusion algorithms, achieving approximately 16.5% higher performance on the QAB/F (edge preservation) metric, along with strong results across both subjective visual assessments and objective quality indices.
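The HFC fusion step is guided by an activity measure derived from the Sum-Modified Laplacian. The sketch below illustrates the classical SML focus measure and a simple per-pixel "choose the more active coefficient" fusion rule; it is a minimal NumPy illustration of the underlying idea only, not the paper's NSML variant or its PCNN firing dynamics, and the function names (`sum_modified_laplacian`, `fuse_hfc`) are assumptions for this example.

```python
import numpy as np

def modified_laplacian(img):
    """Classical modified Laplacian: |2I - I_left - I_right| + |2I - I_up - I_down|."""
    img = img.astype(np.float64)
    p = np.pad(img, 1, mode="edge")
    ml_x = np.abs(2.0 * p[1:-1, 1:-1] - p[1:-1, :-2] - p[1:-1, 2:])
    ml_y = np.abs(2.0 * p[1:-1, 1:-1] - p[:-2, 1:-1] - p[2:, 1:-1])
    return ml_x + ml_y

def sum_modified_laplacian(img, radius=1):
    """Sum the modified Laplacian over a (2r+1) x (2r+1) window at each pixel."""
    ml = modified_laplacian(img)
    p = np.pad(ml, radius, mode="edge")
    k = 2 * radius + 1
    out = np.zeros_like(ml)
    for dy in range(k):          # sliding-window sum by shifted accumulation
        for dx in range(k):
            out += p[dy:dy + ml.shape[0], dx:dx + ml.shape[1]]
    return out

def fuse_hfc(hfc_a, hfc_b, radius=1):
    """Per pixel, keep the high-frequency coefficient with higher SML activity.

    (In the paper, this activity map would drive a PCNN rather than a
    direct maximum-selection rule; this is a simplified stand-in.)
    """
    act_a = sum_modified_laplacian(hfc_a, radius)
    act_b = sum_modified_laplacian(hfc_b, radius)
    return np.where(act_a >= act_b, hfc_a, hfc_b)
```

A flat sub-band yields zero activity everywhere, so regions with structure (edges, texture) in either source dominate the fused coefficient map, which is the behavior the edge-preservation metric QAB/F rewards.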