
Multimodal medical image fusion combining saliency perception and generative adversarial network.

Author Information

Albekairi Mohammed, Mohamed Mohamed Vall O, Kaaniche Khaled, Abbas Ghulam, Alanazi Meshari D, Alanazi Turki M, Emara Ahmed

Affiliations

Department of Electrical Engineering, College of Engineering, Jouf University, Sakakah, 72388, Saudi Arabia.

Department of Computer Engineering and Networks, College of Computer and Information Sciences, Jouf University, Sakakah, 72388, Saudi Arabia.

Publication Information

Sci Rep. 2025 Mar 27;15(1):10609. doi: 10.1038/s41598-025-95147-y.

Abstract

Multimodal medical image fusion is crucial for enhancing diagnostic accuracy by integrating complementary information from different imaging modalities. Current fusion techniques face challenges in effectively combining heterogeneous features while preserving critical diagnostic information. This paper presents the Temporal Decomposition Network (TDN), a novel deep learning architecture that optimizes multimodal medical image fusion through feature-level temporal analysis and adversarial learning mechanisms. The TDN architecture incorporates two key components: a salient perception model for discriminative feature extraction and a generative adversarial network for temporal feature matching. The salient perception model identifies and classifies distinct pixel distributions across different imaging modalities, while the adversarial component facilitates accurate feature mapping and fusion. This approach enables precise temporal decomposition of heterogeneous features and robust quality assessment of fused regions. Experimental validation on diverse medical image datasets, encompassing multiple modalities and image dimensions, demonstrates the TDN's superior performance. Compared to state-of-the-art methods, the framework achieves an 11.378% improvement in fusion accuracy and a 12.441% improvement in precision. These results indicate significant potential for clinical applications, particularly in radiological diagnosis, surgical planning, and medical image analysis, where multimodal visualization is critical for accurate interpretation and decision-making.
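
The abstract gives no implementation details, but a minimal PyTorch sketch of the general idea it describes — a generator that fuses two modalities conditioned on saliency maps, trained adversarially against a discriminator — might look as follows. Everything here is an illustrative assumption, not the paper's TDN: the gradient-magnitude saliency proxy, the saliency-weighted pseudo-reference used as the discriminator's "real" input, the network shapes, and the loss weight of 10.0 are all placeholders.

```python
import torch
import torch.nn as nn

def saliency_map(x: torch.Tensor) -> torch.Tensor:
    """Crude saliency proxy: normalized local gradient magnitude.
    (A stand-in for the paper's salient perception model.)"""
    gx = x[..., :, 1:] - x[..., :, :-1]
    gy = x[..., 1:, :] - x[..., :-1, :]
    g = nn.functional.pad(gx.abs(), (0, 1)) + nn.functional.pad(gy.abs(), (0, 0, 0, 1))
    return g / (g.amax(dim=(-2, -1), keepdim=True) + 1e-8)

class FusionGenerator(nn.Module):
    """Fuses two single-channel modalities conditioned on their saliency maps."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, a, b):
        sa, sb = saliency_map(a), saliency_map(b)
        return self.net(torch.cat([a, b, sa, sb], dim=1))

class Discriminator(nn.Module):
    """Scores a fused image as real/fake (single logit per image)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )
    def forward(self, x):
        return self.net(x)

def train_step(G, D, opt_g, opt_d, a, b):
    """One adversarial step: standard GAN loss plus a saliency-weighted
    content loss pulling the fused image toward each source modality."""
    bce = nn.functional.binary_cross_entropy_with_logits
    sa, sb = saliency_map(a), saliency_map(b)
    fused = G(a, b)
    # "Real" target for D: a saliency-weighted blend of the sources
    # (a crude reference fusion; purely an assumption of this sketch).
    ref = (sa * a + sb * b) / (sa + sb + 1e-8)
    real_logits, fake_logits = D(ref), D(fused.detach())
    d_loss = bce(real_logits, torch.ones_like(real_logits)) + \
             bce(fake_logits, torch.zeros_like(fake_logits))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: fool D while preserving the salient content of each source.
    content = (sa * (fused - a).abs() + sb * (fused - b).abs()).mean()
    g_logits = D(fused)
    g_loss = bce(g_logits, torch.ones_like(g_logits)) + 10.0 * content
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Usage with random stand-in batches (e.g., registered MRI/CT slice pairs):
G, D = FusionGenerator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
a, b = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
print(train_step(G, D, opt_g, opt_d, a, b))
```

The saliency-weighted content loss is what makes this a saliency-aware fusion rather than a plain image-to-image GAN: regions a modality marks as salient dominate the reconstruction penalty for that modality, so the generator is pushed to keep the most diagnostic content of each source.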


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2d3c/11950352/60c341b7e42a/41598_2025_95147_Fig1_HTML.jpg
