
Interactive Multi-scale Fusion: Advancing Brain Tumor Detection Through the Trans-IMSM Model

Authors

Durairaj Vasanthi, Uthirapathy Palani

Affiliation

Department of ECE, IFET College of Engineering, Villupuram, Tamil Nadu, India.

Publication

J Imaging Inform Med. 2025 Apr;38(2):757-774. doi: 10.1007/s10278-024-01222-7. Epub 2024 Aug 15.

Abstract

Multi-modal medical image (MI) fusion generates composite images that combine complementary features from images acquired under different conditions, helping physicians diagnose disease accurately. Hence, this research proposes a novel multi-modal MI fusion model, the guided filter-based interactive multi-scale and multi-modal transformer (Trans-IMSM) fusion approach, to produce high-quality computed tomography-magnetic resonance imaging (CT-MRI) fused images for brain tumor detection. The input CT and MRI images are drawn from a CT and MRI brain scan dataset. First, data preprocessing is applied to the input images to improve image quality and generalization ability for further analysis. The preprocessed CT and MRI images are then decomposed into detail and base components using a guided filter-based MI decomposition approach, which involves two phases: acquiring the image guidance and decomposing the images with the guided filter. A Canny operator extracts image guidance containing robust edges from the CT and MRI images, and the guided filter decomposes the guidance and preprocessed images. The Trans-IMSM model then fuses the detail components, while a weighting approach fuses the base components. The fused detail and base components are subsequently processed through a gated fusion and reconstruction network to generate the final fused images for brain tumor detection. Extensive tests were carried out to evaluate the Trans-IMSM method's efficacy. The evaluation results demonstrate its robustness and effectiveness, achieving an accuracy of 98.64% and an SSIM of 0.94.
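The guided filter-based decomposition step described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the filter follows the classic box-filter formulation of the guided filter, the `radius` and `eps` values are placeholders, and the simple convex weight in `fuse_bases` stands in for the paper's base-component weighting approach (the Canny-derived guidance image is simply passed in as `guide`).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Edge-preserving smoothing of `src` steered by `guide`
    (classic guided-filter formulation). Inputs are float arrays in [0, 1]."""
    size = 2 * radius + 1
    mean_I = uniform_filter(guide, size)          # local mean of guidance
    mean_p = uniform_filter(src, size)            # local mean of source
    corr_Ip = uniform_filter(guide * src, size)
    corr_II = uniform_filter(guide * guide, size)
    var_I = corr_II - mean_I * mean_I             # local variance of guidance
    cov_Ip = corr_Ip - mean_I * mean_p            # local covariance
    a = cov_Ip / (var_I + eps)                    # per-window linear coefficients
    b = mean_p - a * mean_I
    mean_a = uniform_filter(a, size)
    mean_b = uniform_filter(b, size)
    return mean_a * guide + mean_b

def decompose(img, guide):
    """Split an image into a smooth base layer and a residual detail layer."""
    base = guided_filter(guide, img)
    detail = img - base                           # base + detail reconstructs img
    return base, detail

def fuse_bases(base_ct, base_mri, w=0.5):
    """Weighted fusion of the two base layers (placeholder weight)."""
    return w * base_ct + (1 - w) * base_mri
```

In this sketch the detail layers would then go to the transformer-based fusion stage, while the fused base and detail components feed the gated fusion and reconstruction network.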



