Moghtaderi Shiva, Einlou Mokarrameh, Wahid Khan A, Lukong Kiven Erique
Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, Saskatchewan S7N 5A9, Canada.
Department of Biochemistry, Microbiology and Immunology, University of Saskatchewan, Saskatoon, Saskatchewan S7N 5E5, Canada.
R Soc Open Sci. 2024 Apr 10;11(4). doi: 10.1098/rsos.231762. eCollection 2024 Apr.
With the rapid development of medical imaging methods, multimodal medical image fusion techniques have caught the interest of researchers. The aim is to preserve information from diverse sensors using various models to generate a single informative image. The main challenge is to strike a trade-off between the spatial and spectral quality of the resulting fused image and the computational efficiency. This article proposes a fast and reliable method for medical image fusion based on a multilevel guided edge-preserving filtering (MLGEPF) decomposition rule. First, each multimodal medical image is divided into three sublayer categories using the MLGEPF decomposition scheme: a small-scale component, a large-scale component and a background component. Secondly, two fusion strategies, a pulse-coupled neural network based on the structure tensor and a maximum-based rule, are applied to combine the three types of layers according to their different properties. Finally, the three types of fused sublayers are combined to create the fused image. A total of 40 pairs of brain images from four separate categories of medical conditions were tested in experiments. The image pairs cover various case studies, including magnetic resonance imaging (MRI), TITc, single-photon emission computed tomography (SPECT) and positron emission tomography (PET). We include a qualitative analysis to demonstrate that the visual contrast between the structures and the surrounding tissue is increased by our proposed method. To further strengthen the visual comparison, we asked a group of observers to compare our method's outputs with those of other methods and score them. Overall, our proposed fusion scheme increased the visual contrast and received positive subjective reviews. Moreover, objective assessment indicators for each category of medical conditions are also included. Our method achieves high evaluation outcomes on the feature mutual information (FMI), sum of correlation of differences (SCD), Qabf and Qy indices. This implies that our fusion algorithm performs better in information preservation and efficient structural and visual information transfer.
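As a rough illustration of the pipeline described in the abstract, the sketch below decomposes each input into small-scale, large-scale and background layers via repeated self-guided filtering, fuses the detail layers with a max-absolute rule (a simplified stand-in for the structure tensor-based pulse-coupled neural network used in the paper) and the background with a maximum-based rule, then sums the fused layers. It assumes co-registered grayscale inputs scaled to [0, 1]; the function names and parameters (guided_filter, decompose, fuse, the radius and eps values) are illustrative choices and are not taken from the authors' implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter


def guided_filter(guide, src, radius, eps):
    """Edge-preserving guided filter (He et al.), box-filter implementation."""
    mean = lambda x: uniform_filter(x, size=2 * radius + 1)
    mean_g, mean_s = mean(guide), mean(src)
    var_g = mean(guide * guide) - mean_g * mean_g
    cov_gs = mean(guide * src) - mean_g * mean_s
    a = cov_gs / (var_g + eps)
    b = mean_s - a * mean_g
    return mean(a) * guide + mean(b)


def decompose(img, radii=(2, 8), eps=1e-3):
    """Split an image into small-scale, large-scale and background layers
    using two rounds of self-guided filtering at increasing scales
    (illustrative parameter choices, not the paper's settings)."""
    smooth1 = guided_filter(img, img, radii[0], eps)          # removes fine detail
    smooth2 = guided_filter(smooth1, smooth1, radii[1], eps)  # removes coarse structure
    small = img - smooth1       # small-scale component
    large = smooth1 - smooth2   # large-scale component
    background = smooth2        # background component
    return small, large, background


def fuse(img_a, img_b):
    """Fuse two co-registered grayscale images in [0, 1]."""
    sa, la, ba = decompose(img_a)
    sb, lb, bb = decompose(img_b)
    # Detail layers: keep the coefficient with larger magnitude
    # (simple stand-in for the structure-tensor PCNN rule).
    small = np.where(np.abs(sa) >= np.abs(sb), sa, sb)
    large = np.where(np.abs(la) >= np.abs(lb), la, lb)
    # Background layer: maximum-based rule, as mentioned in the abstract.
    background = np.maximum(ba, bb)
    return np.clip(small + large + background, 0.0, 1.0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.random((128, 128))  # placeholder for an MRI slice
    b = rng.random((128, 128))  # placeholder for a SPECT/PET slice
    print(fuse(a, b).shape)
```

Note that functional modalities such as SPECT and PET are typically pseudo-colour; a fuller treatment would fuse the luminance channel as above and reinsert the chrominance afterwards, which this grayscale sketch omits.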