Odusami Modupe, Maskeliūnas Rytis, Damaševičius Robertas
Faculty of Informatics, Kaunas University of Technology, 51368 Kaunas, Lithuania.
Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland.
Brain Sci. 2023 Jul 8;13(7):1045. doi: 10.3390/brainsci13071045.
Alzheimer's disease (AD) is a neurological condition that gradually weakens the brain and impairs cognition and memory. Multimodal imaging techniques have become increasingly important in the diagnosis of AD because they provide a more complete picture of the brain changes that occur in AD and can thus help monitor disease progression over time. Medical image fusion is crucial because it combines data from multiple imaging modalities into a single, more easily interpreted output. The present study explores the feasibility of employing Pareto-optimized deep learning methods to fuse Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) images using pre-trained models, namely the Visual Geometry Group (VGG) 11, VGG16, and VGG19 architectures. Morphological operations are carried out on the MRI and PET images using Analyze 14.0 software, after which the PET images are rotated to the desired angle of alignment with the MRI images using the GNU Image Manipulation Program (GIMP). To enhance the network's performance, a transposed convolution layer is applied to the previously extracted feature maps before image fusion; this step generates the feature maps and fusion weights that drive the fusion process. The study assesses the efficacy of the three VGG models in capturing significant features from the MRI and PET data, with the hyperparameters of each model tuned by Pareto optimization. Performance is evaluated on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset using the Structural Similarity Index Measure (SSIM), Peak Signal-to-Noise Ratio (PSNR), Mean Squared Error (MSE), and Entropy (E). Experimental results show that VGG19 outperforms VGG16 and VGG11, achieving average SSIM values of 0.668, 0.802, and 0.664 for the cognitively normal (CN), AD, and mild cognitive impairment (MCI) stages of ADNI (MRI modality), respectively, and average SSIM values of 0.669, 0.815, and 0.660 for the CN, AD, and MCI stages of ADNI (PET modality), respectively.
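For illustration, the following is a minimal PyTorch sketch of the fusion idea the abstract describes: early VGG19 convolutional blocks extract feature maps from each modality, a transposed convolution layer restores their spatial resolution, and per-pixel fusion weights are derived from feature activity. The layer cut-off, the single-channel handling, and the softmax weighting are assumptions for the sketch, not the authors' exact design.

```python
# Sketch of VGG-based MRI/PET fusion with a transposed convolution layer.
# Layer choices and the softmax weighting are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models import vgg19

class VGGFusion(nn.Module):
    def __init__(self):
        super().__init__()
        # Early VGG19 convolutional blocks as a frozen feature extractor
        # (through the second conv block, one max-pool: 128 channels at H/2).
        self.features = vgg19(weights="DEFAULT").features[:9]
        for p in self.features.parameters():
            p.requires_grad = False
        # Transposed convolution restores the feature maps to full resolution
        # before fusion, as the abstract describes.
        self.upsample = nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1)

    def forward(self, mri, pet):
        # Replicate single-channel slices to 3 channels for the VGG stem.
        f_mri = self.upsample(self.features(mri.repeat(1, 3, 1, 1)))
        f_pet = self.upsample(self.features(pet.repeat(1, 3, 1, 1)))
        # Per-pixel fusion weights from feature activity (softmax across modalities).
        a_mri = f_mri.abs().mean(dim=1, keepdim=True)
        a_pet = f_pet.abs().mean(dim=1, keepdim=True)
        w = torch.softmax(torch.cat([a_mri, a_pet], dim=1), dim=1)
        return w[:, :1] * mri + w[:, 1:] * pet

# Usage with dummy single-channel slices (even spatial size assumed).
mri = torch.rand(1, 1, 128, 128)
pet = torch.rand(1, 1, 128, 128)
fused = VGGFusion().eval()(mri, pet)  # -> shape (1, 1, 128, 128)
```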
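The abstract states that the hyperparameters are tuned with Pareto optimization but does not give the procedure. The sketch below shows the generic idea under the assumption of two competing objectives (SSIM, to maximize, and MSE, to minimize) over a small hypothetical search grid; toy_evaluate is a placeholder for actually training and scoring the fusion network.

```python
# Generic sketch of Pareto-based hyperparameter selection.
# The grid and toy_evaluate are hypothetical stand-ins.
import itertools

def dominates(a, b):
    # a dominates b if it is at least as good on every objective and strictly
    # better on at least one (objectives encoded so that higher is better).
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(candidates, evaluate):
    # Keep every candidate that no other candidate dominates.
    scored = [(c, evaluate(c)) for c in candidates]
    return [c for c, s in scored if not any(dominates(t, s) for _, t in scored)]

def toy_evaluate(cfg):
    # Placeholder scoring: returns (SSIM, -MSE) so both entries are maximized.
    ssim = 0.8 - 50 * cfg["lr"] + 0.001 * cfg["batch"]
    mse = 0.01 + 100 * cfg["lr"] ** 2
    return (ssim, -mse)

grid = [{"lr": lr, "batch": b}
        for lr, b in itertools.product([1e-3, 1e-4, 1e-5], [8, 16, 32])]
print(pareto_front(grid, toy_evaluate))  # non-dominated hyperparameter settings
```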
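Finally, a minimal sketch of the four reported evaluation metrics, computed with scikit-image and NumPy. Scoring the fused image against a source image as the reference is an assumption about how the comparisons are taken, not a detail given in the abstract.

```python
# Sketch of the four fusion-quality metrics named in the abstract:
# SSIM, PSNR, MSE, and Entropy (E).
import numpy as np
from skimage.metrics import (structural_similarity, peak_signal_noise_ratio,
                             mean_squared_error)
from skimage.measure import shannon_entropy

def fusion_metrics(fused, reference):
    # Images assumed to be float arrays scaled to [0, 1].
    return {
        "SSIM": structural_similarity(reference, fused, data_range=1.0),
        "PSNR": peak_signal_noise_ratio(reference, fused, data_range=1.0),
        "MSE": mean_squared_error(reference, fused),
        "E": shannon_entropy(fused),  # entropy of the fused image alone
    }

# Example with random images in [0, 1].
rng = np.random.default_rng(0)
ref = rng.random((128, 128))
fused = np.clip(ref + 0.05 * rng.random((128, 128)), 0.0, 1.0)
print(fusion_metrics(fused, ref))
```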