Odusami Modupe, Damaševičius Robertas, Milieškaitė-Belousovienė Egle, Maskeliūnas Rytis
Faculty of Informatics, Kaunas University of Technology, Kaunas, Lithuania.
Department of Intensive Care, University of Health Sciences, Kaunas, Lithuania.
Heliyon. 2024 Jul 15;10(15):e34402. doi: 10.1016/j.heliyon.2024.e34402. eCollection 2024 Aug 15.
The threat posed by Alzheimer's disease (AD) to human health has grown significantly, yet the precise diagnosis and classification of AD stages remain challenging. Neuroimaging methods such as structural magnetic resonance imaging (sMRI) and fluorodeoxyglucose positron emission tomography (FDG-PET) have been used to diagnose and categorize AD; however, the feature selection approaches frequently used to extract additional information from multimodal imaging are prone to errors. This paper proposes fusing sMRI and FDG-PET data with a static pulse-coupled neural network and a Laplacian pyramid. The fused images are then augmented to avoid overfitting and used to train a Mobile Vision Transformer (MViT), which subsequently classifies unfused MRI and FDG-PET images from the Alzheimer's Disease Neuroimaging Initiative (ADNI) and Open Access Series of Imaging Studies (OASIS) datasets into the various stages of AD. The architectural hyperparameters of the MViT are optimized with Quantum Dynamic Optimization for Neural Architecture Search, which ensures a Pareto-optimal solution. The quality of the fused image is measured with the Peak Signal-to-Noise Ratio (PSNR), the Mean Squared Error (MSE), and the Structural Similarity Index Measure (SSIM). The fused image was consistent across all metrics, with an SSIM of 0.64, a PSNR of 35.60, and an MSE of 0.21 relative to the FDG-PET image. For the classification of AD vs. cognitively normal (CN), AD vs. mild cognitive impairment (MCI), and CN vs. MCI, the precision of the proposed method is 94.73%, 92.98%, and 89.36%, respectively. On the ADNI MRI test data, the sensitivity is 90.70%, 90.70%, and 90.91%, while the specificity is 100%, 100%, and 85.71%, respectively.
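As an illustration of the fusion-quality metrics cited above (MSE, PSNR, and SSIM), the following minimal NumPy sketch computes each metric between a source slice and a fused slice. This is not the authors' code; the array names, the 8-bit dynamic range, and the use of a single global SSIM window (rather than the local-window SSIM of standard libraries) are assumptions made for the example.

```python
# Minimal sketch (not the authors' code): MSE, PSNR, and a global SSIM
# between a source image and a fused image, using NumPy only.
import numpy as np

def mse(reference: np.ndarray, fused: np.ndarray) -> float:
    """Mean squared error between the reference and fused images."""
    diff = reference.astype(np.float64) - fused.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(reference: np.ndarray, fused: np.ndarray, data_range: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means less distortion."""
    err = mse(reference, fused)
    return float("inf") if err == 0 else 10.0 * np.log10(data_range ** 2 / err)

def ssim_global(reference: np.ndarray, fused: np.ndarray, data_range: float = 255.0) -> float:
    """Single-window (global) SSIM; library versions average over local windows."""
    x = reference.astype(np.float64)
    y = fused.astype(np.float64)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return float(((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2))
                 / ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))

if __name__ == "__main__":
    # Hypothetical 8-bit grayscale FDG-PET slice and fused sMRI/FDG-PET slice.
    rng = np.random.default_rng(0)
    pet = rng.integers(0, 256, size=(256, 256)).astype(np.uint8)
    noise = rng.integers(-5, 6, size=pet.shape)
    fused = np.clip(pet.astype(np.int16) + noise, 0, 255).astype(np.uint8)
    print(f"MSE={mse(pet, fused):.2f}  PSNR={psnr(pet, fused):.2f} dB  "
          f"SSIM={ssim_global(pet, fused):.3f}")
```

In practice, a windowed SSIM (e.g., skimage.metrics.structural_similarity) is normally reported; the global form above is kept only to make the formula explicit.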