A Meta-Learning-Based Ensemble Model for Explainable Alzheimer's Disease Diagnosis.

Authors

Al-Bakri Fatima Hasan, Bejuri Wan Mohd Yaakob Wan, Al-Andoli Mohamed Nasser, Ikram Raja Rina Raja, Khor Hui Min, Tahir Zulkifli

Affiliations

Faculty of Information and Communication Technology, Universiti Teknikal Malaysia Melaka, Melaka 76100, Malaysia.

Faculty of Artificial Intelligence and Cyber Security, Universiti Teknikal Malaysia Melaka, Melaka 76100, Malaysia.

Publication

Diagnostics (Basel). 2025 Jun 27;15(13):1642. doi: 10.3390/diagnostics15131642.

DOI: 10.3390/diagnostics15131642
PMID: 40647641
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12248535/
Abstract

Artificial intelligence (AI) models for Alzheimer's disease (AD) diagnosis often face the challenge of limited explainability, hindering their clinical adoption. Previous studies have relied on full-scale MRI, which increases unnecessary features, creating a "black-box" problem in current XAI models. This study proposes an explainable ensemble-based diagnostic framework trained on both clinical data and mid-slice axial MRI from the ADNI and OASIS datasets. The methodology involves training an ensemble model that integrates Random Forest, Support Vector Machine, XGBoost, and Gradient Boosting classifiers, with meta-logistic regression used for the final decision. The core contribution lies in the exclusive use of mid-slice MRI images, which highlight the lateral ventricles, thus improving the transparency and clinical relevance of the decision-making process. Our mid-slice approach minimizes unnecessary features and enhances model explainability by design. We achieved state-of-the-art diagnostic accuracy: 99% on OASIS and 97.61% on ADNI using clinical data alone; 99.38% on OASIS and 98.62% on ADNI using only mid-slice MRI; and 99% accuracy when combining both modalities. The findings demonstrated significant progress in diagnostic transparency, as the algorithm consistently linked predictions to observed structural changes in the dilated lateral ventricles of the brain, which serve as a clinically reliable biomarker for AD and can be easily verified by medical professionals. This research presents a step toward more transparent AI-driven diagnostics, bridging the gap between accuracy and explainability in XAI.
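The ensemble described above is a standard stacking architecture: heterogeneous base classifiers whose predictions feed a logistic-regression meta-learner. A minimal sketch with scikit-learn is shown below on synthetic data (the ADNI/OASIS clinical features are not reproduced here, and scikit-learn's GradientBoostingClassifier stands in for XGBoost to keep the sketch dependency-free); it is an illustration of the technique, not the authors' implementation.

```python
# Sketch of a stacking ensemble: Random Forest, SVM, and Gradient Boosting
# base learners, with logistic regression as the meta-learner that makes
# the final decision (as in the paper's architecture).
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for tabular clinical features.
X, y = make_classification(n_samples=300, n_features=10, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
    ],
    final_estimator=LogisticRegression(),  # meta-learner for the final decision
    cv=5,  # base-learner predictions for the meta-learner come from CV folds
)
ensemble.fit(X_tr, y_tr)
acc = ensemble.score(X_te, y_te)
```

Using cross-validated base-learner predictions to train the meta-learner (the `cv` parameter) avoids leaking the base learners' training fit into the meta-level decision.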

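The paper's key design choice is to train on only the middle axial MRI slice, which shows the lateral ventricles, rather than the full 3D volume. A toy sketch of mid-slice extraction is shown below; a random NumPy array stands in for a loaded MRI volume (real volumes would be loaded with a library such as nibabel), and the assumption that the third array axis is the axial/slice axis depends on the volume's orientation.

```python
# Illustrative mid-slice extraction: take the single middle slice along the
# assumed axial axis instead of feeding the whole 3D volume to the model.
import numpy as np

volume = np.random.rand(176, 208, 176)          # stand-in 3D MRI volume
mid_axial = volume[:, :, volume.shape[2] // 2]  # middle 2D axial slice

print(mid_axial.shape)  # (176, 208)
```

Discarding the other slices removes features irrelevant to the ventricle-based biomarker, which is the mechanism the abstract credits for the model's explainability by design.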

Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fa89/12248535/c149e885f1ba/diagnostics-15-01642-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fa89/12248535/24a1def3ee6e/diagnostics-15-01642-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fa89/12248535/ae610ae1f1a0/diagnostics-15-01642-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fa89/12248535/dc7e818c96a3/diagnostics-15-01642-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fa89/12248535/99bb1bb784d3/diagnostics-15-01642-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fa89/12248535/75dac7fb2cb2/diagnostics-15-01642-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fa89/12248535/645099f0b3d4/diagnostics-15-01642-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fa89/12248535/ca3c65ef09ce/diagnostics-15-01642-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fa89/12248535/7eac41650cc4/diagnostics-15-01642-g009.jpg

Similar Articles

1
A Meta-Learning-Based Ensemble Model for Explainable Alzheimer's Disease Diagnosis.
Diagnostics (Basel). 2025 Jun 27;15(13):1642. doi: 10.3390/diagnostics15131642.
2
Exploring the Applications of Explainability in Wearable Data Analytics: Systematic Literature Review.
J Med Internet Res. 2024 Dec 24;26:e53863. doi: 10.2196/53863.
3
Comparison of Two Modern Survival Prediction Tools, SORG-MLA and METSSS, in Patients With Symptomatic Long-bone Metastases Who Underwent Local Treatment With Surgery Followed by Radiotherapy and With Radiotherapy Alone.
Clin Orthop Relat Res. 2024 Dec 1;482(12):2193-2208. doi: 10.1097/CORR.0000000000003185. Epub 2024 Jul 23.
4
Synergizing advanced algorithm of explainable artificial intelligence with hybrid model for enhanced brain tumor detection in healthcare.
Sci Rep. 2025 Jul 1;15(1):20489. doi: 10.1038/s41598-025-07524-2.
5
Stabilizing machine learning for reproducible and explainable results: A novel validation approach to subject-specific insights.
Comput Methods Programs Biomed. 2025 Jun 21;269:108899. doi: 10.1016/j.cmpb.2025.108899.
6
Advancing personalized healthcare: leveraging explainable AI for BPPV risk assessment.
Health Inf Sci Syst. 2024 Nov 24;13(1):1. doi: 10.1007/s13755-024-00317-3. eCollection 2025 Dec.
7
A Responsible Framework for Assessing, Selecting, and Explaining Machine Learning Models in Cardiovascular Disease Outcomes Among People With Type 2 Diabetes: Methodology and Validation Study.
JMIR Med Inform. 2025 Jun 27;13:e66200. doi: 10.2196/66200.
8
Artificial intelligence for diagnosing exudative age-related macular degeneration.
Cochrane Database Syst Rev. 2024 Oct 17;10(10):CD015522. doi: 10.1002/14651858.CD015522.pub2.
9
Signs and symptoms to determine if a patient presenting in primary care or hospital outpatient settings has COVID-19.
Cochrane Database Syst Rev. 2022 May 20;5(5):CD013665. doi: 10.1002/14651858.CD013665.pub3.
10
Predicting cognitive decline: Deep-learning reveals subtle brain changes in pre-MCI stage.
J Prev Alzheimers Dis. 2025 May;12(5):100079. doi: 10.1016/j.tjpad.2025.100079. Epub 2025 Feb 6.

Cited By

1
A Feature-Augmented Explainable Artificial Intelligence Model for Diagnosing Alzheimer's Disease from Multimodal Clinical and Neuroimaging Data.
Diagnostics (Basel). 2025 Aug 17;15(16):2060. doi: 10.3390/diagnostics15162060.

References

1
AI-based tool for early detection of Alzheimer's disease.
Heliyon. 2024 Apr 9;10(8):e29375. doi: 10.1016/j.heliyon.2024.e29375. eCollection 2024 Apr 30.
2
Interpreting artificial intelligence models: a systematic review on the application of LIME and SHAP in Alzheimer's disease detection.
Brain Inform. 2024 Apr 5;11(1):10. doi: 10.1186/s40708-024-00222-1.
3
Neuron-level explainable AI for Alzheimer's Disease assessment from fundus images.
Sci Rep. 2024 Apr 2;14(1):7710. doi: 10.1038/s41598-024-58121-8.
4
Diet Pattern Analysis in Alzheimer's Disease Implicates Gender Differences in Folate-B12-Homocysteine Axis on Cognitive Outcomes.
Nutrients. 2024 Mar 4;16(5):733. doi: 10.3390/nu16050733.
5
Artificial Intelligence and Technology Collaboratories: Innovating aging research and Alzheimer's care.
Alzheimers Dement. 2024 Apr;20(4):3074-3079. doi: 10.1002/alz.13710. Epub 2024 Feb 7.
6
Explainable AI-based Alzheimer's prediction and management using multimodal data.
PLoS One. 2023 Nov 16;18(11):e0294253. doi: 10.1371/journal.pone.0294253. eCollection 2023.
7
Optimized Dropkey-Based Grad-CAM: Toward Accurate Image Feature Localization.
Sensors (Basel). 2023 Oct 10;23(20):8351. doi: 10.3390/s23208351.
8
XGBoost-SHAP-based interpretable diagnostic framework for alzheimer's disease.
BMC Med Inform Decis Mak. 2023 Jul 25;23(1):137. doi: 10.1186/s12911-023-02238-9.
9
PPAD: a deep learning architecture to predict progression of Alzheimer's disease.
Bioinformatics. 2023 Jun 30;39(Suppl 1):i149-i157. doi: 10.1093/bioinformatics/btad249.
10
AHANet: Adaptive Hybrid Attention Network for Alzheimer's Disease Classification Using Brain Magnetic Resonance Imaging.
Bioengineering (Basel). 2023 Jun 12;10(6):714. doi: 10.3390/bioengineering10060714.