

An Explainable Approach to Parkinson's Diagnosis Using the Contrastive Explanation Method-CEM.

Author Information

Balikci Cicek Ipek, Kucukakcali Zeynep, Deniz Birgul, Algül Fatma Ebru

Affiliations

Department of Biostatistics and Medical Informatics, Faculty of Medicine, Inonu University, 44280 Malatya, Turkey.

Department of Hematology, Faculty of Medicine, Inonu University, 44280 Malatya, Turkey.

Publication Information

Diagnostics (Basel). 2025 Aug 18;15(16):2069. doi: 10.3390/diagnostics15162069.

Abstract

Parkinson's disease (PD) is a progressive neurodegenerative disorder that requires early and accurate diagnosis. This study aimed to classify individuals with and without PD using volumetric brain MRI data and to improve model interpretability using explainable artificial intelligence (XAI) techniques. This retrospective study included 79 participants (39 PD patients, 40 controls) recruited at Inonu University Turgut Ozal Medical Center between 2013 and 2025. A deep neural network (DNN) was developed using a multilayer perceptron architecture with six hidden layers and ReLU activation functions. Seventeen volumetric brain features were used as the input. To ensure robust evaluation and prevent overfitting, a stratified five-fold cross-validation was applied, maintaining class balance in each fold. Model transparency was explored using two complementary XAI techniques: the Contrastive Explanation Method (CEM) and Local Interpretable Model-Agnostic Explanations (LIME). CEM highlights features that support or could alter the current classification, while LIME provides instance-based feature attributions. The DNN model achieved high diagnostic performance with 94.1% accuracy, 98.3% specificity, 90.2% sensitivity, and an AUC of 0.97. The CEM analysis suggested that reduced hippocampal volume was a key contributor to PD classification (-0.156 PP), whereas higher volumes in the brainstem and hippocampus were associated with the control class (+0.035 and +0.150 PP, respectively). The LIME results aligned with these findings, revealing consistent feature importance (mean = 0.1945) and faithfulness (0.0269). Comparative analyses showed different volumetric patterns between groups and confirmed the DNN's superiority over conventional machine learning models such as SVM, logistic regression, KNN, and AdaBoost. 
This study demonstrates that a deep learning model, enhanced with CEM and LIME, can provide both high diagnostic accuracy and interpretable insights for PD classification, supporting the integration of explainable AI in clinical neuroimaging.
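The evaluation pipeline described above (a multilayer perceptron with six ReLU hidden layers on 17 volumetric features, scored with stratified five-fold cross-validation) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the hidden-layer widths, optimizer settings, and the synthetic placeholder data are assumptions, since the abstract specifies only the layer count, activation function, input dimensionality, and validation scheme.

```python
# Illustrative sketch of the reported setup (assumptions noted inline).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Placeholder data standing in for the study's cohort:
# 79 participants x 17 volumetric brain-MRI features (real values not public here).
rng = np.random.default_rng(0)
X = rng.normal(size=(79, 17))
y = np.array([1] * 39 + [0] * 40)  # 39 PD patients, 40 controls

# Six hidden layers with ReLU activations, per the abstract;
# the width of 64 units per layer is an assumption for illustration.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64,) * 6, activation="relu",
                  max_iter=500, random_state=0),
)

# Stratified five-fold CV preserves the PD/control ratio in every fold,
# matching the class-balance requirement stated in the abstract.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
accuracies = []
for train_idx, test_idx in cv.split(X, y):
    model.fit(X[train_idx], y[train_idx])
    accuracies.append(model.score(X[test_idx], y[test_idx]))

print(f"mean CV accuracy over 5 folds: {np.mean(accuracies):.3f}")
```

On the synthetic placeholder data the accuracy is near chance; the point of the sketch is the fold structure, not the reported 94.1% figure, which requires the actual MRI volumetrics.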


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5bf1/12385503/2e33b7ef70e5/diagnostics-15-02069-g001.jpg
