Explainable Diagnosis Prediction through Neuro-Symbolic Integration.

Author Information

Lu Qiuhao, Li Rui, Sagheb Elham, Wen Andrew, Wang Jinlian, Wang Liwei, Fan Jungwei W, Liu Hongfang

Affiliations

McWilliams School of Biomedical Informatics, The University of Texas Health Science Center, Houston, TX, USA.

Department of Artificial Intelligence and Informatics, Mayo Clinic, Rochester, MN, USA.

Publication Information

AMIA Jt Summits Transl Sci Proc. 2025 Jun 10;2025:332-341. eCollection 2025.

Abstract

Diagnosis prediction is a critical task in healthcare, where timely and accurate identification of medical conditions can significantly impact patient outcomes. Traditional machine learning and deep learning models have achieved notable success in this domain but often lack interpretability, which is a crucial requirement in clinical settings. In this study, we explore the use of neuro-symbolic methods, specifically Logical Neural Networks (LNNs), to develop explainable models for diagnosis prediction. Essentially, we design and implement LNN-based models that integrate domain-specific knowledge through logical rules with learnable weights and thresholds. Our models, particularly M_multi-pathway and M_comprehensive, demonstrate superior performance over traditional models such as Logistic Regression, SVM, and Random Forest, achieving higher accuracy (up to 80.52%) and AUROC scores (up to 0.8457) in the case study of diabetes prediction. The learned weights and thresholds within the LNN models provide direct insights into feature contributions, enhancing interpretability without compromising predictive power. These findings highlight the potential of neuro-symbolic approaches in bridging the gap between accuracy and explainability in healthcare AI applications. By offering transparent and adaptable diagnostic models, our work contributes to the advancement of precision medicine and supports the development of equitable healthcare solutions. Future research will focus on extending these methods to larger and more diverse datasets to further validate their applicability across different medical conditions and populations.
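
The abstract does not include implementation details, but the core mechanism it describes — logical rules whose antecedents carry learnable weights and a learnable threshold — can be illustrated. Below is a minimal PyTorch sketch of a weighted real-valued AND neuron in the style of the LNN literature, using the weighted Lukasiewicz activation clamp(beta - sum_i w_i * (1 - x_i), 0, 1); the rule and feature names are hypothetical and are not the authors' actual diabetes model.

```python
# Minimal sketch (not the authors' released code) of an LNN-style weighted
# AND neuron: AND(x) = clamp(beta - sum_i w_i * (1 - x_i), 0, 1), where the
# per-antecedent weights w_i and the threshold beta are learned.
import torch
import torch.nn as nn

class WeightedAnd(nn.Module):
    def __init__(self, n_inputs: int):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(n_inputs))  # learnable antecedent weights
        self.beta = nn.Parameter(torch.tensor(1.0))        # learnable threshold

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x holds truth values in [0, 1], shape (batch, n_inputs)
        w = torch.relu(self.weights)                       # keep weights non-negative
        return torch.clamp(self.beta - (w * (1.0 - x)).sum(dim=-1), 0.0, 1.0)

# Hypothetical rule: high_glucose AND high_bmi -> diabetes, with both
# features pre-scaled to [0, 1] truth values.
rule = WeightedAnd(n_inputs=2)
truth = rule(torch.tensor([[0.9, 0.7]]))  # degree to which the rule fires
```

After training with a standard binary cross-entropy loss, inspecting the learned weights and beta shows how strongly each antecedent must hold for the rule to fire, which is the feature-level interpretability the abstract highlights.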

