Lu Qiuhao, Li Rui, Sagheb Elham, Wen Andrew, Wang Jinlian, Wang Liwei, Fan Jungwei W, Liu Hongfang
McWilliams School of Biomedical Informatics, The University of Texas Health Science Center, Houston, TX, USA.
Department of Artificial Intelligence and Informatics, Mayo Clinic, Rochester, MN, USA.
AMIA Jt Summits Transl Sci Proc. 2025 Jun 10;2025:332-341. eCollection 2025.
Diagnosis prediction is a critical task in healthcare, where timely and accurate identification of medical conditions can significantly impact patient outcomes. Traditional machine learning and deep learning models have achieved notable success in this domain but often lack interpretability, which is a crucial requirement in clinical settings. In this study, we explore the use of neuro-symbolic methods, specifically Logical Neural Networks (LNNs), to develop explainable models for diagnosis prediction. Specifically, we design and implement LNN-based models that integrate domain-specific knowledge through logical rules with learnable weights and thresholds. Our models, particularly M_multi-pathway and M_comprehensive, demonstrate superior performance over traditional models such as Logistic Regression, SVM, and Random Forest, achieving higher accuracy (up to 80.52%) and AUROC scores (up to 0.8457) in a case study of diabetes prediction. The learned weights and thresholds within the LNN models provide direct insights into feature contributions, enhancing interpretability without compromising predictive power. These findings highlight the potential of neuro-symbolic approaches in bridging the gap between accuracy and explainability in healthcare AI applications. By offering transparent and adaptable diagnostic models, our work contributes to the advancement of precision medicine and supports the development of equitable healthcare solutions. Future research will focus on extending these methods to larger and more diverse datasets to further validate their applicability across different medical conditions and populations.
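To make the "logical rules with learnable weights and thresholds" idea concrete, the sketch below implements an LNN-style conjunction neuron in PyTorch following the weighted real-valued logic of Riegel et al. (2020), a common LNN formulation. This is an illustrative assumption, not the authors' implementation; the class name, rule, and fuzzified feature names (e.g., high glucose, high BMI) are hypothetical.

```python
# Minimal sketch of an LNN-style conjunction neuron with a learnable
# threshold (beta) and per-input weights, assuming the weighted
# Lukasiewicz logic of Riegel et al. (2020). Not the paper's code.
import torch
import torch.nn as nn

class LNNConjunction(nn.Module):
    """Weighted AND over fuzzy truth values in [0, 1].

    Output: clamp(beta - sum_i w_i * (1 - x_i), 0, 1), where the
    weights w_i and threshold beta are learned by gradient descent.
    """
    def __init__(self, n_inputs: int):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(n_inputs))
        self.beta = nn.Parameter(torch.tensor(1.0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_inputs) fuzzified truth values of the antecedents
        w = torch.relu(self.weights)         # keep weights non-negative
        slack = (w * (1.0 - x)).sum(dim=-1)  # penalty for false inputs
        return torch.clamp(self.beta - slack, 0.0, 1.0)

# Hypothetical diabetes rule: IF high_glucose AND high_bmi AND older_age
# THEN diabetes, with each antecedent fuzzified into [0, 1].
rule = LNNConjunction(n_inputs=3)
x = torch.tensor([[0.9, 0.7, 0.6]])  # one patient's fuzzified features
print(rule(x))                       # predicted truth value of the rule
```

After training, the magnitude of each learned weight directly indicates how much the corresponding antecedent drives the rule's truth value, which is the mechanism behind the interpretability claim in the abstract.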