
Interpretable clinical prediction via attention-based neural network.

Affiliations

College of Biomedical Engineering and Instrumental Science, Zhejiang University, Hangzhou, China.

School of Industrial Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands.

Publication information

BMC Med Inform Decis Mak. 2020 Jul 9;20(Suppl 3):131. doi: 10.1186/s12911-020-1110-7.

Abstract

BACKGROUND

The interpretability of results predicted by machine learning models is vital, especially in critical fields such as healthcare. With the increasing adoption of electronic health records (EHR) by medical organizations over the last decade, abundant electronic patient data have accumulated, and neural networks and deep learning techniques are gradually being applied to clinical tasks to exploit the huge potential of EHR data. However, typical deep learning models are black boxes: they are not transparent, and their prediction outcomes are difficult to interpret.

METHODS

To remedy this limitation, we propose an attention-based neural network model for interpretable clinical prediction. Specifically, the proposed model employs an attention mechanism that assigns attention weights to critical features, indicating each feature's contribution to the prediction, so that the predictions generated by the neural network model become interpretable.
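The idea of feature-level attention described above can be sketched as follows. This is a minimal illustrative implementation in NumPy, not the authors' actual architecture; all names (`feature_attention`, `predict`, the weight matrices) and the synthetic data are assumptions for demonstration only.

```python
# Minimal sketch (NOT the paper's implementation) of feature-level attention
# for an interpretable binary prediction: the model computes a per-feature
# attention weight for each sample, reweights the input features, and feeds
# the result to a logistic output layer. The attention weights themselves
# serve as the per-patient explanation.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable softmax along the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def feature_attention(x, W_att, b_att):
    """Compute per-feature attention weights (each row sums to 1)."""
    scores = x @ W_att + b_att          # one raw score per input feature
    return softmax(scores)

def predict(x, W_att, b_att, w_out, b_out):
    """Weight features by attention, then apply a logistic output layer."""
    alpha = feature_attention(x, W_att, b_att)   # (n, d) attention weights
    context = alpha * x                          # element-wise reweighting
    logits = context @ w_out + b_out             # (n,)
    prob = 1.0 / (1.0 + np.exp(-logits))         # readmission probability
    return prob, alpha                           # alpha explains each prediction

d = 5                                    # e.g. 5 clinical features (assumed)
x = rng.normal(size=(3, d))              # 3 synthetic "patients"
W_att = rng.normal(size=(d, d))
b_att = np.zeros(d)
w_out = rng.normal(size=d)
b_out = 0.0

prob, alpha = predict(x, W_att, b_att, w_out, b_out)
print(prob.shape, alpha.shape)               # shapes: (3,) and (3, 5)
print(np.allclose(alpha.sum(axis=1), 1.0))   # attention weights sum to 1
```

In a trained model, inspecting `alpha` for a single patient shows which features drove that patient's predicted readmission risk, which is the interpretability mechanism the abstract describes.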

RESULTS

We evaluate the proposed model on a real-world clinical dataset of 736 samples to predict readmissions for heart failure patients. The proposed model achieved 66.7% accuracy and 69.1% AUC, outperforming the baseline models. In addition, we display patient-specific attention weights, which can not only help clinicians understand the prediction outcomes but also assist them in selecting individualized treatment strategies or intervention plans.

CONCLUSIONS

The experimental results demonstrate that equipping the neural network with an attention mechanism improves both prediction performance and interpretability.


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4e9d/7346336/d01f8b9a2318/12911_2020_1110_Fig1_HTML.jpg
