DeepConsensus: Consensus-based Interpretable Deep Neural Networks with Application to Mortality Prediction.

Author Information

Salman Shaeke, Payrovnaziri Seyedeh Neelufar, Liu Xiuwen, Rengifo-Moreno Pablo, He Zhe

Affiliations

Department of Computer Science, Florida State University, FL 32306, USA.

School of Information, Florida State University, FL 32306, USA.

Publication Information

Proc Int Jt Conf Neural Netw. 2020 Jul;2020. doi: 10.1109/ijcnn48605.2020.9206678. Epub 2020 Sep 28.

Abstract

Deep neural networks have achieved remarkable success in various challenging tasks. However, the black-box nature of such networks is not acceptable to critical applications, such as healthcare. In particular, the existence of adversarial examples and their overgeneralization to irrelevant, out-of-distribution inputs with high confidence makes it difficult, if not impossible, to explain decisions by such networks. In this paper, we analyze the underlying mechanism of generalization of deep neural networks and propose an (, ) consensus algorithm which is insensitive to adversarial examples and can reliably reject out-of-distribution samples. Furthermore, the consensus algorithm is able to improve classification accuracy by using multiple trained deep neural networks. To handle the complexity of deep neural networks, we cluster linear approximations of individual models and identify highly correlated clusters among different models to capture feature importance robustly, resulting in improved interpretability. Motivated by the importance of building accurate and interpretable prediction models for healthcare, our experimental results on an ICU dataset show the effectiveness of our algorithm in enhancing both the prediction accuracy and the interpretability of deep neural network models on one-year patient mortality prediction. In particular, while the proposed method maintains similar interpretability as conventional shallow models such as logistic regression, it improves the prediction accuracy significantly.
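The abstract describes a consensus rule over multiple independently trained networks: a prediction is accepted only when enough models agree with sufficient confidence, and inputs that fail to reach consensus (e.g., adversarial or out-of-distribution samples) are rejected. The following is a minimal illustrative sketch of such a rule, not the paper's exact algorithm; the thresholds `min_agree` and `min_conf` are hypothetical parameters chosen for the example.

```python
# Hedged sketch of a consensus decision rule in the spirit of the abstract:
# accept a prediction only when enough independently trained models agree
# with sufficient confidence; otherwise reject the input as unreliable.
# `min_agree` and `min_conf` are illustrative, not the paper's formulation.

from collections import Counter
from typing import List, Optional

def consensus_predict(
    model_probs: List[List[float]],  # one class-probability vector per model
    min_agree: float = 0.8,          # fraction of models that must agree
    min_conf: float = 0.6,           # minimum per-model confidence to count
) -> Optional[int]:
    """Return the consensus class index, or None if there is no consensus."""
    votes = Counter()
    for probs in model_probs:
        conf = max(probs)
        if conf >= min_conf:                 # ignore low-confidence models
            votes[probs.index(conf)] += 1
    if not votes:
        return None                          # reject: no confident model
    label, count = votes.most_common(1)[0]
    if count / len(model_probs) >= min_agree:
        return label
    return None                              # reject: models disagree

# Five models strongly agree on class 1 -> prediction accepted
agree = [[0.1, 0.9], [0.2, 0.8], [0.15, 0.85], [0.1, 0.9], [0.3, 0.7]]
# Models split or unconfident (as on out-of-distribution input) -> rejected
split = [[0.55, 0.45], [0.45, 0.55], [0.5, 0.5], [0.9, 0.1], [0.1, 0.9]]

print(consensus_predict(agree))  # 1
print(consensus_predict(split))  # None
```

Rejection rather than forced classification is what lets an ensemble like this abstain on inputs far from the training distribution instead of extrapolating with unwarranted confidence.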

Similar Articles

Interpretable clinical prediction via attention-based neural network.
BMC Med Inform Decis Mak. 2020 Jul 9;20(Suppl 3):131. doi: 10.1186/s12911-020-1110-7.

Interpretable neural networks: principles and applications.
Front Artif Intell. 2023 Oct 13;6:974295. doi: 10.3389/frai.2023.974295. eCollection 2023.
