Kompa Benjamin, Snoek Jasper, Beam Andrew L
Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA.
Google Brain, Cambridge, MA, USA.
NPJ Digit Med. 2021 Jan 5;4(1):4. doi: 10.1038/s41746-020-00367-3.
There is great excitement that medical artificial intelligence (AI) based on machine learning (ML) can be used to improve decision-making at the patient level in a variety of healthcare settings. However, the quantification and communication of uncertainty for individual predictions are often neglected, even though uncertainty estimates could lead to more principled decision-making and enable machine learning models to automatically or semi-automatically abstain on samples for which there is high uncertainty. In this article, we provide an overview of different approaches to uncertainty quantification and abstention for machine learning and highlight how these techniques could improve the safety and reliability of current ML systems being used in healthcare settings. Effective quantification and communication of uncertainty could help to engender trust with healthcare workers, while providing safeguards against known failure modes of current machine learning approaches. As machine learning becomes further integrated into healthcare environments, the ability to say "I'm not sure" or "I don't know" when uncertain is a necessary capability to enable safe clinical deployment.
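The abstention idea described above can be made concrete with a minimal sketch (not taken from the paper): a classifier reports a class only when its predictive uncertainty is below a threshold and otherwise abstains so the case can be deferred to a clinician. The uncertainty score (normalized predictive entropy), the threshold value, and the names ABSTAIN and predict_or_abstain are illustrative assumptions, and the synthetic dataset stands in for real clinical data.

```python
# Minimal sketch of uncertainty-based abstention, assuming a probabilistic
# classifier and an entropy threshold chosen by the deployer.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

ABSTAIN = -1  # sentinel label meaning "I don't know" -- defer to a human


def predict_or_abstain(model, X, uncertainty_threshold=0.25):
    """Return class predictions, replacing uncertain ones with ABSTAIN.

    Uncertainty here is the predictive entropy of the class probabilities,
    normalized to [0, 1]; other scores (ensemble disagreement, conformal
    p-values, etc.) could be substituted without changing the structure.
    """
    probs = model.predict_proba(X)                       # (n_samples, n_classes)
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    entropy /= np.log(probs.shape[1])                    # normalize by max entropy
    preds = np.argmax(probs, axis=1)
    preds[entropy > uncertainty_threshold] = ABSTAIN     # abstain when too uncertain
    return preds


# Synthetic stand-in for a clinical prediction task.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
preds = predict_or_abstain(model, X_test)

answered = preds != ABSTAIN
print(f"abstained on {np.mean(~answered):.1%} of cases; "
      f"accuracy on answered cases: {np.mean(preds[answered] == y_test[answered]):.1%}")
```

In practice the threshold trades off coverage (how often the model answers) against accuracy on the answered cases, and would be set according to the clinical cost of a wrong automated prediction versus a deferral.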