Brian McCrindle, Katherine Zukotynski, Thomas E. Doyle, Michael D. Noseworthy
Department of Electrical and Computer Engineering (B.M., T.E.D., M.D.N.), Department of Radiology, Faculty of Health Sciences (K.Z., M.D.N.), and School of Biomedical Engineering (K.Z., T.E.D., M.D.N.), McMaster University, 1280 Main St W, Hamilton, ON, Canada L8S 4L8; and Vector Institute for Artificial Intelligence, Toronto, Canada (T.E.D.).
Radiol Artif Intell. 2021 Sep 15;3(6):e210031. doi: 10.1148/ryai.2021210031. eCollection 2021 Nov.
Recent advances in, and the availability of, computer hardware, software tools, and massive digital data archives have enabled the rapid development of artificial intelligence (AI) applications. Concerns over whether AI tools can "communicate" decisions to radiologists and primary care physicians are of particular importance, because automated clinical decisions can substantially affect patient outcomes. A challenge facing the clinical implementation of AI stems from the potential lack of trust clinicians have in these predictive models. This review expands on the existing literature on interpretability methods for deep learning and surveys state-of-the-art methods for predictive uncertainty estimation in computer-assisted segmentation tasks. Finally, we discuss how uncertainty can improve predictive performance and model interpretability and can act as a tool to help foster trust.

Keywords: Segmentation, Quantification, Ethics, Bayesian Network (BN)

© RSNA, 2021
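As a concrete illustration of the kind of predictive uncertainty estimation the review surveys (the abstract does not name a specific method), a widely used approximate Bayesian technique for segmentation is Monte Carlo dropout: dropout is left active at inference time and multiple stochastic forward passes are aggregated, with the per-pixel spread serving as an uncertainty map. The sketch below is hypothetical and not from the article; the model, function names (TinySegNet, mc_dropout_predict), and parameters are illustrative assumptions.

```python
# Illustrative sketch (not the article's method): Monte Carlo dropout for
# per-pixel predictive uncertainty in a binary segmentation model (PyTorch).
import torch
import torch.nn as nn


class TinySegNet(nn.Module):
    """A toy segmentation network with dropout layers (hypothetical)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(p=0.5),                 # kept active at test time
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(p=0.5),
            nn.Conv2d(16, 1, 1),                 # per-pixel foreground logit
        )

    def forward(self, x):
        return self.net(x)


@torch.no_grad()
def mc_dropout_predict(model, image, n_samples=30):
    """Run n stochastic forward passes with dropout enabled; return the mean
    foreground probability and its per-pixel standard deviation (a simple
    predictive-uncertainty map)."""
    model.train()  # enables dropout (would also affect batchnorm if present)
    probs = torch.stack(
        [torch.sigmoid(model(image)) for _ in range(n_samples)], dim=0
    )
    return probs.mean(dim=0), probs.std(dim=0)


if __name__ == "__main__":
    model = TinySegNet()
    scan = torch.randn(1, 1, 64, 64)  # stand-in for a single-channel slice
    mean_prob, uncertainty = mc_dropout_predict(model, scan)
    # High-uncertainty pixels flag regions where the automated segmentation
    # should be reviewed by a clinician rather than trusted outright.
    print(mean_prob.shape, uncertainty.max().item())
```

In this framing, the uncertainty map is exactly the kind of "communication" channel the abstract alludes to: rather than a bare segmentation mask, the model also tells the radiologist where its prediction is unreliable.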