Department of Electrical Engineering and Computer Science and Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, MA, USA; Vector Institute, Toronto, ON, Canada.
Australian Institute for Machine Learning, University of Adelaide, Adelaide, SA, Australia.
Lancet Digit Health. 2021 Nov;3(11):e745-e750. doi: 10.1016/S2589-7500(21)00208-9.
The black-box nature of current artificial intelligence (AI) has caused some to question whether AI must be explainable to be used in high-stakes scenarios such as medicine. It has been argued that explainable AI will engender trust with the health-care workforce, provide transparency into the AI decision making process, and potentially mitigate various kinds of bias. In this Viewpoint, we argue that this argument represents a false hope for explainable AI and that current explainability methods are unlikely to achieve these goals for patient-level decision support. We provide an overview of current explainability techniques and highlight how various failure cases can cause problems for decision making for individual patients. In the absence of suitable explainability methods, we advocate for rigorous internal and external validation of AI models as a more direct means of achieving the goals often associated with explainability, and we caution against having explainability be a requirement for clinically deployed models.
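The "current explainability techniques" referenced above are typically post-hoc methods such as saliency (heat) maps. As a minimal sketch of that family, the snippet below computes a vanilla gradient saliency map in PyTorch; the tiny model, input shape, and class count are illustrative assumptions, not anything described in the article.

```python
# Minimal sketch of a post-hoc saliency-map explanation (vanilla gradients).
# The model and data here are hypothetical stand-ins for a clinical imaging model.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Stand-in classifier; any differentiable image model would do."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(8 * 4 * 4, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def saliency_map(model: nn.Module, image: torch.Tensor, target: int) -> torch.Tensor:
    """Gradient of the target-class score with respect to input pixels.

    The magnitude of this gradient is what saliency-style explanations
    display as a heat map: it reflects local sensitivity of the score,
    not a faithful account of the model's reasoning for this patient.
    """
    model.eval()
    image = image.clone().requires_grad_(True)
    score = model(image)[0, target]
    score.backward()
    return image.grad.detach().abs().squeeze(0)

if __name__ == "__main__":
    torch.manual_seed(0)
    model = TinyCNN()
    x = torch.rand(1, 1, 28, 28)          # synthetic single-channel "scan"
    heat = saliency_map(model, x, target=1)
    print(heat.shape, float(heat.max()))  # per-pixel sensitivity values
```

The failure cases discussed in the Viewpoint arise precisely because such maps highlight where the output is sensitive, which clinicians may over-interpret as a clinically meaningful rationale for an individual patient.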