Okada Yohei, Ning Yilin, Ong Marcus Eng Hock
Health Services and Systems Research, Duke-NUS Medical School, Singapore.
Preventive Services, Graduate School of Medicine, Kyoto University, Kyoto, Japan.
Clin Exp Emerg Med. 2023 Dec;10(4):354-362. doi: 10.15441/ceem.23.145. Epub 2023 Nov 28.
Artificial intelligence (AI) and machine learning (ML) have the potential to revolutionize emergency medical care by enhancing triage systems, improving diagnostic accuracy, refining prognostication, and optimizing various aspects of clinical care. However, because clinicians often lack AI expertise, they may perceive AI as a "black box," leading to trust issues. To address this, "explainable AI," which conveys how an AI model reaches its outputs to end users, is important. This review presents the definitions, importance, and role of explainable AI, as well as potential challenges in emergency medicine. First, we introduce the terms explainability, interpretability, and transparency of AI models; these terms sound similar but play different roles in discussions of AI. Second, we indicate that explainable AI is required in clinical settings for reasons of justification, control, improvement, and discovery, and we provide examples. Third, we describe three major categories of explainability: pre-modeling explainability, interpretable models, and post-modeling explainability, and we present examples (especially for post-modeling explainability), such as visualization, simplification, text justification, and feature relevance. Last, we describe the challenges of implementing AI and ML models in clinical settings and highlight the importance of collaboration between clinicians, developers, and researchers. This review summarizes the concept of "explainable AI" for emergency medicine clinicians and may help them understand explainable AI in emergency contexts.
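As an illustrative sketch of the "feature relevance" category of post-modeling explainability mentioned above, the following example applies permutation importance, a model-agnostic feature-relevance method, to a classifier trained on synthetic data. The dataset and feature indices are invented for illustration and do not come from the review; clinical applications would instead use real triage or outcome variables.

```python
# Sketch of post-modeling explainability via feature relevance.
# Permutation importance: shuffle one feature at a time and measure the
# resulting drop in held-out accuracy; a larger drop indicates the model
# relied on that feature more. The data here are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic binary-outcome data standing in for, e.g., triage features.
X, y = make_classification(n_samples=500, n_features=5, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)

# Rank features by mean importance, most relevant first.
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature_{i}: {result.importances_mean[i]:.3f}")
```

Such a ranking gives clinicians a post hoc view of which inputs drove a model's predictions, which supports the "justification" and "control" roles of explainable AI discussed in the review.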