

Explainable artificial intelligence in emergency medicine: an overview.

Author information

Okada Yohei, Ning Yilin, Ong Marcus Eng Hock

Affiliations

Health Services and Systems Research, Duke-NUS Medical School, Singapore.

Preventive Services, Graduate School of Medicine, Kyoto University, Kyoto, Japan.

Publication information

Clin Exp Emerg Med. 2023 Dec;10(4):354-362. doi: 10.15441/ceem.23.145. Epub 2023 Nov 28.

DOI: 10.15441/ceem.23.145
PMID: 38012816
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10790070/
Abstract

Artificial intelligence (AI) and machine learning (ML) have potential to revolutionize emergency medical care by enhancing triage systems, improving diagnostic accuracy, refining prognostication, and optimizing various aspects of clinical care. However, as clinicians often lack AI expertise, they might perceive AI as a "black box," leading to trust issues. To address this, "explainable AI," which teaches AI functionalities to end-users, is important. This review presents the definitions, importance, and role of explainable AI, as well as potential challenges in emergency medicine. First, we introduce the terms explainability, interpretability, and transparency of AI models. These terms sound similar but have different roles in discussion of AI. Second, we indicate that explainable AI is required in clinical settings for reasons of justification, control, improvement, and discovery and provide examples. Third, we describe three major categories of explainability: pre-modeling explainability, interpretable models, and post-modeling explainability and present examples (especially for post-modeling explainability), such as visualization, simplification, text justification, and feature relevance. Last, we show the challenges of implementing AI and ML models in clinical settings and highlight the importance of collaboration between clinicians, developers, and researchers. This paper summarizes the concept of "explainable AI" for emergency medicine clinicians. This review may help clinicians understand explainable AI in emergency contexts.
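The abstract's fourth family of post-modeling explainability, feature relevance, can be sketched in a few lines. The example below is a hypothetical illustration, not taken from the review: the "triage-like" data, feature names, and coefficients are invented, and permutation importance stands in for the broader class of feature-relevance techniques the review surveys. It fits a plain logistic regression and then measures how much the test score degrades when each feature is shuffled.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "triage-like" data: three vital-sign features (all invented).
n = 1000
X = np.column_stack([
    rng.normal(80, 15, n),   # heart rate
    rng.normal(120, 20, n),  # systolic blood pressure
    rng.normal(37, 0.5, n),  # temperature
])
# The simulated outcome depends on heart rate and blood pressure only,
# so temperature is a deliberately uninformative feature.
logit = 0.08 * (X[:, 0] - 80) - 0.05 * (X[:, 1] - 120)
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

# Post-modeling explainability via feature relevance:
# shuffling an informative feature should degrade the held-out score,
# while shuffling an uninformative one should barely change it.
result = permutation_importance(model, X_te, y_te,
                                n_repeats=20, random_state=0)
feature_names = ["heart_rate", "systolic_bp", "temperature"]
ranking = sorted(zip(feature_names, result.importances_mean),
                 key=lambda t: -t[1])
print(ranking)
```

A clinician-facing explanation would then report the ranked features alongside the prediction; the review's point is that such post hoc summaries complement, rather than replace, intrinsically interpretable models.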


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/12d1/10790070/9d2572dde52a/ceem-23-145f1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/12d1/10790070/bf3fc16c34c8/ceem-23-145f2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/12d1/10790070/20e4d2bef7bf/ceem-23-145f3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/12d1/10790070/09234574de01/ceem-23-145f4.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/12d1/10790070/82d5157d415d/ceem-23-145f5.jpg

Similar articles

1
Explainable artificial intelligence in emergency medicine: an overview.
Clin Exp Emerg Med. 2023 Dec;10(4):354-362. doi: 10.15441/ceem.23.145. Epub 2023 Nov 28.
2
Explainable AI for Bioinformatics: Methods, Tools and Applications.
Brief Bioinform. 2023 Sep 20;24(5). doi: 10.1093/bib/bbad236.
3
The role of explainability in creating trustworthy artificial intelligence for health care: A comprehensive survey of the terminology, design choices, and evaluation strategies.
J Biomed Inform. 2021 Jan;113:103655. doi: 10.1016/j.jbi.2020.103655. Epub 2020 Dec 10.
4
Causability and explainability of artificial intelligence in medicine.
Wiley Interdiscip Rev Data Min Knowl Discov. 2019 Jul-Aug;9(4):e1312. doi: 10.1002/widm.1312. Epub 2019 Apr 2.
5
A manifesto on explainability for artificial intelligence in medicine.
Artif Intell Med. 2022 Nov;133:102423. doi: 10.1016/j.artmed.2022.102423. Epub 2022 Oct 9.
6
A review of explainable and interpretable AI with applications in COVID-19 imaging.
Med Phys. 2022 Jan;49(1):1-14. doi: 10.1002/mp.15359. Epub 2021 Dec 7.
7
Explainable artificial intelligence and machine learning: novel approaches to face infectious diseases challenges.
Ann Med. 2023;55(2):2286336. doi: 10.1080/07853890.2023.2286336. Epub 2023 Nov 27.
8
The false hope of current approaches to explainable artificial intelligence in health care.
Lancet Digit Health. 2021 Nov;3(11):e745-e750. doi: 10.1016/S2589-7500(21)00208-9.
9
Explainability as the key ingredient for AI adoption in Industry 5.0 settings.
Front Artif Intell. 2023 Dec 11;6:1264372. doi: 10.3389/frai.2023.1264372. eCollection 2023.
10
A Machine Learning Approach with Human-AI Collaboration for Automated Classification of Patient Safety Event Reports: Algorithm Development and Validation Study.
JMIR Hum Factors. 2024 Jan 25;11:e53378. doi: 10.2196/53378.

Cited by

1
Artificial Intelligence Applications in Emergency Toxicology: Advancements and Challenges.
J Med Internet Res. 2025 Aug 22;27:e73121. doi: 10.2196/73121.
2
Artificial Intelligence (AI) and Emergency Medicine: Balancing Opportunities and Challenges.
JMIR Med Inform. 2025 Aug 13;13:e70903. doi: 10.2196/70903.
3
Rethinking artificial intelligence in medicine: from tools to agents.
4
AI and machine learning in resuscitation: Ongoing research, new concepts, and key challenges.
Clin Exp Emerg Med. 2025 Jun;12(2):101-103. doi: 10.15441/ceem.25.125. Epub 2025 Jun 30.
5
Development and validation of a transformer model-based early warning score for real-time prediction of adverse outcomes in the emergency department.
Sci Rep. 2025 Jul 2;15(1):23021. doi: 10.1038/s41598-025-07511-7.
6
Machine learning innovations in CPR: a comprehensive survey on enhanced resuscitation techniques.
Artif Intell Rev. 2025;58(8):233. doi: 10.1007/s10462-025-11214-w. Epub 2025 May 5.
7
Using machine learning techniques for early prediction of tracheal intubation in patients with septic shock: a multi-center study in South Korea.
Acute Crit Care. 2025 May;40(2):221-234. doi: 10.4266/acc.004776. Epub 2025 Apr 30.
8
Large language models in critical care.
J Intensive Med. 2024 Dec 24;5(2):113-118. doi: 10.1016/j.jointm.2024.12.001. eCollection 2025 Apr.
9
Progress in the application of machine learning in CT diagnosis of acute appendicitis.
Abdom Radiol (NY). 2025 Mar 17. doi: 10.1007/s00261-025-04864-5.
10
Artificial intelligence applied to electrocardiogram to rule out acute myocardial infarction: the ROMIAE multicentre study.
Eur Heart J. 2025 May 21;46(20):1917-1929. doi: 10.1093/eurheartj/ehaf004.
11
Assessing Risk in Implementing New Artificial Intelligence Triage Tools-How Much Risk is Reasonable in an Already Risky World?
Asian Bioeth Rev. 2025 Jan 29;17(1):187-205. doi: 10.1007/s41649-024-00348-8. eCollection 2025 Jan.

References

1
A translational perspective towards clinical AI fairness.
NPJ Digit Med. 2023 Sep 14;6(1):172. doi: 10.1038/s41746-023-00918-4.
2
Outcome assessment for out-of-hospital cardiac arrest patients in Singapore and Japan with initial shockable rhythm.
Crit Care. 2023 Sep 12;27(1):351. doi: 10.1186/s13054-023-04636-x.
3
Resusc Plus. 2023 Jul 28;15:100435. doi: 10.1016/j.resplu.2023.100435. eCollection 2023 Sep.
4
Essential properties and explanation effectiveness of explainable artificial intelligence in healthcare: A systematic review.
Heliyon. 2023 May 8;9(5):e16110. doi: 10.1016/j.heliyon.2023.e16110. eCollection 2023 May.
5
Solving the explainable AI conundrum by bridging clinicians' needs and developers' goals.
NPJ Digit Med. 2023 May 22;6(1):94. doi: 10.1038/s41746-023-00837-4.
6
Current challenges in adopting machine learning to critical care and emergency medicine.
Clin Exp Emerg Med. 2023 Jun;10(2):132-137. doi: 10.15441/ceem.23.041. Epub 2023 May 15.
7
A universal AutoScore framework to develop interpretable scoring systems for predicting common types of clinical outcomes.
STAR Protoc. 2023 May 12;4(2):102302. doi: 10.1016/j.xpro.2023.102302.
8
Benefits, Limits, and Risks of GPT-4 as an AI Chatbot for Medicine.
N Engl J Med. 2023 Mar 30;388(13):1233-1239. doi: 10.1056/NEJMsr2214184.
9
Derivation of Coagulation Phenotypes and the Association with Prognosis in Traumatic Brain Injury: A Cluster Analysis of Nationwide Multicenter Study.
Neurocrit Care. 2024 Feb;40(1):292-302. doi: 10.1007/s12028-023-01712-6. Epub 2023 Mar 28.
10
AI-Generated Medical Advice-GPT and Beyond.
JAMA. 2023 Apr 25;329(16):1349-1350. doi: 10.1001/jama.2023.5321.