
Assessing Risk in Implementing New Artificial Intelligence Triage Tools-How Much Risk is Reasonable in an Already Risky World?

Author Information

Nord-Bronzyk Alexa, Savulescu Julian, Ballantyne Angela, Braunack-Mayer Annette, Krishnaswamy Pavitra, Lysaght Tamra, Ong Marcus E H, Liu Nan, Menikoff Jerry, Mertens Mayli, Dunn Michael

Affiliations

Centre for Biomedical Ethics, National University of Singapore, Singapore.

Uehiro Oxford Institute, University of Oxford, Oxford, UK.

Publication Information

Asian Bioeth Rev. 2025 Jan 29;17(1):187-205. doi: 10.1007/s41649-024-00348-8. eCollection 2025 Jan.

Abstract

Risk prediction in emergency medicine (EM) poses unique challenges owing to urgency, blurred research-practice distinctions, and the high-pressure environment of emergency departments (EDs). Artificial intelligence (AI) risk prediction tools have been developed with the aim of streamlining triage processes and mitigating perennial problems affecting EDs globally, such as overcrowding and delays. Implementation of these tools is complicated by the potential risks of over-triage and under-triage, by untraceable false positives, and by the possibility that healthcare professionals' biases toward technology lead to incorrect use of such tools. This paper explores the risks surrounding these issues through a case study of a machine learning triage tool in Singapore, the Score for Emergency Risk Prediction (SERP), which estimates mortality risk at presentation to the ED. After two successful retrospective studies demonstrated SERP's strong predictive accuracy, the researchers concluded that a pre-implementation randomised controlled trial (RCT) would not be feasible because of how the tool interacts with clinical judgement, which complicates the blinded arm of the trial. This led them to consider other methods of testing SERP's real-world capabilities, such as ongoing-evaluation studies. We discuss the outcomes of a risk-benefit analysis to argue that the proposed implementation strategy is ethically appropriate and aligns with improvement-focused and systemic approaches to implementation, especially the learning health systems (LHS) framework, to ensure safety, efficacy, and ongoing learning.

