Nord-Bronzyk Alexa, Savulescu Julian, Ballantyne Angela, Braunack-Mayer Annette, Krishnaswamy Pavitra, Lysaght Tamra, Ong Marcus E H, Liu Nan, Menikoff Jerry, Mertens Mayli, Dunn Michael
Centre for Biomedical Ethics, National University of Singapore, Singapore.
Uehiro Oxford Institute, University of Oxford, Oxford, UK.
Asian Bioeth Rev. 2025 Jan 29;17(1):187-205. doi: 10.1007/s41649-024-00348-8. eCollection 2025 Jan.
Risk prediction in emergency medicine (EM) holds unique challenges due to issues surrounding urgency, blurry research-practice distinctions, and the high-pressure environment in emergency departments (EDs). Artificial intelligence (AI) risk prediction tools have been developed with the aim of streamlining triaging processes and mitigating perennial issues affecting EDs globally, such as overcrowding and delays. The implementation of these tools is complicated by the potential risks associated with over-triage and under-triage, untraceable false positives, and the potential for healthcare professionals' biases toward technology to lead to incorrect use of such tools. This paper explores the risks surrounding these issues through an analysis of a case study involving a machine learning triage tool called the Score for Emergency Risk Prediction (SERP) in Singapore. This tool is used to estimate mortality risk at presentation to the ED. After two successful retrospective studies demonstrating SERP's strong predictive accuracy, researchers decided that a pre-implementation randomised controlled trial (RCT) would not be feasible because the tool's interaction with clinical judgement would complicate the blinded arm of the trial. This led them to consider other methods of testing SERP's real-world capabilities, such as ongoing-evaluation-type studies. We discuss the outcomes of a risk-benefit analysis to argue that the proposed implementation strategy is ethically appropriate and aligns with improvement-focused and systemic approaches to implementation, especially the learning health system (LHS) framework, to ensure safety, efficacy, and ongoing learning.