Department of Computer Science, University of York, Deramore Lane, Heslington, York YO10 5GH, England.
Bradford Teaching Hospitals NHS Foundation Trust, Bradford, England.
Bull World Health Organ. 2020 Apr 1;98(4):251-256. doi: 10.2471/BLT.19.237487. Epub 2020 Feb 25.
Current practices of accountability and safety worldwide have not yet adjusted to the prospect of patient harm caused by the decisions of an artificial intelligence-based clinical tool. We focus on two aspects of clinical artificial intelligence used for decision-making: moral accountability for harm to patients; and safety assurance to protect patients against such harm. Artificial intelligence-based tools are challenging the standard clinical practices of assigning blame and assuring safety. Human clinicians and safety engineers have weaker control over the decisions reached by artificial intelligence systems, and less knowledge and understanding of precisely how those systems reach their decisions. We illustrate this analysis by applying it to an artificial intelligence-based system developed for use in the treatment of sepsis. The paper ends with practical suggestions for mitigating these concerns. We argue that artificial intelligence developers and systems safety engineers must be included in assessments of moral accountability for patient harm. Meanwhile, none of the actors involved robustly fulfils the traditional conditions of moral accountability for the decisions of an artificial intelligence system; we should therefore update our conceptions of moral accountability in this context. We also need to move from a static to a dynamic model of assurance, accepting that considerations of safety cannot be fully resolved during the design of the artificial intelligence system, before the system has been deployed.