Ethox Centre, Nuffield Department of Population Health, University of Oxford, Old Road Campus, Headington, Oxford, OX3 7LF, UK.
Usher Institute, Old Medical School, University of Edinburgh, Teviot Place, Edinburgh, EH8 9AG, UK.
BMC Med Ethics. 2024 Jan 6;25(1):6. doi: 10.1186/s12910-023-00990-1.
Given that AI-driven decision support systems (AI-DSS) are intended to assist in medical decision making, it is essential that clinicians are willing to incorporate AI-DSS into their practice. This study takes as its case study AI-driven cardiotocography (CTG), a type of AI-DSS, in the context of intrapartum care. Focusing on the perspectives of obstetricians and midwives on the ethical and trust-related issues of incorporating AI-driven tools into their practice, this paper explores the conditions that AI-driven CTG must fulfill for clinicians to feel justified in incorporating this assistive technology into their decision making about interventions in labor.
This study is based on semi-structured interviews conducted online with eight obstetricians and five midwives based in England. Participants were asked about their current decision-making processes for when to intervene in labor, how AI-driven CTG might enhance or disrupt these processes, and what it would take for them to trust this kind of technology. Interviews were transcribed verbatim and analyzed thematically. NVivo software was used to organize the thematic codes that recurred across interviews and to identify the issues that mattered most to participants. These recurring topics and themes form the basis of the analysis and conclusions of this paper.
Four major themes emerged from our interviews with obstetricians and midwives regarding the conditions that AI-driven CTG must fulfill: (1) the importance of accurate and efficient risk assessments; (2) the capacity for personalization and individualized medicine; (3) the relative insignificance of which type of institution develops the technology; and (4) the need for transparency in the development process.
Clinicians deem accuracy, efficiency, the capacity for personalization, transparency, and clear evidence of improved outcomes to be necessary conditions for AI-DSS to be considered reliable and therefore worthy of incorporation into the decision-making process. Importantly, healthcare professionals regarded themselves as the epistemic authorities in the clinical context and as the bearers of responsibility for delivering appropriate care. What mattered to them, therefore, was being able to evaluate the reliability of AI-DSS on their own terms and to have confidence in implementing them in their practice.