Clinicians' roles and necessary levels of understanding in the use of artificial intelligence: A qualitative interview study with German medical students.

Affiliations

Institute for Ethics, History and Philosophy of Medicine, Hannover Medical School (MHH), Carl-Neuberg-Str. 1, 30625, Hannover, Germany.

Institute for Ethics and History of Medicine, Eberhard Karls University Tübingen, Gartenstr. 47, 72074, Tübingen, Germany.

Publication Information

BMC Med Ethics. 2024 Oct 7;25(1):107. doi: 10.1186/s12910-024-01109-w.

Abstract

BACKGROUND

Artificial intelligence-driven Clinical Decision Support Systems (AI-CDSS) are being increasingly introduced into various domains of health care for diagnostic, prognostic, therapeutic and other purposes. A significant part of the discourse on ethically appropriate conditions relates to the levels of understanding and explicability needed to ensure responsible clinical decision-making when using AI-CDSS. Empirical evidence on stakeholders' viewpoints on these issues remains scarce. The present study complements the empirical-ethical body of research by, on the one hand, investigating the requirements for understanding and explicability in depth with regard to the rationale behind them and, on the other hand, surveying medical students at the end of their studies as stakeholders for whom little data is available so far, but for whom AI-CDSS will be an important part of their medical practice.

METHODS

Fifteen semi-structured qualitative interviews (each lasting an average of 56 min) were conducted with German medical students to investigate their perspectives and attitudes towards the use of AI-CDSS. The problem-centred interviews drew on two hypothetical case vignettes of AI-CDSS employed in nephrology and surgery. Interviewees' perceptions and convictions regarding their own clinical role and responsibilities in dealing with AI-CDSS were elicited, as were their viewpoints on explicability and the level of understanding and competencies needed on the clinicians' side. The qualitative data were analysed according to key principles of qualitative content analysis (Kuckartz).

RESULTS

In response to the central question about the necessary understanding of AI-CDSS tools and how their outputs emerge, as well as the rationale behind the requirements placed on them, two types of argumentation could be differentiated inductively from the interviewees' statements. The first type, the clinician as a systemic trustee (or "the one relying"), highlights that there must be empirical evidence and adequate approval processes that guarantee minimised harm and a clinical benefit from the employment of an AI-CDSS. Once these requirements are proven, the use of an AI-CDSS would be appropriate, since according to "the one relying", clinicians should choose those measures that statistically cause the least harm. The second type, the clinician as an individual expert (or "the one controlling"), sets higher prerequisites that go beyond empirical evidence and adequate approval processes. These prerequisites concern the clinician's necessary level of competence and understanding of how a specific AI-CDSS works and how to use it properly in order to evaluate its outputs and mitigate potential risks for the individual patient. Both types are united in their high esteem for evidence-based clinical practice and the need to communicate with the patient about the use of medical AI. However, the interviewees' differing conceptions of the clinician's role and responsibilities lead them to different requirements regarding the clinician's understanding and the explicability of an AI-CDSS beyond the proof of benefit.

CONCLUSIONS

The study results highlight two different types among (future) clinicians regarding their view of the necessary levels of understanding and competence. These findings should inform the debate on appropriate training programmes and professional standards (e.g. clinical practice guidelines) that enable the safe and effective clinical employment of AI-CDSS in various clinical fields. While current approaches search for appropriate minimum requirements of understanding and competence, the differences between (future) clinicians in their information and understanding needs described here may lead to more differentiated solutions.


