Perspectives of Oncologists on the Ethical Implications of Using Artificial Intelligence for Cancer Care.

Affiliations

Division of Population Sciences, Dana-Farber Cancer Institute, Boston, Massachusetts.

Harvard Medical School, Boston, Massachusetts.

Publication Information

JAMA Netw Open. 2024 Mar 4;7(3):e244077. doi: 10.1001/jamanetworkopen.2024.4077.

Abstract

IMPORTANCE

Artificial intelligence (AI) tools are rapidly integrating into cancer care. Understanding stakeholder views on ethical issues associated with the implementation of AI in oncology is critical to optimal deployment.

OBJECTIVE

To evaluate oncologists' views on the ethical domains of the use of AI in clinical care, including familiarity, predictions, explainability (the ability to explain how a result was determined), bias, deference, and responsibilities.

DESIGN, SETTING, AND PARTICIPANTS

This cross-sectional, population-based survey study was conducted from November 15, 2022, to July 31, 2023, among 204 US-based oncologists identified using the National Plan & Provider Enumeration System.

MAIN OUTCOMES AND MEASURES

The primary outcome was response to a question asking whether participants agreed or disagreed that patients need to provide informed consent for AI model use during cancer treatment decisions.

RESULTS

Of 387 surveys, 204 were completed (response rate, 52.7%). Participants represented 37 states, 120 (63.7%) identified as male, 128 (62.7%) as non-Hispanic White, and 60 (29.4%) were from academic practices; 95 (46.6%) had received some education on AI use in health care, and 45.3% (92 of 203) reported familiarity with clinical decision models. Most participants (84.8% [173 of 204]) reported that AI-based clinical decision models needed to be explainable by oncologists to be used in the clinic; 23.0% (47 of 204) stated they also needed to be explainable by patients. Patient consent for AI model use during treatment decisions was supported by 81.4% of participants (166 of 204). When presented with a scenario in which an AI decision model selected a different treatment regimen than the oncologist planned to recommend, the most common response was to present both options and let the patient decide (36.8% [75 of 204]); respondents from academic settings were more likely than those from other settings to let the patient decide (OR, 2.56; 95% CI, 1.19-5.51). Most respondents (90.7% [185 of 204]) reported that AI developers were responsible for the medico-legal problems associated with AI use. Some agreed that this responsibility was shared by physicians (47.1% [96 of 204]) or hospitals (43.1% [88 of 204]). Finally, most respondents (76.5% [156 of 204]) agreed that oncologists should protect patients from biased AI tools, but only 27.9% (57 of 204) were confident in their ability to identify poorly representative AI models.

CONCLUSIONS AND RELEVANCE

In this cross-sectional survey study, few oncologists reported that patients needed to understand AI models, but most agreed that patients should consent to their use, and many tasked patients with choosing between physician- and AI-recommended treatment regimens. These findings suggest that the implementation of AI in oncology must include rigorous assessments of its effect on care decisions as well as decisional responsibility when problems related to AI use arise.

Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/43d5/10979310/348eb95c4046/jamanetwopen-e244077-g001.jpg
